Search results for: stochastic averaging method
17594 Static and Dynamic Analysis of Timoshenko Microcantilever Using the Finite Element Method
Authors: Mohammad Tahmasebipour, Hosein Salarpour
Abstract:
Microcantilevers are among the components used in the manufacture of micro-electromechanical systems. Epoxy microcantilevers have a variety of applications in the manufacture of microsensors and microactuators. In this paper, a Timoshenko microcantilever was statically and dynamically analyzed using the finite element method. First, all boundary conditions and initial conditions governing microcantilevers were considered. The effect of size on the deflection, angle of rotation, natural frequencies, and mode shapes was then analyzed and evaluated under different frequencies. It was observed that an increased microcantilever thickness reduces the deflection, rotation, and resonant frequency. Good agreement was observed between our results and those obtained by the couple stress theory, the classical theory, and the strain gradient elasticity theory.
Keywords: microcantilever, microsensor, epoxy, dynamic behavior, static behavior, finite element method
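The modal analysis described above can be sketched with the simpler Euler-Bernoulli cantilever element (the paper uses Timoshenko elements, which additionally account for shear deformation and rotary inertia); all material and geometric values below are illustrative, not taken from the paper:

```python
import numpy as np

def cantilever_frequencies(E, I, rho, A, L, n_elem=20):
    """Natural frequencies (rad/s) of a clamped-free beam via Euler-Bernoulli FEM.

    A minimal sketch: the paper's Timoshenko formulation adds shear terms
    to these element matrices, but the assembly and eigenanalysis are alike.
    """
    le = L / n_elem
    # Standard 2-node beam element matrices (dofs: w1, theta1, w2, theta2).
    k = E * I / le**3 * np.array([
        [12, 6 * le, -12, 6 * le],
        [6 * le, 4 * le**2, -6 * le, 2 * le**2],
        [-12, -6 * le, 12, -6 * le],
        [6 * le, 2 * le**2, -6 * le, 4 * le**2]])
    m = rho * A * le / 420 * np.array([
        [156, 22 * le, 54, -13 * le],
        [22 * le, 4 * le**2, 13 * le, -3 * le**2],
        [54, 13 * le, 156, -22 * le],
        [-13 * le, -3 * le**2, -22 * le, 4 * le**2]])
    ndof = 2 * (n_elem + 1)
    K = np.zeros((ndof, ndof)); M = np.zeros((ndof, ndof))
    for e in range(n_elem):
        idx = slice(2 * e, 2 * e + 4)
        K[idx, idx] += k
        M[idx, idx] += m
    # Clamp the left end: drop the first two dofs (deflection and rotation).
    K, M = K[2:, 2:], M[2:, 2:]
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sqrt(np.sort(eigvals.real))

# Illustrative steel microbeam-like inputs (assumed, not from the paper).
w = cantilever_frequencies(E=210e9, I=8.33e-13, rho=7800, A=1e-6, L=0.1)
```

The first entry of `w` should approach the analytical cantilever result (1.8751)^2 * sqrt(EI / (rho*A*L^4)) as the mesh is refined.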
Procedia PDF Downloads 416
17593 Speech Emotion Recognition with Bi-GRU and Self-Attention based Feature Representation
Authors: Bubai Maji, Monorama Swain
Abstract:
Speech is considered an essential and most natural medium for interaction between machines and humans. However, extracting effective features for speech emotion recognition (SER) remains challenging. Existing studies capture temporal information, but high-level temporal-feature learning has yet to be investigated. In this paper, we present a novel, efficient method that uses a self-attention (SA) mechanism in combination with a Convolutional Neural Network (CNN) and a Bi-directional Gated Recurrent Unit (Bi-GRU) network to learn high-level temporal features. To further enhance the representation of these features, we integrate the Bi-GRU output with learnable weights through SA, improving performance. We evaluate our proposed method on our self-built SITB-OSED database and on IEMOCAP. The experimental results show that our proposed method achieves state-of-the-art performance on both databases.
Keywords: Bi-GRU, 1D-CNNs, self-attention, speech emotion recognition
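The SA step over recurrent outputs can be sketched as plain scaled dot-product self-attention on a sequence of hidden states (a minimal, unparameterized version: the learnable projection weights of the paper's full model are omitted here):

```python
import numpy as np

def self_attention(H):
    """Scaled dot-product self-attention over hidden states H of shape (T, d).

    H stands in for Bi-GRU frame outputs; each output row is a softmax-weighted
    (convex) combination of all time steps, emphasizing emotion-salient frames.
    """
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)                  # (T, T) similarity scores
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)              # softmax over time steps
    return A @ H                                   # attention-weighted states

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 8))    # 50 frames of 8-dim features (synthetic)
Z = self_attention(H)
```

In the full pipeline these weighted states would be pooled and fed to a classifier; here the point is only the attention mechanics.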
Procedia PDF Downloads 113
17592 Numerical Investigation of Incompressible Turbulent Flows by Method of Characteristics
Authors: Ali Atashbar Orang, Carlo Massimo Casciola
Abstract:
A novel numerical approach for steady incompressible turbulent flows is presented in this paper. The artificial compressibility method (ACM) is applied to the Reynolds-Averaged Navier-Stokes (RANS) equations, and a new Characteristic-Based Turbulent (CBT) scheme is developed for the convective fluxes. The well-known Spalart–Allmaras turbulence model is employed to check the effectiveness of the new scheme. Compared with schemes from previous studies, the present CBT scheme demonstrates accurate results, high stability, and faster convergence. In addition, local time stepping and implicit residual smoothing are applied as convergence acceleration techniques. Turbulent flows past a backward-facing step, a circular cylinder, and a NACA0012 hydrofoil are studied as benchmarks. Results compare favorably with those of other available schemes.
Keywords: incompressible turbulent flow, method of characteristics, finite volume, Spalart–Allmaras turbulence model
Procedia PDF Downloads 412
17591 Implementation of Real-Time Multiple Sound Source Localization and Separation
Authors: Jeng-Shin Sheu, Qi-Xun Zheng
Abstract:
This paper discusses a method of separating speech using a microphone array without knowing the number or directions of the sound sources. In recent years there have been many studies on separating signals by masking, but most such methods must operate under the condition of a known number of sound sources and therefore cannot be used in real-time applications. Our method uses the circular integrated cross-spectrum to estimate the statistical histogram distribution of the direction of arrival (DOA), from which the number of sound sources in the mixed signal and their directions are obtained. In calculating the parameters of the circular integrated cross-spectrum, the phase of the cross-power spectrum and the phase rotation factors computed for each microphone pair are used. For speech separation, a DOA weighting and masking method assigns a sound source direction to each T-F unit (time-frequency point). The weight corresponding to each T-F unit strengthens the contribution of its sound source and reduces the influence of the remaining sources, thereby achieving voice separation.
Keywords: real-time, spectrum analysis, sound source localization, sound source separation
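The cross-power-spectrum phase idea underlying the circular integrated cross-spectrum can be illustrated for a single microphone pair with the classical GCC-PHAT delay estimator (a simplified stand-in for the paper's multi-pair procedure; signal values below are synthetic):

```python
import numpy as np

def gcc_phat_delay(a, b, fs):
    """Estimate the time delay of signal a relative to b via GCC-PHAT.

    The cross-power spectrum A * conj(B) is whitened so only its phase
    remains; the peak of the inverse transform gives the TDOA, which maps
    to a DOA once the microphone spacing is known.
    """
    n = 2 * len(a)
    A = np.fft.rfft(a, n)
    B = np.fft.rfft(b, n)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                         # PHAT: keep phase only
    cc = np.fft.irfft(R, n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))   # center zero lag
    lag = np.argmax(np.abs(cc)) - n // 2
    return lag / fs

# Synthetic check: y is x delayed by 5 samples.
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = np.concatenate((np.zeros(5), x[:-5]))
tau = gcc_phat_delay(y, x, fs)                     # delay of y relative to x
```

Repeating this over all pairs and histogramming the implied angles is the spirit of the DOA histogram described above.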
Procedia PDF Downloads 155
17590 Research on Measuring Operational Risk in Commercial Banks Based on Internal Control
Authors: Baobao Li
Abstract:
Operational risk covers all operations of commercial banks and has a close relationship with a bank's internal control. In commercial banks' management practice, however, internal control is always separated from operational risk measurement. With the increase in operational risk events in recent years, operational risk has drawn more and more attention from regulators and bank management. This paper first discusses the relationship between internal control and operational risk management and uses a CVaR-POT model to measure operational risk; it then puts forward a modified measurement method that uses operational risk assessment results to adjust the measurement results of the CVaR-POT model. The paper also analyzes the necessity and rationality of this method. The method takes into consideration the influence of internal control, improves the accuracy and effectiveness of operational risk measurement, and saves economic capital for commercial banks, avoiding the drawbacks of relying one-sidedly on mainstream models.
Keywords: commercial banks, internal control, operational risk, risk measurement
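The POT half of a CVaR-POT calculation can be sketched as follows: losses above a threshold are fitted with a Generalized Pareto Distribution, from which VaR and CVaR follow in closed form. This is a minimal sketch using a method-of-moments GPD fit on synthetic data; the paper's exact estimation procedure and the internal-control adjustment are not reproduced:

```python
import numpy as np

def pot_var_cvar(losses, u, q=0.999):
    """Peaks-over-threshold VaR and CVaR at confidence level q.

    Exceedances over threshold u are modelled as GPD(xi, sigma), fitted
    here by the method of moments for simplicity.
    """
    losses = np.asarray(losses, float)
    exc = losses[losses > u] - u              # exceedances over the threshold
    m, v = exc.mean(), exc.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)              # GPD shape (moment estimator)
    sigma = 0.5 * m * (1.0 + m * m / v)       # GPD scale
    n, n_u = len(losses), len(exc)
    var = u + sigma / xi * ((n / n_u * (1.0 - q)) ** (-xi) - 1.0)
    cvar = var / (1.0 - xi) + (sigma - xi * u) / (1.0 - xi)
    return var, cvar

rng = np.random.default_rng(1)
losses = rng.pareto(3.0, 50_000)              # heavy-tailed synthetic losses
var, cvar = pot_var_cvar(losses, u=np.quantile(losses, 0.95))
```

For this Pareto(3) sample the true 99.9% loss quantile is 9, and the POT estimate lands nearby; CVaR always exceeds VaR when the fitted shape is below 1.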
Procedia PDF Downloads 398
17589 Identification of Soft Faults in Branched Wire Networks by Distributed Reflectometry and Multi-Objective Genetic Algorithm
Authors: Soumaya Sallem, Marc Olivas
Abstract:
This contribution presents a method for detecting, locating, and characterizing soft faults in a complex wired network. The proposed method is based on MCTDR (Multi-Carrier Time Domain Reflectometry) combined with a multi-objective genetic algorithm. In order to ensure complete network coverage and eliminate diagnosis ambiguities, the MCTDR test signal is injected at several points on the network, and the data from the different reflectometers (sensors) distributed over the network are merged. An adapted multi-objective genetic algorithm merges the data in order to obtain more accurate fault location and characterization. The performance of the proposed method is evaluated from numerical and experimental results.
Keywords: wired network, reflectometry, network distributed diagnosis, multi-objective genetic algorithm
Procedia PDF Downloads 194
17588 Discontinuous Galerkin Method for Higher-Order Ordinary Differential Equations
Authors: Helmi Temimi
Abstract:
In this paper, we study the superconvergence properties of the discontinuous Galerkin (DG) method applied to one-dimensional mth-order ordinary differential equations without introducing auxiliary variables. We found that the nth derivative of the DG solution exhibits an optimal O(h^(p+1-n)) convergence rate in the L2-norm when piecewise polynomials of degree p ≥ 1 are used. We further found that the odd derivatives and the even derivatives are superconvergent at the upwind and downwind endpoints, respectively.
Keywords: discontinuous Galerkin, superconvergence, higher-order, error estimates
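Rates like the O(h^(p+1-n)) result above are typically verified numerically by comparing errors on successively refined meshes; the observed order between two refinements is log(e_k/e_{k+1}) / log(h_k/h_{k+1}). A small sketch with synthetic errors (the DG solver itself is not reproduced here):

```python
import math

def observed_order(h, err):
    """Convergence order estimated between consecutive mesh refinements."""
    return [math.log(err[k] / err[k + 1]) / math.log(h[k] / h[k + 1])
            for k in range(len(h) - 1)]

# Synthetic errors behaving like C*h^3, e.g. solution values (n = 0) with p = 2.
h = [0.1, 0.05, 0.025]
err = [4e-3 * s**3 for s in h]
rates = observed_order(h, err)   # each entry should be 3 = p + 1 - n
```

In a real study, `err` would hold the computed L2-norm errors of the nth derivative of the DG solution.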
Procedia PDF Downloads 478
17587 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions
Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani
Abstract:
The kernel distribution function estimator (KDFE) is the most popular method for nonparametric estimation of the cumulative distribution function; the kernel and the bandwidth are its most important components. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE), and the asymptotically optimal bandwidth for the new estimator are derived. We also propose a new data-based method, based on the plug-in technique from density estimation, to select the bandwidth for the new estimator. We evaluate the new estimator and the new technique using simulations and real-life data.
Keywords: estimation, bandwidth, mean square error, cumulative distribution function
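The baseline estimator being generalized above is F_hat(x) = (1/n) * sum_i W((x - X_i)/h), where W is the integrated kernel. A minimal sketch with a single Gaussian kernel (the paper's contribution replaces this single kernel with a linear combination; bandwidth value below is illustrative):

```python
import numpy as np
from math import erf, sqrt

def kdfe(x, data, h):
    """Kernel distribution function estimate at point x.

    Uses the standard normal CDF as the integrated kernel W.
    """
    z = (x - np.asarray(data)) / h
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))  # normal CDF values
    return Phi.mean()

rng = np.random.default_rng(0)
sample = rng.standard_normal(5000)
# At x = 0 the true N(0,1) CDF is 0.5; the estimate should be close.
est = kdfe(0.0, sample, h=0.3)
```

A linear-combination version would average several such W terms with different kernels and weights, which is where the MISE/AMISE analysis of the paper comes in.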
Procedia PDF Downloads 581
17586 Further Results on Modified Variational Iteration Method for the Analytical Solution of Nonlinear Advection Equations
Authors: A. W. Gbolagade, M. O. Olayiwola, K. O. Kareem
Abstract:
In this paper, following our recent results on the solution of nonlinear advection equations, we present further results on nonlinear nonhomogeneous advection equations using a modified variational iteration method.
Keywords: Lagrange multiplier, non-homogeneous equations, advection equations, mathematics
Procedia PDF Downloads 301
17585 Hydraulic Performance of Curtain Wall Breakwaters Based on Improved Moving Particle Semi-Implicit Method
Authors: Iddy Iddy, Qin Jiang, Changkuan Zhang
Abstract:
This paper addresses the hydraulic performance of curtain wall breakwaters as a coastal protection structure, based on particle-method modelling. A curtain wall acts as a wave barrier: a large part of the incident waves is reflected by the vertical wall, a part is transmitted, and a further part of the wave energy is dissipated by the eddy flows formed beneath the lower end of the plate. As a Lagrangian particle method with a robust capability for numerical representation, the Moving Particle Semi-implicit (MPS) method has proven useful for the design of structures involving free-surface hydrodynamic flow, such as wave breaking and overtopping. In this study, a vertical two-dimensional numerical model for the simulation of the violent flow associated with the interaction between curtain-wall breakwaters and progressive water waves is developed by the MPS method, in which a higher-precision pressure gradient model and a free-surface particle recognition model are proposed. The wave transmission, reflection, and energy dissipation of the vertical wall were examined experimentally and theoretically. With the numerical wave flume based on the particle method, detailed velocity and pressure fields around the curtain wall under wave action can be computed at each calculation step, and the effect of different wave and structural parameters on the hydrodynamic characteristics was investigated. The simulated temporal profiles and distributions of velocity and pressure in the vicinity of the curtain-wall breakwaters are also compared with experimental data. The numerical investigation indicated that the incident wave is largely reflected from the structure, while large eddies and turbulent flows occurring beneath the curtain wall cause substantial energy losses.
The improved MPS method shows good agreement between the numerical results and the analytical/experimental data of related research. It is thus verified that the improved pressure gradient model and free-surface particle recognition method enhance the stability and accuracy of the MPS model for water waves and marine structures. With further study, the particle (MPS) method can therefore achieve a level of correctness appropriate for engineering applications.
Keywords: curtain wall breakwaters, free surface flow, hydraulic performance, improved MPS method
Procedia PDF Downloads 149
17584 Open Circuit MPPT Control Implemented for PV Water Pumping System
Authors: Rabiaa Gammoudi, Najet Rebei, Othman Hasnaoui
Abstract:
Photovoltaic systems use different Maximum Power Point Tracking (MPPT) techniques to provide the highest possible power to the load regardless of variations in climatic conditions. In this paper, the proposed method is the Open Circuit (OC) method under sudden and random variations of insolation. The simulation results for the water pumping system controlled by the OC method are validated experimentally in real time on a test bench composed of a centrifugal pump powered by a PV generator (PVG) via a boost chopper that matches the source to the load. The output of the DC/DC converter supplies the LOWARA motor pump through a DC/AC inverter. The control part is provided by a computer incorporating a DS1104 card running in the Matlab/Simulink environment for visualization and data acquisition. The results clearly show the effectiveness of our control, with very good performance, and demonstrate the usefulness of the developed algorithm in countering the degradation of PVG performance under varying climatic factors, with a very good yield.
Keywords: PV water pumping system (PVWPS), maximum power point tracking (MPPT), open circuit method (OC), boost converter, DC/AC inverter
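The OC (fractional open-circuit voltage) idea can be sketched in a few lines: the array is periodically disconnected, its open-circuit voltage Voc is measured, and the operating voltage is regulated to a fixed fraction of it via the converter duty cycle. The fraction k = 0.76 below is a typical illustrative value for silicon cells, not a figure taken from the paper:

```python
def open_circuit_mppt(v_oc, k=0.76):
    """Open-circuit MPPT reference: V_mpp is approximated as k * V_oc.

    k is an empirical constant (roughly 0.71-0.8 for silicon; assumed here).
    """
    return k * v_oc

def boost_duty_cycle(v_in, v_out):
    """Ideal boost-converter duty cycle D from V_out = V_in / (1 - D)."""
    return 1.0 - v_in / v_out

v_ref = open_circuit_mppt(21.0)      # measured Voc = 21 V (illustrative)
d = boost_duty_cycle(v_ref, 48.0)    # duty cycle to hold V_ref on a 48 V bus
```

In the real system the duty cycle would be driven by a closed-loop regulator around this reference rather than computed open-loop.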
Procedia PDF Downloads 454
17583 The Optimal Order Policy for the Newsvendor Model under Worker Learning
Authors: Sunantha Teyarachakul
Abstract:
We consider the worker-learning newsvendor model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the newsvendor model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it applies when product demand is stochastic and limited to a single selling season, and when the vendor has only one opportunity to purchase, possibly with long ordering lead times. Our work differs from the classical newsvendor model in that we incorporate the human factor (specifically, worker learning) and its influence on unit processing costs into the model, which we describe using the well-known Wright's learning curve. Most assumptions of the classical newsvendor model are maintained in our work, such as constant per-unit costs of leftover and shortage, zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity of the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal: when the cost savings from worker learning are added to the expected total cost, the convexity of the cost function is not necessarily maintained. This calls for a new way of determining the optimal order policy. In response, we identified a number of characteristics of the expected cost function and its derivatives, which we then used to formulate the optimal ordering policy.
Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a uniform distribution; if the demand follows a beta distribution with certain properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of the periodic review system for similar problems.
Keywords: inventory management, Newsvendor model, order policy, worker learning
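The classical balance being generalized above is the critical-fractile solution, which for uniform demand has a closed form. A minimal sketch (without the worker-learning term that breaks convexity in the paper's model; cost figures are illustrative):

```python
def newsvendor_uniform(c_under, c_over, lo, hi):
    """Classical newsvendor quantity for Uniform(lo, hi) demand.

    The optimal order equates the demand CDF to the critical fractile
    c_under / (c_under + c_over), balancing shortage and leftover costs.
    """
    fractile = c_under / (c_under + c_over)
    return lo + fractile * (hi - lo)

# Shortage costs 4 per unit, leftovers cost 1: order at the 0.8 fractile.
q = newsvendor_uniform(4.0, 1.0, 100.0, 200.0)
```

With learning-induced cost savings added to the objective, this fractile condition is no longer sufficient, which is precisely the complication the paper addresses.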
Procedia PDF Downloads 416
17582 Direct Blind Separation Methods for Convolutive Images Mixtures
Authors: Ahmed Hammed, Wady Naanaa
Abstract:
In this paper, we propose a general approach to the problem of a convolutive mixture of images. We use a direct blind source separation method and add only one non-statistical, justified constraint describing the relationship between the different mixing matrices, in order to make the resolution easy. Provided that this constraint is known, the method can be applied to degraded documents affected by the overlapping of text patterns and images. Such degradation is due to chemical and physical reactions of the materials (paper, inks, etc.) occurring during document aging, and to other unpredictable causes such as humidity, microorganism infestation, and human handling. We demonstrate that this problem corresponds to a convolutive mixture of images and then validate our method through numerical examples. We can thus obtain clear images from unreadable ones caused by page superposition, a phenomenon found quite often in archival documents.
Keywords: blind source separation, convoluted mixture, degraded documents, text-patterns overlapping
Procedia PDF Downloads 322
17581 The Effect of Using Universal Design for Learning to Improve the Quality of Vocational Programme with Intellectual Disabilities and the Challenges Facing This Method from the Teachers' Point of View
Authors: Ohud Adnan Saffar
Abstract:
This study aims to determine the effect of using universal design for learning (UDL) on the quality of vocational programmes for students with intellectual disabilities (SID), and the challenges facing this method from the teachers' point of view. Significance of the study: there are comparatively few published studies on UDL in emerging nations, so this study will encourage researchers to consider new teaching approaches, and its development will contribute significant information on the cognitively disabled community on a universal scope. To collect and evaluate the data and verify the results, this study used a mixed-methods design with a two-group comparison. To answer the study questions, we used a questionnaire, observation lists, open questions, and pre- and post-tests. The study thus explored the advantages and drawbacks of the UDL method and its impact on integrating SID with students without special education needs in the same classroom. These aims were realized by developing a workshop explaining the three principles of UDL and training 16 teachers in how to apply this method to teach 12 students without special education needs and 12 SID in the same classroom, then gathering their opinions through the questionnaire and open questions. Finally, this research explores the effects of UDL on the teaching of professional photography skills to SID in Saudi Arabia. To achieve this goal, the research method compared the performance of SID taught with the UDL method against that of female students with the same challenges taught with other strategies by teachers in control and experimental groups, using observation lists and pre- and post-tests.
Initial results: It is clear from the participants' responses that most answers confirmed that the use of UDL achieves the principle of inclusion between SID and students without special education needs, at 93.8%. In addition, the results show that the majority of the sample sees the most important advantage of using UDL in teaching as creating an interactive environment with new and varied teaching methods, at 56.2%. Next, UDL is seen as useful for integration into general education, at 31.2%. The findings also indicate improved understanding through the use of new technology and the replacement of traditional teaching methods with new ones, at 25%. Regarding financial obstacles, the majority sees the cost as high and computer maintenance as unavailable, at 50%, and reports that schools lack the smart devices needed to implement the programme, at 43.8%.
Keywords: universal design for learning, intellectual disabilities, vocational programme, the challenges facing this method
Procedia PDF Downloads 129
17580 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures
Authors: Filippo Ranalli, Forest Flager, Martin Fischer
Abstract:
This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize total installed cost, including material, fabrication, and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture of the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks a set of discrete member sizes that yields the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To assess cost accurately, the connection details for the structure are generated automatically using site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied at each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, resulting in average cost savings of up to 30% with comparable computational efficiency.
Keywords: cost-based structural optimization, cost-based topology and sizing optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures
Procedia PDF Downloads 341
17579 The Formation of Thin Copper Films on Graphite Surface Using Magnetron Sputtering Method
Authors: Zydrunas Kavaliauskas, Aleksandras Iljinas, Liutauras Marcinauskas, Mindaugas Milieska, Vitas Valincius
Abstract:
The magnetron sputtering deposition method is often used to obtain thin-film coatings. The main advantages of magnetron vaporization over other deposition methods are the high erosion rate of the cathode material (e.g., copper, aluminum) and the ability to operate under low-pressure conditions. The structure of the formed coatings depends on the working parameters of the magnetron deposition system, which makes it possible to influence the properties of the growing film, such as morphology, crystal orientation and dimensions, stresses, and adhesion. The properties of these coatings depend on the distance between the substrate and the magnetron surface, the vacuum depth, the gas used, etc. Using this deposition technology, substrates are most often placed near the anode. The magnetic trap of the magnetron, which localizes electrons in the cathode region, is formed by a permanent magnet system on the cathode side. The scientific literature suggests that inserting a small amount of copper into graphite increases the electronic conductivity of the graphite. The aim of this work is to create thin (up to 300 nm) layers on a graphite surface using the magnetron evaporation method, and to investigate the formation peculiarities and microstructure of the thin films, as well as the mechanism of copper diffusion into the inner graphite layers at different heat-treatment temperatures. A scanning electron microscope was used to investigate the microrelief of the coating surface. The chemical composition was determined using the EDS method, which shows that, as the thermal treatment of the copper-carbon layer increases from 200 °C to 400 °C, the copper content is reduced from 8 to 4% in atomic mass units.
This is because the EDS method captures only the amount of copper on the graphite surface, while at higher heat-treatment temperatures part of the copper penetrates into the inner layers of the graphite because of diffusion processes. The XRD method shows that the crystalline copper structure is not affected by the thermal treatment.
Keywords: carbon, coatings, copper, magnetron sputtering
Procedia PDF Downloads 290
17578 Estimation of Slab Depth, Column Size and Rebar Location of Concrete Specimen Using Impact Echo Method
Authors: Y. T. Lee, J. H. Na, S. H. Kim, S. U. Hong
Abstract:
In this study, experimental research on the estimation of slab depth, column size, and rebar location in concrete specimens is conducted using the Impact Echo (IE) method, a non-destructive test method based on stress waves. The slab specimens for depth estimation had plan dimensions of 1800×300 mm and six different depths: 150 mm, 180 mm, 210 mm, 240 mm, 270 mm, and 300 mm. The concrete column specimens were manufactured in three sizes: 300×300×300 mm, 400×400×400 mm, and 500×500×500 mm. For the rebar location specimens, ∅22 mm rebar was placed in 300×370×200 mm specimens at 130 mm and 150 mm from the top surface to the top of the rebar. The error rate of the slab depth estimation had an overall mean of 3.1%, and that of the column size estimation an overall mean of 1.7%. The mean error rate of the rebar location was 1.72% for the top, 1.19% for the bottom, and 1.5% overall, showing good relative accuracy.
Keywords: impact echo method, estimation, slab depth, column size, rebar location, concrete
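The thickness estimate behind the impact-echo method follows from the P-wave resonance T = beta * Cp / (2 * f), where Cp is the P-wave speed, f the dominant thickness frequency, and beta a shape factor. A minimal sketch with illustrative numbers (the wave speed, frequency, and beta value are assumptions, not values reported above):

```python
def impact_echo_depth(cp, f_peak, beta=0.96):
    """Impact-echo thickness estimate T = beta * Cp / (2 * f_peak).

    cp: P-wave speed in concrete (m/s); f_peak: dominant thickness
    frequency (Hz); beta: shape factor (~0.96 for plate-like members,
    an assumed typical value).
    """
    return beta * cp / (2.0 * f_peak)

# A 4000 m/s P-wave with an 8 kHz thickness peak indicates a ~0.24 m slab.
depth_m = impact_echo_depth(4000.0, 8000.0)
```

In practice, f_peak comes from the amplitude spectrum of the recorded surface response after a small mechanical impact.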
Procedia PDF Downloads 351
17577 Using Priority Order of Basic Features for Circumscribed Masses Detection in Mammograms
Authors: Minh Dong Le, Viet Dung Nguyen, Do Huu Viet, Nguyen Huu Tu
Abstract:
In this paper, we present a new method for circumscribed mass detection in mammograms. Our method is evaluated on 23 mammographic images of circumscribed masses and 20 normal mammograms from the public Mini-MIAS database. The method is quite promising, with a sensitivity (SE) of 95% at only about 1 false positive per image (FPpI). To achieve these results we carry out the following procedure: first, the input images are preprocessed with the aim of enhancing the key information of circumscribed masses; next, we calculate and statistically evaluate basic features of abnormal regions on the training database; then, mammograms in the testing database are divided into equal blocks for which the corresponding features are calculated; finally, the priority order of the basic features is used to classify blocks as abnormal or normal regions.
Keywords: mammograms, circumscribed masses, evaluated statistically, priority order of basic features
Procedia PDF Downloads 334
17576 Optimizing Human Diet Problem Using Linear Programming Approach: A Case Study
Authors: P. Priyanka, S. Shruthi, N. Guruprasad
Abstract:
Health is a common theme in most cultures; in fact, all communities have their own concepts of health as part of their culture, yet health continues to be a neglected entity. Planning of the human diet should be done very carefully, by selecting the food items or groups of food items and the composition involved. Low price and good taste of foods are regarded as two major factors for optimal human nutrition. Linear programming techniques have been extensively used for human diet formulation for quite a good number of years. In this process, we mainly apply the simplex method, a very useful tool based on elementary row operations from linear algebra, together with the other rules of the simplex method needed to solve the problem. This study is an attempt to develop a programming model for optimal planning and the best use of nutrient ingredients.
Keywords: diet formulation, linear programming, nutrient ingredients, optimization, simplex method
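The structure of the diet LP (minimize cost subject to nutrient lower bounds) can be illustrated on a toy two-food problem. Since a two-variable LP attains its optimum at a vertex of the feasible region, checking all pairwise constraint intersections suffices here; the paper's simplex method is what scales this to many foods and nutrients. All food data below are made up for illustration:

```python
from itertools import combinations

def solve_2var_diet(costs, A, b):
    """Solve: minimize c1*x1 + c2*x2 subject to A @ x >= b, x >= 0,
    by enumerating the corner points of the two-variable feasible region."""
    rows = A + [[1.0, 0.0], [0.0, 1.0]]   # add x1 >= 0 and x2 >= 0
    rhs = b + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a1, a2), (b1, b2) = rows[i], rows[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                       # parallel constraints: no vertex
        x1 = (rhs[i] * b2 - a2 * rhs[j]) / det     # Cramer's rule
        x2 = (a1 * rhs[j] - rhs[i] * b1) / det
        feasible = x1 >= -1e-9 and x2 >= -1e-9 and all(
            r[0] * x1 + r[1] * x2 >= v - 1e-9 for r, v in zip(rows, rhs))
        if feasible:
            cost = costs[0] * x1 + costs[1] * x2
            if best is None or cost < best[0]:
                best = (cost, x1, x2)
    return best

# Two foods, two nutrients (protein g, kcal per unit); illustrative numbers:
# food1: 2 g protein, 40 kcal, cost 3;  food2: 1 g protein, 60 kcal, cost 2.
# Require at least 10 g protein and 400 kcal.
best = solve_2var_diet([3.0, 2.0], [[2.0, 1.0], [40.0, 60.0]], [10.0, 400.0])
```

The optimum buys 2.5 units of food1 and 5 of food2 at cost 17.5, the vertex where both nutrient constraints bind.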
Procedia PDF Downloads 558
17575 Continuous-Time Convertible Lease Pricing and Firm Value
Authors: Ons Triki, Fathi Abid
Abstract:
Along with the increase in the use of leasing contracts in corporate finance, multiple studies aim to model the credit risk of the lease in order to cover the losses of the lessor of the asset if the lessee goes bankrupt. In the current research paper, a convertible lease contract is elaborated in a continuous-time stochastic universe, aiming to ensure the financial stability of the firm and to quickly recover the losses of the counterparties to the lease in case of default. This work examines the term structure of lease rates, taking into account credit default risk and the capital structure of the firm. The interaction between the lessee's capital structure and the equilibrium lease rate is assessed by applying the competitive lease market argument developed by Grenadier (1996) and the endogenous structural default model set forward by Leland and Toft (1996). The cumulative probability of default is calculated by reference to Leland and Toft (1996) and Yildirim and Huan (2006). Additionally, the link between lessee credit risk and the lease rate is addressed in order to explore the impact of convertible lease financing on the term structure of the lease rate, the optimal leverage ratio, the cumulative default probability, and the optimal firm value, by applying an endogenous conversion threshold. The numerical analysis suggests that the duration structure of lease rates increases with the degree of the market price of risk, and that the maximal value of the firm decreases with the optimal leverage ratio. The results indicate that the cumulative probability of default increases with the maturity of the lease contract if the volatility of the asset service flows is significant.
Introducing the convertible lease contract increases the optimal value of the firm as a function of asset volatility for a high initial service flow level and a conversion ratio close to 1.
Keywords: convertible lease contract, lease rate, credit-risk, capital structure, default probability
Procedia PDF Downloads 98
17574 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes
Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono
Abstract:
Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that initialization of the shape model is not sufficiently close to the target, especially when dealing with abnormal shapes in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of the model-based segmentation. Firstly, a robust and efficient detector based on Hough forest is proposed to localize cardiac feature points, and such points are used to predict the initial fitting of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images that are mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experiment results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. Therefore, the proposed method is able to achieve more accurate and efficient segmentation results and is applicable to unusual shapes of the heart with cardiac diseases, such as left atrial enlargement.
Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle
Procedia PDF Downloads 339
17573 A Study of Flow near the Leading Edge of a Flat Plate by New Idea in Analytical Methods
Authors: M. R. Akbari, S. Akbari, L. Abdollahpour
Abstract:
The present paper is concerned with calculating the two-dimensional velocity profile of a viscous, incompressible flow along the leading edge of a flat plate using the continuity and momentum equations with a simple and innovative approach. A comparison between a numerical method and AGM reveals that AGM is very accurate, easy to apply, and suitable for a wide variety of nonlinear problems. Notably, most differential equations can be solved with this approach, a capability the other approaches lack. Moreover, this method of solving differential equations offers some valuable benefits: many differential equations can be solved directly, without any nondimensionalization procedure, and it is not necessary to convert variables into new ones. As demonstrated in this paper, the process of solving nonlinear differential equations is therefore very simple and convenient in contrast to the other approaches.
Keywords: leading edge, new idea, flat plate, incompressible fluid
Procedia PDF Downloads 287
17572 SiC Particulate-Reinforced SiC Composites Fabricated by PIP Method Using Highly Concentrated SiC Slurry
Authors: Jian Gu, Sea-Hoon Lee, Jun-Seop Kim
Abstract:
SiC particulate-reinforced SiC ceramic composites (SiCp/SiC) were successfully fabricated using the polymer impregnation and pyrolysis (PIP) method. The effects of green density, infiltration method, pyrolysis temperature, and heating rate on the densification behavior of the composites were investigated. SiCp/SiC composites with a relative density of up to 88.06% were fabricated after 4 PIP cycles using SiC pellets with high green density. The pellets were prepared by drying 62-70 vol.% aqueous SiC slurries, and the maximum relative density of the pellets was 75.5%. The hardness of the as-fabricated SiCp/SiC was 21.05 GPa after 4 PIP cycles, which increased to 23.99 GPa after a heat treatment at 2000℃. Excellent mechanical properties, thermal stability, and a short processing time render the SiCp/SiC composite a promising candidate for high-temperature applications.
Keywords: high green density, mechanical property, polymer impregnation and pyrolysis, structural application
Procedia PDF Downloads 138
17571 Date Palm Compreg: A High Quality Bio-Composite of Date Palm Wood
Authors: Mojtaba Soltani, Edi Suhaimi Bakar, Hamid Reza Naji
Abstract:
Date palm wood (DPW) specimens were impregnated with phenol formaldehyde (PF) resin at a 15% level using the vacuum/pressure method. The variables were three levels of moisture content (MC) before the pressing stage (50%, 60%, and 70%) and three hot-pressing times (15, 20, and 30 minutes). The boards were prepared at a 20% compression rate. The physical properties of the specimens, such as spring-back, thickness swelling, and water absorption, and the mechanical properties, including MOR and MOE, were studied and compared between variables. The results indicated that the MC level before compression set was the main factor determining the properties of the date palm Compreg. The results also showed that this compregnation method can produce a high-quality bio-composite from date palm wood.
Keywords: date palm, phenol formaldehyde resin, high-quality bio-composite, physical and mechanical properties
Procedia PDF Downloads 350
17570 Reactive Dyed Superhydrophobic Cotton Fabric Production by Sol-Gel Method
Authors: Kuddis Büyükakıllı
Abstract:
Pretreated and bleached mercerized cotton fabric was dyed with reactive Everzol Brilliant Yellow 4GR (C.I. Yellow 160) dyestuff. Superhydrophobicity was imparted to both white and reactive-dyed fabrics by a nanotechnological two-step sol-gel method using tetraethoxysilane and fluorocarbon water-repellent agents. The effect of the coating on the color yield, fastness, and functional properties of the fabric was investigated. Water drop contact angles were higher on colorless coated fabrics than on colored coated fabrics, there was no significant color change in the colored superhydrophobic fabric, and the color fastness values were high. Although no significant color loss occurred after multiple washing and dry-cleaning processes, the water drop contact angles were greatly reduced.
Keywords: fluorocarbon water repellent agent, colored cotton fabric, sol-gel, superhydrophobic
Procedia PDF Downloads 118
17569 Numerical Solutions of Fredholm Integral Equations by B-Spline Wavelet Method
Authors: Ritu Rani
Abstract:
In this paper, we apply compactly supported linear semi-orthogonal B-spline wavelets, specially constructed for the bounded interval, to approximate the unknown function in the integral equations. Semi-orthogonal wavelets built from B-splines and specially constructed for the bounded interval can be represented in closed form, which gives compact support. Semi-orthogonal wavelets form a basis of the space L²(R). Using this basis, an arbitrary function in L²(R) can be expressed as a wavelet series. On the bounded interval, however, the wavelet expansion cannot be completely represented with this basis, because the supports of some basis functions are truncated at the left or right endpoints of the interval. Hence, a special basis must be introduced into the wavelet expansion on the bounded interval. These functions are referred to as the boundary scaling functions and boundary wavelet functions. The B-spline wavelet method has been applied to solve linear and nonlinear integral equations and their systems. The method reduces the integral equations to systems of algebraic equations, which can then be solved by any standard numerical method. Here, we have applied Newton's method with a suitable initial guess for solving these systems.
Keywords: semi-orthogonal, wavelet series, integral equations, wavelet expansion
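The central reduction described above, turning an integral equation into an algebraic system, can be illustrated with a deliberately simpler discretization: a Nyström (quadrature) scheme with trapezoidal weights instead of a B-spline wavelet basis. This sketch covers only the linear case, where the resulting system is solved directly rather than by Newton's method; kernel, right-hand side, and parameters below are illustrative choices, not taken from the paper.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (works on copies)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fredholm2(kernel, f, lam, a, b, n):
    """Nystrom solution of u(x) = f(x) + lam * int_a^b kernel(x,t) u(t) dt.

    Trapezoidal quadrature turns the integral equation into the linear
    system (I - lam*K*W) u = f evaluated at the quadrature nodes.
    """
    h = (b - a) / (n - 1)
    xs = [a + i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2          # trapezoidal end-point weights
    A = [[(1.0 if i == j else 0.0) - lam * w[j] * kernel(xs[i], xs[j])
          for j in range(n)] for i in range(n)]
    return xs, solve_linear(A, [f(x) for x in xs])
```

With the separable kernel k(x,t) = x*t, lam = 1, and f(x) = 2x/3, the exact solution is u(x) = x, which the discrete system reproduces to quadrature accuracy.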
Procedia PDF Downloads 174
17568 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques
Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa
Abstract:
This study addressed the selection of in silico molecular descriptors and models for describing newly synthesized steroid derivatives and their characterization using chemometric techniques. Multiple linear regression (MLR) models were established and yielded the best molecular descriptors for quantitative structure-retention relationship (QSRR) modeling of the retention of the investigated molecules. The MLR models were free of multicollinearity among the selected molecular descriptors according to the variance inflation factor (VIF) values. The molecular descriptors used were ranked with the generalized pair correlation method (GPCM), which can detect significant differences between independent variables even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were retained. The models were then ranked using the sum of ranking differences (SRD) method, which identifies the most consistent QSRR model and reveals similarity or dissimilarity between models. In this study, SRD was performed using the average values of the experimentally observed data as the golden standard. The chemometric analysis was conducted in order to characterize the newly synthesized steroid derivatives for further investigation regarding their potential biological activity and further synthesis. This article is based upon work from COST Action CM1105, supported by COST (European Cooperation in Science and Technology).
Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences
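The SRD ranking used above has a compact core: rank the objects by each model, rank them by the golden standard (here, the average of the experimental values), and sum the absolute rank differences, with SRD = 0 meaning perfect agreement. The sketch below shows that computation only; it omits the randomization and cross-validation tests that accompany SRD in practice, and the data are invented for illustration.

```python
def ranks(values):
    """1-based rank of each value (ascending; ties broken by position)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        result[idx] = rank
    return result

def sum_of_ranking_differences(model_values, golden_values):
    """SRD: distance between a model's ranking and the golden standard's."""
    return sum(abs(m - g)
               for m, g in zip(ranks(model_values), ranks(golden_values)))
```

A model that orders four compounds exactly like the golden standard scores 0; a fully reversed ordering scores the maximum (here 8), so smaller SRD values identify the more consistent QSRR model.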
Procedia PDF Downloads 347
17567 Design and Implementation of Low-code Model-building Methods
Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu
Abstract:
This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies model deployment. The core strength of this method lies in its ease of use and efficiency. Users do not need a deep programming background and can design and implement complex models with simple drag-and-drop operations. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method uses computing resources more efficiently and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability and can adapt to the needs of different application scenarios.
Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment
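Underneath a drag-and-drop interface like the one described, the wired components typically form a dependency graph that is evaluated in order. The toy sketch below illustrates that idea only; it is not the paper's system, and the component names and API are hypothetical.

```python
class Pipeline:
    """Toy sketch of a component graph: each node is a function fed by the
    outputs of the nodes wired into it (standing in for drag-and-drop wiring).
    """

    def __init__(self):
        self.funcs = {}   # component name -> callable
        self.wires = {}   # component name -> names of its input components

    def add(self, name, func, inputs=()):
        """Register a component and the names of the components feeding it."""
        self.funcs[name] = func
        self.wires[name] = list(inputs)
        return self

    def run(self, name, cache=None):
        """Evaluate a component, computing and caching its inputs recursively."""
        if cache is None:
            cache = {}
        if name not in cache:
            args = [self.run(dep, cache) for dep in self.wires[name]]
            cache[name] = self.funcs[name](*args)
        return cache[name]
```

For example, wiring a `load` component into a `scale` component and then into a `total` component lets `run("total")` pull data through the whole chain; a generated service API would simply expose such a `run` call over HTTP.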
Procedia PDF Downloads 29
17566 Identification of Nonlinear Systems Using Radial Basis Function Neural Network
Authors: C. Pislaru, A. Shebani
Abstract:
This paper uses the radial basis function neural network (RBFNN) for the identification of nonlinear systems. Five nonlinear systems are used to examine the performance of the RBFNN in modeling nonlinear systems: a dual-tank system, a single-tank system, a DC motor system, and two academic models. A feed-forward structure is considered in this work for modeling the nonlinear dynamic models. The K-means clustering algorithm is used to select the centers of the radial basis function network because it is reliable, offers fast convergence, and can handle large data sets. The least mean square method is used to adjust the weights of the output layer, and the Euclidean distance method is used to set the widths of the Gaussian functions.
Keywords: system identification, nonlinear systems, neural networks, radial basis function, K-means clustering algorithm
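The training recipe above, K-means for the centers and least-mean-square (LMS) updates for the output weights, can be sketched in one dimension. This is a minimal illustration under assumed hyperparameters (5 centers, a fixed Gaussian width, a toy target function), not the paper's models or data; the width would normally be derived from inter-center Euclidean distances rather than fixed by hand.

```python
import math

def kmeans_1d(xs, k, iters=20):
    """Place k RBF centres with a simple 1-D K-means (deterministic init)."""
    pts = sorted(xs)
    centres = [pts[round(i * (len(pts) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            clusters[min(range(k), key=lambda c: abs(x - centres[c]))].append(x)
        centres = [sum(cl) / len(cl) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres

def rbf_predict(x, centres, sigma, weights):
    """Network output: weighted sum of Gaussian basis functions."""
    return sum(w * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for w, c in zip(weights, centres))

def train_lms(xs, ys, centres, sigma, lr=0.2, epochs=3000):
    """Least-mean-square updates of the output-layer weights only."""
    weights = [0.0] * len(centres)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centres]
            err = y - sum(w * p for w, p in zip(weights, phi))
            for i, p in enumerate(phi):
                weights[i] += lr * err * p
    return weights
```

Fitting a Gaussian bump on [-1, 1] with this setup drives the training error close to zero, since the target is representable by the basis; real system identification would instead feed lagged inputs and outputs of the plant.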
Procedia PDF Downloads 470
17565 Fault Diagnosis in Induction Motors Using Discrete Wavelet Transform
Authors: K. Yahia, A. Titaouine, A. Ghoggal, S. E. Zouzou, F. Benchabane
Abstract:
This paper deals with the diagnosis of stator faults in induction motors. Using the discrete wavelet transform (DWT) to analyze the current Park's vector modulus (CPVM), inter-turn short-circuit faults can be diagnosed. The method is based on the decomposition of the CPVM signal, from which the wavelet approximation and detail coefficients are extracted. Evaluating the energy of a detail band of known bandwidth permits the definition of a fault severity factor (FSF). The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Both simulation and experimental results show the effectiveness of the method.
Keywords: induction motors (IMs), inter-turn short-circuit diagnosis, discrete wavelet transform (DWT), current Park's vector modulus (CPVM)
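The energy-based severity factor described above can be sketched with the simplest wavelet, the Haar: decompose the signal into detail bands, take the energy of the band covering the fault frequency, and normalize by the total signal energy. This is an illustrative simplification (the paper does not specify the Haar wavelet, and the signal and band choice below are invented); the CPVM of a healthy, balanced machine is roughly constant, while an inter-turn fault adds an oscillation whose detail-band energy raises the FSF.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def detail_energies(signal, levels):
    """Multilevel decomposition; energy of each detail band, finest band first."""
    energies, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(d * d for d in detail))
    return energies

def fault_severity_factor(signal, levels, band):
    """FSF: energy of the chosen detail band relative to total signal energy."""
    total = sum(s * s for s in signal)
    return detail_energies(signal, levels)[band] / total
```

A constant "healthy" CPVM yields an FSF of exactly zero, while adding a low-frequency ripple produces a strictly positive FSF in the band containing it.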
Procedia PDF Downloads 553