Search results for: method of symmetrical components
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21425

21185 Inelastic and Elastic Taping in Plantar Pressure of Runners Pronators: Clinical Trial

Authors: Liana Gomide, Juliana Rodrigues

Abstract:

The morphology of the foot defines its mode of operation, and an adequate biomechanical behavior is indispensable for a symmetrical distribution of plantar pressures, so that none of its components is overloaded in isolation. High plantar pressures at specific points of the foot may be a causal factor in several orthopedic disorders that affect the feet, such as pain and stress fracture. With digital baropodometry equipment, the intensity of pressures along the entire foot can be observed and some movements quantified, such as the subtalar pronation present in the midfoot region, which is involved in microtrauma. In clinical practice, excessive movement has been limited with the use of different taping techniques applied on the plantar arch. Thus, the objective of the present study was to analyze and compare the influence of inelastic and elastic taping on the distribution of plantar pressure of pronator runners. This is a randomized, blind, crossover clinical trial. Twenty (20) male subjects, with a mean age of 33 ± 7 years, mean body mass of 71 ± 7 kg, and mean height of 174 ± 6 cm, were included in the study. Data collection was carried out by a single researcher using baropodometry equipment (Tekscan, model F-Scan Mobile). The tests were performed at three different times. In the first, an initial baropodometric evaluation was performed without any taping, with the subjects running at a speed of 9.0 km/h. In the second and third moments, the inelastic or elastic taping was applied consecutively, in the order defined by the randomization. As results, it was observed that both the inelastic and the elastic taping provided significant reductions in contact pressure and peak pressure values when compared to the condition without taping. However, the elastic taping was more effective in decreasing contact pressure (no taping = 714 ± 201, elastic taping = 690 ± 210, and inelastic taping = 716 ± 180) and peak pressure in the midfoot region (no taping = 1490 ± 42, elastic taping = 1273 ± 323, and inelastic taping = 1487 ± 437). It is possible to conclude that the elastic taping reduced pressure in the midfoot region, thereby attenuating subtalar pronation during the run.

Keywords: elastic taping, inelastic taping, running, subtalar pronation

Procedia PDF Downloads 126
21184 An Observation Approach of Reading Order for Single Column and Two Column Layout Template

Authors: In-Tsang Lin, Chiching Wei

Abstract:

Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature shows that state-of-the-art algorithms cannot reliably produce an accurate reading order for portable document format (PDF) files with rich formats and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to detecting reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has built a layout recognition (LR) engine (revision 20601) aimed at improving reading-order accuracy. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layouts because of table, formula, multiple small separated bounding-box, and footer issues. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is proposed to open a new direction in reading-order research. Two important parameters are introduced: the number of bounding boxes to the right of the present bounding box (NRight) and the number of bounding boxes under the present bounding box (NUnder). The normalized x-value (x divided by the page width), the normalized y-value (y divided by the page height), and the x- and y-positions of each bounding box are also taken into consideration. Initial experimental results for the single-column layout demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using the proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout, preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple small separated bounding-box issue can be solved by the MESH method. However, three issues remain unsolved: the table issue, the formula issue, and random multiple small separated bounding boxes. The detection of the table position and the recognition of the table structure are out of the scope of this paper and require separate research. Future work will address how to detect the table position on a page and extract the table content.
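
For illustration, the two MESH parameters named above (NRight and NUnder) can be sketched as simple counts over paragraph bounding boxes. This is only a minimal sketch assuming axis-aligned boxes with a top-left page origin; the sorting heuristic and the sample page are illustrative and not the authors' exact rule.

```python
# Minimal sketch of the two MESH parameters: for each paragraph bounding box
# (x0, y0, x1, y1), with the origin at the top-left of the page, count how many
# boxes lie to its right (NRight) and below it (NUnder).

def mesh_parameters(boxes, page_width, page_height):
    params = []
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        n_right = sum(1 for j, b in enumerate(boxes)
                      if j != i and b[0] >= x1)   # boxes starting right of this one
        n_under = sum(1 for j, b in enumerate(boxes)
                      if j != i and b[1] >= y1)   # boxes starting below this one
        params.append({
            "index": i,
            "NRight": n_right,
            "NUnder": n_under,
            "x_norm": x0 / page_width,            # normalized x-value
            "y_norm": y0 / page_height,           # normalized y-value
        })
    return params

def reading_order(boxes, page_width, page_height):
    params = mesh_parameters(boxes, page_width, page_height)
    # Illustrative heuristic: many boxes to the right -> left column first;
    # within a column, many boxes underneath -> higher on the page.
    return [p["index"] for p in sorted(
        params, key=lambda p: (-p["NRight"], -p["NUnder"], p["y_norm"], p["x_norm"]))]

if __name__ == "__main__":
    # Two columns of two paragraphs each on a 1000 x 1000 page.
    boxes = [(50, 50, 450, 300), (50, 350, 450, 900),
             (550, 50, 950, 300), (550, 350, 950, 900)]
    print(reading_order(boxes, 1000, 1000))       # [0, 1, 2, 3]: left column, then right
```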

Keywords: document processing, reading order, observation method, layout recognition

Procedia PDF Downloads 150
21183 Seismic Performance of Nuclear Power Plant Structures Subjected to Korean Earthquakes

Authors: D. D. Nguyen, H. S. Park, S. W. Yang, B. Thusa, Y. M. Kim, T. H. Lee

Abstract:

Currently, the design response spectrum (i.e., the Nuclear Regulatory Commission NRC 1.60 spectrum) with a peak ground acceleration (PGA) of 0.3g (for the Safe Shutdown Earthquake level) is specified for designing new nuclear power plant (NPP) structures in Korea. However, recent earthquakes in the region, such as the 2016 Gyeongju and 2017 Pohang earthquakes, showed that the possible PGA of ground motions can be larger than 0.3g. Therefore, there is a need to analyze the seismic performance of the existing NPP structures under these earthquakes. An NPP model, APR-1400, which is designed and built in Korea, was selected as a case study. The NPP structure is numerically modeled in terms of lumped-mass stick elements using the OpenSees framework. The floor acceleration and displacement of components are monitored to quantify the responses of the components. The numerical results show that the floor spectral accelerations are significantly amplified in the components subjected to Korean earthquakes. A comparison between the floor response spectra of the Korean earthquakes and the NRC design motion highlights that the seismic design level of NPP components under an earthquake should be thoroughly reconsidered. Additionally, a seismic safety assessment of the equipment and relays attached to the main structures is also required.
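
As a rough illustration of the floor response spectra compared above, the sketch below computes the spectrum of a floor acceleration time history as the peak absolute acceleration of a damped single-degree-of-freedom oscillator swept over frequency. This is a generic sketch, not the authors' code; the 5% damping ratio and the toy floor motion are assumptions.

```python
# Sketch: floor response spectrum from a floor acceleration time history.
# The spectrum value at each frequency is the peak absolute acceleration of a
# damped single-degree-of-freedom oscillator driven by that floor motion.
import numpy as np
from scipy import signal

def floor_response_spectrum(t, a_floor, freqs, damping=0.05):
    spectrum = []
    for f in freqs:
        w = 2.0 * np.pi * f
        # State-space SDOF: x'' + 2*z*w*x' + w^2*x = -a_floor(t);
        # output = absolute acceleration = -(2*z*w*x' + w^2*x).
        A = [[0.0, 1.0], [-w**2, -2.0 * damping * w]]
        B = [[0.0], [-1.0]]
        C = [[-w**2, -2.0 * damping * w]]
        D = [[0.0]]
        _, a_abs, _ = signal.lsim((A, B, C, D), U=a_floor, T=t)
        spectrum.append(np.max(np.abs(a_abs)))
    return np.array(spectrum)

if __name__ == "__main__":
    t = np.linspace(0.0, 20.0, 4000)
    a_floor = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.1 * t)  # toy motion
    freqs = np.logspace(-1, 1.5, 50)                                       # 0.1-31.6 Hz
    Sa = floor_response_spectrum(t, a_floor, freqs)
    print(Sa.max())
```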

Keywords: nuclear power plant, floor response spectra, Korean earthquake, NRC spectrum

Procedia PDF Downloads 131
21182 Presenting a Model in the Analysis of Supply Chain Management Components by Using Statistical Distribution Functions

Authors: Ramin Rostamkhani, Thurasamy Ramayah

Abstract:

One of the most important topics in today’s industrial organizations is the challenging issue of supply chain management. In this field, scientists and researchers have published numerous practical articles and models, especially in the last decade. In this research, to the best of our knowledge, the modeling of supply chain management component data using well-known statistical distribution functions is considered for the first time. Describing the behavior of supply chain data through the characteristics of statistical distribution functions is an innovative line of research that had not been published prior to this work. In an analytical process, describing different aspects of these functions, including the probability density, cumulative distribution, reliability, and failure functions, leads to the most suitable statistical distribution function for each component of supply chain management, which can then be applied to predict the future behavior of the relevant component. Providing a model that fits the best statistical distribution function to the supply chain management components will be a significant breakthrough in understanding the behavior of supply chain management elements in today's industrial organizations. The final step is to demonstrate the results of the proposed model by introducing the process capability indices before and after its implementation, and to verify the approach through the relevant assessment. The introduced approach can save the time and cost required to achieve organizational goals and can increase added value in the organization.
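
A minimal sketch of the analytical process described above is given below: candidate distributions are fitted to a hypothetical supply-chain metric (supplier lead time is an assumed example), the best fit is chosen, and the four named functions are evaluated. The candidate list, the data, and the goodness-of-fit criterion are illustrative assumptions.

```python
# Sketch: fit candidate statistical distributions to a supply-chain metric
# (e.g., supplier lead times) and evaluate the four functions named above:
# probability density, cumulative distribution, reliability, and failure (hazard).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lead_times = rng.weibull(1.8, 500) * 10.0          # hypothetical lead-time data (days)

candidates = {"weibull": stats.weibull_min, "lognormal": stats.lognorm, "gamma": stats.gamma}
best_name, best_dist, best_params, best_ks = None, None, None, np.inf
for name, dist in candidates.items():
    params = dist.fit(lead_times)
    ks_stat, _ = stats.kstest(lead_times, dist.cdf, args=params)
    if ks_stat < best_ks:                           # smaller KS statistic = better fit
        best_name, best_dist, best_params, best_ks = name, dist, params, ks_stat

x = 12.0                                            # evaluate the fitted functions at 12 days
pdf = best_dist.pdf(x, *best_params)                # probability density function
cdf = best_dist.cdf(x, *best_params)                # cumulative distribution function
reliability = best_dist.sf(x, *best_params)         # reliability (survival) function
hazard = pdf / reliability                          # failure (hazard) rate
print(best_name, round(pdf, 4), round(cdf, 4), round(reliability, 4), round(hazard, 4))
```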

Keywords: analyzing, process capability indices, statistical distribution functions, supply chain management components

Procedia PDF Downloads 63
21181 Identifying Principle Components Affecting Competitiveness of Thai Automotive Parts Industry

Authors: Thanatip Lerttanaporn, Tuanjai Somboonwiwat, Charoenchai Khompatraporn

Abstract:

The automotive parts industry is one of the vital sectors of the Thai economy and is now facing greater competition from the ASEAN Economic Community (AEC). This article identifies important factors that impact the competitiveness of the Thai automotive parts industry. There are eight groups of factors with a total of 58 factors. Due to the large number of factors, Exploratory Factor Analysis and Principal Component Analysis have been applied to classify the factors into groups, or principal components. The results show that there are 15 groups, four of which are critical, covering 80% of the importance value. These four critical groups are then used to formulate strategies to improve the competitiveness of the Thai automotive parts industry.
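
A brief sketch of how principal components covering a target share of importance can be extracted is shown below. The 58-factor survey data are not available, so random placeholder data stand in, and scikit-learn's PCA is used as an assumed tool.

```python
# Sketch: extract principal components from a factor survey matrix and keep the
# leading group that covers 80% of the explained importance (variance).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 58))                 # placeholder: 120 responses x 58 factors

X_std = StandardScaler().fit_transform(X)      # standardize before PCA / factor analysis
pca = PCA().fit(X_std)

cumulative = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cumulative, 0.80) + 1)
print(f"{n_keep} components cover {cumulative[n_keep - 1]:.1%} of the variance")

# Loadings of the retained components, used to interpret and group the factors.
loadings = pca.components_[:n_keep].T * np.sqrt(pca.explained_variance_[:n_keep])
print(loadings.shape)                          # (58 factors, n_keep components)
```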

Keywords: factor analysis, Thai automotive parts, principle components, exploratory factor, ASEAN economic community

Procedia PDF Downloads 223
21180 Environmental Decision Making Model for Assessing On-Site Performances of Building Subcontractors

Authors: Buket Metin

Abstract:

Buildings cause a variety of loads on the environment due to activities performed at each stage of the building life cycle. Construction is the first stage that affects both the natural and built environments at different steps of the process, which can be defined as transportation of materials within the construction site, formation and preparation of materials on-site and the application of materials to realize the building subsystems. All of these steps require the use of technology, which varies based on the facilities that contractors and subcontractors have. Hence, environmental consequences of the construction process should be tackled by focusing on construction technology options used in every step of the process. This paper presents an environmental decision-making model for assessing on-site performances of subcontractors based on the construction technology options which they can supply. First, construction technologies, which constitute information, tools and methods, are classified. Then, environmental performance criteria are set forth related to resource consumption, ecosystem quality, and human health issues. Finally, the model is developed based on the relationships between the construction technology components and the environmental performance criteria. The Fuzzy Analytical Hierarchy Process (FAHP) method is used for weighting the environmental performance criteria according to environmental priorities of decision-maker(s), while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking on-site environmental performances of subcontractors using quantitative data related to the construction technology components. Thus, the model aims to provide an insight to decision-maker(s) about the environmental consequences of the construction process and to provide an opportunity to improve the overall environmental performance of construction sites.
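
As an illustration of the TOPSIS ranking step described above, the sketch below ranks hypothetical subcontractors against weighted environmental criteria. The criterion weights would come from FAHP in the proposed model; here they are placeholder values, as are the decision-matrix entries.

```python
# Sketch of the TOPSIS ranking step: rows are subcontractors, columns are
# environmental performance criteria; criterion weights would come from FAHP.
import numpy as np

def topsis(decision_matrix, weights, benefit):
    # benefit[j] is True when a larger value of criterion j is better.
    X = np.asarray(decision_matrix, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = norm * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)      # distance to the ideal solution
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)             # closeness: higher = better rank

if __name__ == "__main__":
    # Placeholder data: 3 subcontractors x 3 criteria
    # (resource consumption, ecosystem quality, human health impact).
    matrix = [[120.0, 0.8, 30.0],
              [ 90.0, 0.6, 45.0],
              [150.0, 0.9, 20.0]]
    weights = np.array([0.5, 0.3, 0.2])             # FAHP weights (assumed)
    benefit = np.array([False, True, False])        # consumption/impact: lower is better
    print(topsis(matrix, weights, benefit))
```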

Keywords: construction process, construction technology, decision making, environmental performance, subcontractor

Procedia PDF Downloads 216
21179 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures

Authors: Samir Chawdhury, Guido Morgenthal

Abstract:

The paper presents a novel extension of the Vortex Particle Method (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by impulsively started flow past an upstream bluff body at a certain Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion; therefore, particular attention is given in this study to the reproduction of the wake flow simulation. The basic methodology for the flow reproduction requires downstream velocity sampling from the template flow simulation. At particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, where each cell contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires the transformation of the velocity components into vortex circulation; finally, the reproduction of the template flow field is simulated by seeding these vortex circulations, or particles, into a free-stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been conducted, specifically, on different sampling rates and velocity sampling positions to find their effects on the flow reproduction quality. The quality assessments are mainly done, using a downstream flow monitoring profile, by comparing the characteristic wind flow profiles with several statistical turbulence measures. Additionally, the comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section obtained by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed for the validation of the method. The study also describes how flow reproductions can be achieved with less computational effort.
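
The transformation from sampled velocities to vortex circulation mentioned above can be sketched as a discrete line integral around each square sampling cell. The trapezoidal rule along each edge and the corner ordering are assumptions of this illustration, not the authors' exact discretisation.

```python
# Sketch: convert the four corner velocities of a square sampling cell into a
# vortex circulation via a discrete counter-clockwise line integral (trapezoidal
# rule along each edge). Corner order: bottom-left, bottom-right, top-right, top-left.

def cell_circulation(corner_velocities, h):
    (u1, v1), (u2, v2), (u3, v3), (u4, v4) = corner_velocities
    gamma = 0.0
    gamma += 0.5 * (u1 + u2) * h     # bottom edge, +x direction
    gamma += 0.5 * (v2 + v3) * h     # right edge,  +y direction
    gamma -= 0.5 * (u3 + u4) * h     # top edge,    -x direction
    gamma -= 0.5 * (v4 + v1) * h     # left edge,   -y direction
    return gamma                     # particle strength to be seeded into the free stream

if __name__ == "__main__":
    # Solid-body-like rotation (u = -omega*y, v = omega*x) sampled at the corners of a
    # cell of side 0.1 m: circulation should equal vorticity * area = 2*omega*h**2.
    omega, h = 3.0, 0.1
    corners = [(-omega * 0.0, omega * 0.0),      # corner at (x, y) = (0, 0)
               (-omega * 0.0, omega * h),        # (h, 0)
               (-omega * h,   omega * h),        # (h, h)
               (-omega * h,   omega * 0.0)]      # (0, h)
    print(cell_circulation(corners, h), 2 * omega * h**2)
```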

Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis

Procedia PDF Downloads 281
21178 The Cost of Innovation in Software Development Projects

Authors: Mihai Liviu Despa

Abstract:

The paper tackles the topic of determining the cost of innovation in software development projects. Innovation can be achieved either in a planned or an unplanned manner. The paper addresses the scenarios where innovation is planned for. As a starting point, an innovative software development project is analyzed. The project is depicted step by step as it was implemented, from inception to delivery. Costs that are specific to innovation in software development are isolated based on the author’s personal experience in managing the above-mentioned project. The innovation cost components identified by the author are then validated through open discussions with software development professionals and project managers in LinkedIn groups. In order to receive relevant feedback, only groups that focus on software development and innovation management are targeted. Additional innovation cost components suggested by software development professionals and project managers are also considered. Based on the identified cost components, an indicator is built. The indicator is meant to formalize the process of determining the cost of innovation in a software development project; it aggregates all the innovation cost components identified in the research process. The process of calculating each cost component is also described. Conclusions are formulated, and new related research topics are submitted for debate.

Keywords: innovation cost, IT project management, software development, innovation management

Procedia PDF Downloads 423
21177 Tool Wear Analysis in 3D Manufactured Ti6Al4V

Authors: David Downey

Abstract:

With the introduction of additive manufacturing (3D printing) to produce titanium (Ti6Al4V) components in the medical/aerospace and automotive industries, intricate geometries can be produced with virtually complete design freedom. However, the consideration of microstructural anisotropy resulting from the additive manufacturing process becomes necessary due to this design flexibility and the need to print a geometric shape that can consist of numerous angles, radii, and swept surfaces. A femoral knee implant serves as an example of a 3D-printed near-net-shaped product. The mechanical properties of the printed components, and consequently, their machinability, are affected by microstructural anisotropy. Currently, finish-machining operations performed on titanium printed parts using selective laser melting (SLM) utilize the same cutting tools employed for processing wrought titanium components. Cutting forces for components manufactured through SLM can be up to 70% higher than those for their wrought counterparts made of Ti6Al4V. Moreover, temperatures at the cutting interface of 3D printed material can surpass those of wrought titanium, leading to significant tool wear. Although the criteria for tool wear may be similar for both 3D printed and wrought materials, the rate of wear during the machining process may differ. The impact of these issues on the choice of cutting tool material and tool lifetimes will be discussed.

Keywords: additive manufacturing, build orientation, microstructural anisotropy, printed titanium Ti6Al4V, tool wear

Procedia PDF Downloads 66
21176 Lead Free BNT-BKT-BMgT-CoFe₂O₄ Magnetoelectric Nanoparticulate Composite Thin Films Prepared by Chemical Solution Deposition Method

Authors: A. K. Paul, Vinod Kumar

Abstract:

Lead-free magnetoelectric (ME) nanoparticulate (1−x) BNT-BKT-BMgT−x CFO (x = 0, 0.1, 0.2, 0.3) composite films were synthesized using the chemical solution deposition method. X-ray diffraction and transmission electron microscopy (TEM) reveal that the CFO nanoparticles are well distributed in the BNT-BKT-BMgT matrix. The nanocomposite films exhibit both good magnetic and ferroelectric properties at room temperature (R-T). It is concluded that the modulation of the compositions of the piezomagnetic/piezoelectric components plays a fundamental role in the magnetoelectric coupling in these nanoparticulate composite films. These ME composites provide a great opportunity as potential lead-free systems for ME devices.

Keywords: lead free multiferroic, nanocomposite, ferroelectric, ferromagnetic and magneto-electric properties

Procedia PDF Downloads 102
21175 Producing Graphical User Interface from Activity Diagrams

Authors: Ebitisam K. Elberkawi, Mohamed M. Elammari

Abstract:

A Graphical User Interface (GUI) is essential to programming, as is any other characteristic or feature, because GUI components provide the fundamental interaction between the user and the program. Thus, more attention must be given to the GUI during the building and development of systems, and greater attention must be paid to the user, who is the cornerstone of any interaction with the GUI. This paper introduces an approach for designing the GUI from one of the models of business workflows that describes the workflow behavior of a system, specifically activity diagrams (AD).

Keywords: activity diagram, graphical user interface, GUI components, program

Procedia PDF Downloads 432
21174 Estimation of Functional Response Model by Supervised Functional Principal Component Analysis

Authors: Hyon I. Paek, Sang Rim Kim, Hyon A. Ryu

Abstract:

In functional linear regression, one typical problem is dimension reduction. Compared with multivariate linear regression, functional linear regression is regarded as an infinite-dimensional case, and the main task is to reduce the dimensions of the functional response and the functional predictors. One common approach is to apply functional principal component analysis (FPCA) to the functional predictors and then use a few leading functional principal components (FPCs) to predict the functional model. The leading FPCs estimated by typical FPCA explain a major portion of the variation of the functional predictor, but these leading FPCs may not be strongly correlated with the functional response, so they may not be significant in predicting the response. In this paper, we propose a supervised functional principal component analysis method for a functional response model, with FPCs obtained by considering the correlation with the functional response. Our method is expected to have better prediction accuracy than the typical FPCA method.
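
The core idea, ranking components by their correlation with the response rather than by explained variance, can be sketched as below on discretized curves, with a scalar response standing in for the functional response. The data, the number of retained components, and the use of scikit-learn are assumptions of this illustration, not the authors' implementation.

```python
# Sketch of the idea behind supervised FPCA: compute principal component scores
# of the (discretized) functional predictor, rank them by their correlation with
# the response rather than by explained variance, and regress on the top-ranked scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n, grid = 200, 50
X = rng.normal(size=(n, grid)).cumsum(axis=1)      # rough functional predictors (random walks)
y = 0.8 * X[:, 30] - 0.5 * X[:, 10] + rng.normal(scale=0.5, size=n)

pca = PCA(n_components=10).fit(X)
scores = pca.transform(X)                          # FPC scores

# Supervised step: rank components by |corr(score, response)| instead of variance.
corrs = np.array([abs(np.corrcoef(scores[:, k], y)[0, 1]) for k in range(scores.shape[1])])
selected = np.argsort(corrs)[::-1][:3]             # keep the 3 most response-relevant FPCs

model = LinearRegression().fit(scores[:, selected], y)
print("selected components:", selected, "R^2:", round(model.score(scores[:, selected], y), 3))
```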

Keywords: supervised, functional principal component analysis, functional response, functional linear regression

Procedia PDF Downloads 39
21173 Study of Heat Conduction in Multicore Chips

Authors: K. N. Seetharamu, Naveen Teggi, Kiranakumar Dhavalagi, Narayana Kamath

Abstract:

A method of temperature calculation is developed to study the conditions leading to hot-spot occurrence on multicore chips. A physical model that captures the salient features of multicore chips is used for the analysis. The model consists of active and background cells laid out in a checkered pattern, and this pattern repeats itself within each fine-grain active cell. The die has three layers: i) the body, ii) the buried oxide layer, and iii) the wiring layer, stacked one above the other, with the heat source placed at the interface between the wiring and buried oxide layers. With this model, we propose an analytical method to calculate the target hotspot temperature, the heat flow to the top and bottom layers of the die, and the thermal resistance components at each granularity level, assuming appropriate values of the die dimensions and parameters. Finally, we attempt to find an easier graphical method for the calculation of the target hotspot temperature.
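
The analytical idea, a hotspot temperature obtained from heat splitting between an upward and a downward thermal-resistance path, can be sketched as below. The layer thicknesses, conductivities, areas, and power are placeholder values, not the paper's parameters.

```python
# Sketch: one-dimensional thermal-resistance estimate of a hotspot temperature.
# Heat generated at the wiring / buried-oxide interface splits between an upward
# path (wiring layer) and a downward path (buried oxide + body), both ending at
# the ambient. All dimensions and conductivities below are placeholder values.

def layer_resistance(thickness_m, conductivity_w_mk, area_m2):
    return thickness_m / (conductivity_w_mk * area_m2)     # R = t / (k * A)

area = (100e-6) ** 2                       # 100 um x 100 um active cell
r_wiring = layer_resistance(5e-6, 2.0, area)               # wiring stack (low effective k)
r_box = layer_resistance(0.5e-6, 1.4, area)                # buried oxide (SiO2)
r_body = layer_resistance(300e-6, 130.0, area)             # silicon body
r_spreader = 5.0                                           # package / heat-sink resistance (K/W)

r_up = r_wiring + r_spreader               # series path above the heat source
r_down = r_box + r_body + r_spreader       # series path below the heat source
r_total = 1.0 / (1.0 / r_up + 1.0 / r_down)                # the two paths act in parallel

q = 0.05                                   # heat dissipated by the hotspot cell (W)
t_ambient = 45.0
t_hotspot = t_ambient + q * r_total
q_up = q * r_down / (r_up + r_down)        # share of heat flowing to the top layer
print(round(t_hotspot, 1), round(q_up / q, 2))
```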

Keywords: checkered pattern, granularity level, heat conduction, multicore chips, target hotspot temperature

Procedia PDF Downloads 436
21172 Wind Turbine Control Performance Evaluation Based on Minimum-Variance Principles

Authors: Zheming Cao

Abstract:

Control loops are among the most important components in the wind turbine system. Product quality, operational safety, and economic performance are directly or indirectly connected to the performance of the control systems. This paper proposes a performance evaluation method based on the minimum-variance principle for wind turbine control systems. The method can be applied to the PID controller of the pitch control system in a wind turbine. The good performance results demonstrated in the paper were achieved by retuning and optimizing the controller settings based on the evaluation results. The concepts presented in this paper are illustrated with actual data from an industrial wind farm.
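
A minimum-variance performance index of the kind evaluated above (often called the Harris index) can be sketched from a controlled-variable time series alone. The AR-model order, the process time delay, and the synthetic example below are assumptions of this illustration.

```python
# Sketch of a minimum-variance (Harris) control performance index for a
# controlled variable y, e.g. the pitch-loop error, under an assumed time delay d.
import numpy as np

def harris_index(y, delay, ar_order=20):
    y = np.asarray(y, dtype=float) - np.mean(y)
    # Fit an AR model y[t] = sum_k a_k * y[t-k] + e[t] by least squares.
    rows = [y[t - ar_order:t][::-1] for t in range(ar_order, len(y))]
    Phi, target = np.array(rows), y[ar_order:]
    a, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    resid = target - Phi @ a
    sigma_e2 = np.var(resid)
    # Closed-loop impulse response (MA coefficients) from the fitted AR polynomial.
    psi = np.zeros(delay)
    psi[0] = 1.0
    for i in range(1, delay):
        psi[i] = sum(a[k] * psi[i - 1 - k] for k in range(min(i, ar_order)))
    sigma_mv2 = sigma_e2 * np.sum(psi**2)        # minimum achievable variance
    return sigma_mv2 / np.var(y)                 # 1 = optimal, near 0 = poor control

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    e = rng.normal(size=5000)
    y = np.convolve(e, [1.0, 0.8, 0.5, 0.3], mode="full")[:5000]   # sluggish loop output
    print(round(harris_index(y, delay=2), 2))
```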

Keywords: control performance, evaluation, minimum-variance, wind turbine

Procedia PDF Downloads 335
21171 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components

Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura

Abstract:

This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied to two different settings, namely the single-participant level and the group level. For the single-participant level, the EEG dataset used in the first stage and the EEG dataset used in the second stage originated from the same participant. For the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination, without repetition, of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction is 70.03 ± 8.14% for the single-participant level and 62.63 ± 6.07% for the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results show the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
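
The two-stage pipeline described above can be sketched as follows, with scikit-learn's FastICA standing in for the ICA decomposition and L1-penalized logistic regression standing in for Sparse Logistic Regression (SLR). The data, the number of components, and the "brain" component indices are synthetic placeholders.

```python
# Sketch of the two-stage pipeline: learn an unmixing matrix and a sparse classifier
# on one dataset, then reuse both on held-out data. Data below are synthetic.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials, n_channels, n_samples = 160, 32, 100
eeg = rng.normal(size=(n_trials, n_channels, n_samples))   # placeholder EEG epochs
labels = rng.integers(0, 8, size=n_trials)                 # eight movement directions

# Stage 1: fit ICA on concatenated training epochs and keep a subset of components
# assumed to reflect brain activity (the indices here are purely illustrative).
train = slice(0, 120)
ica = FastICA(n_components=16, random_state=0, max_iter=1000)
ica.fit(eeg[train].transpose(0, 2, 1).reshape(-1, n_channels))
brain_ics = [0, 2, 5, 7, 9]                                # selected "brain" components

def ic_features(epochs):
    # Apply the fixed unmixing matrix, keep the selected ICs, flatten their time series.
    sources = ica.transform(epochs.transpose(0, 2, 1).reshape(-1, n_channels))
    sources = sources.reshape(epochs.shape[0], n_samples, -1)[:, :, brain_ics]
    return sources.reshape(epochs.shape[0], -1)

# Stage 2: sparse classifier on the IC time series, reused on a held-out dataset.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=2000)
clf.fit(ic_features(eeg[train]), labels[train])
test = slice(120, 160)
print("held-out accuracy:", clf.score(ic_features(eeg[test]), labels[test]))
```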

Keywords: brain-computer interface, electroencephalography, finger motion decoding, independent component analysis, pseudo real-time motion decoding

Procedia PDF Downloads 115
21170 Numerical Modeling on the Vehicle Interior Noise Produced by Rain-the-Roof Excitation

Authors: Zilong Peng, Jun Fan

Abstract:

With the improvement of living standards, the requirements on the acoustic comfort of the vehicle interior environment are becoming higher. Interior noise produced by rain on the roof is a common phenomenon for vehicles, and it usually discourages conversation, especially in heavy rain. This paper presents some numerical results on rain-on-the-roof noise. The impact of each water drop is modeled as a short pulse, and the excitation locations on the roof are generated randomly. The vehicle body is simplified to a box enclosed by shells of a certain thickness. According to the main frequency components of the rain excitation, the analysis frequency range is divided into low-, high-, and middle-frequency domains, in which the vehicle body is modeled using the finite element method (FEM), statistical energy analysis (SEA), and the hybrid FE-SEA method, respectively. Furthermore, the effect of the spatial distribution density and the size of the rain drops on the sound pressure level is also discussed. These results may provide a guide for designing a quieter vehicle for such weather conditions.

Keywords: rain-the-roof noise, vehicle, finite element method, statistical energy analysis

Procedia PDF Downloads 159
21169 Principle Components Updates via Matrix Perturbations

Authors: Aiman Elragig, Hanan Dreiwi, Dung Ly, Idriss Elmabrook

Abstract:

This paper highlights a new approach to online principal component analysis (OPCA). Given a data matrix X ∈ R^(m×n), we characterise the online updates of its covariance as a matrix perturbation problem. Up to the principal components, it turns out that the online updates of the batch PCA can be captured by a symmetric matrix perturbation of the batch covariance matrix. We show that, as n → n0 >> 1, the batch covariance and its update become almost identical. Finally, we utilize our new setup of online updates to find a bound on the angle distance between the principal components of X and those of its update.
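
The setup can be illustrated with a short sketch: the arrival of a new observation is written as a symmetric rank-one perturbation of the batch covariance, and the principal subspaces before and after the update are compared via principal angles. The data, the update formula's normalization, and the subspace dimension are assumptions of this illustration, not the paper's derivation.

```python
# Sketch: treat a new observation as a symmetric perturbation of the batch
# covariance matrix and compare principal subspaces via principal angles.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(5)
n0, m = 500, 10
X = rng.normal(size=(n0, m)) @ rng.normal(size=(m, m))     # batch data, n0 >> 1

mu = X.mean(axis=0)
C = np.cov(X, rowvar=False)                                # batch covariance

x_new = rng.normal(size=m) @ rng.normal(size=(m, m))       # incoming observation
n1 = n0 + 1
mu_new = mu + (x_new - mu) / n1                            # running mean (for later updates)
# Symmetric rank-one perturbation form of the sample-covariance update n0 -> n0 + 1.
C_new = (n0 - 1) / (n1 - 1) * C + np.outer(x_new - mu, x_new - mu) / n1

k = 3                                                      # leading principal components
def leading_pcs(cov, k):
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argsort(vals)[::-1][:k]]

angles = subspace_angles(leading_pcs(C, k), leading_pcs(C_new, k))
print("largest principal angle (rad):", angles.max())      # small when n0 >> 1
```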

Keywords: online data updates, covariance matrix, online principle component analysis, matrix perturbation

Procedia PDF Downloads 167
21168 Packaging in the Design Synthesis of Novel Aircraft Configuration

Authors: Paul Okonkwo, Howard Smith

Abstract:

A study was conducted to estimate the size of the cabin and major aircraft components, as well as to detect and avoid interference between internally placed components and the external surface, during the conceptual design synthesis and optimisation used to explore the design space of a BWB. Sizing of the components follows the Bradley cabin sizing and rubber engine scaling procedures to size the cabin and engine, respectively. The interference detection and avoidance algorithm relies on the ability of the Class Shape Transform parameterisation technique to generate polynomial functions of the surfaces of a BWB aircraft configuration from the sizes of the cabin and internal objects using few variables. Interference detection is essential in the packaging of non-conventional configurations like the BWB because of the non-uniform airfoil-shaped sections and the resulting variation in internal space. The unique configuration increases the need for a methodology that prevents objects from being placed in locations that do not sufficiently enclose them within the geometry.

Keywords: packaging, optimisation, BWB, parameterisation, aircraft conceptual design

Procedia PDF Downloads 440
21167 Effective Editable Emoticon Description Schema for Mobile Applications

Authors: Jiwon Lee, Si-hwan Jang, Sanghyun Joo

Abstract:

The popularity of emoticons is on the rise as mobile messengers have become widespread. At the same time, a few problems have arisen due to the innate characteristics of emoticons. Too many emoticons make it difficult for people to select one that is well suited to their intention; conversely, users sometimes cannot find an emoticon that expresses their exact intention. Poor information delivery is another problem, since the majority of current emoticons focus on emotion delivery. In this situation, we propose a new concept of emoticons, editable emoticons, to solve the above drawbacks. Users can edit the components inside the proposed editable emoticon and send it to express their exact intention. By doing so, the number of editable emoticons can be kept reasonable while still expressing the user's exact intention. Furthermore, editable emoticons can be used as information deliverers according to the user's intention and editing skills. In this paper, we propose the concept of editable emoticons and a schema-based editable emoticon description method. The proposed description method is 200 times superior to the compared screen-capturing method in terms of transmission bandwidth. Further, the description method is designed for compatibility, since it follows the MPEG-UD international standard. The proposed editable emoticons can be exploited not only in mobile applications but also in various fields such as education and medicine.

Keywords: description schema, editable emoticon, emoticon transmission, mobile applications

Procedia PDF Downloads 269
21166 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of the Gaussian-Wishart emission model (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model based on training data; we then derive the predictive distribution that may be used to classify a new action. Third, the paper proposes a process for extracting appropriate spatial-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more efficient for human action recognition than existing methods.

Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, Variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset

Procedia PDF Downloads 320
21165 Effect of Particle Size on Sintering Characteristics of Injection Molded 316L Powder

Authors: H. Özkan Gülsoy, Antonyraj Arockiasamy

Abstract:

The application of powder injection molding technology to the fabrication of metallic and non-metallic components is of growing interest as the process considerably saves time and cost. Using this fabrication method, fully dense components are being prepared in various sizes. In this work, our effort is focused on studying the densification behavior of parts made using 316L stainless steel powders of different sizes. The metal powders were admixed with an adequate amount of polymeric compounds and molded as standard tensile bars. Solvent and thermal debinding were carried out, followed by sintering in an ultra-pure hydrogen atmosphere based on the differential scanning calorimetry (DSC) cycle. Mechanical property evaluation and microstructural characterization of the sintered specimens were performed using a universal Instron tensile testing machine, a Vickers microhardness tester, optical (OM) and scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), and X-ray diffraction. The results are compared and analyzed to predict the strengths and weaknesses of the test conditions.

Keywords: powder injection molding, sintering, particle size, stainless steels

Procedia PDF Downloads 335
21164 Friction and Wear Behavior of Zr-Nb Alloy Under Different Conditions

Authors: Bharat Kumar, Deepak Kumar, Vijay Chaudhry

Abstract:

Zirconium alloys are generally used for designing the core components of nuclear reactors due to their good mechanical and tribological properties. Some core components are subjected to flow-induced vibrations, resulting in wear of these components due to their interaction with one another. To simulate these conditions, low-amplitude reciprocating wear tests were conducted at room temperature and high temperature (260 degrees Celsius) between Zr-2.5Nb alloy and SS-410. The tests were conducted at a frequency range of 5 Hz to 25 Hz and an amplitude range of 200 µm to 600 µm. Friction and wear responses were recorded and correlated with the change in parameters. Worn surfaces were analysed using scanning electron microscopy (SEM) and an optical profilometer. Elemental changes on the worn surfaces were determined using energy dispersive spectroscopy (EDS). The coefficient of friction (COF) increases with increasing temperature and decreases with increasing frequency. Adhesive wear is found to be the dominant wear mechanism, and it increases at high temperature.

Keywords: nuclear reactor, Zr-2.5Nb, SS-410, friction and wear

Procedia PDF Downloads 45
21163 Modeling of Large Elasto-Plastic Deformations by the Coupled FE-EFGM

Authors: Azher Jameel, Ghulam Ashraf Harmain

Abstract:

In recent years, enriched techniques such as the extended finite element method (XFEM), the element free Galerkin method (EFGM), and the coupled finite element-element free Galerkin method have found wide application in modeling different types of discontinuities produced by cracks, contact surfaces, and bi-material interfaces. The extended finite element method faces severe mesh distortion issues when modeling large deformation problems. The element free Galerkin method does not have mesh distortion issues, but it is computationally more demanding than the finite element method. The coupled FE-EFGM proves to be an efficient numerical tool for modeling large deformation problems, as it exploits the advantages of both FEM and EFGM. The present paper employs the coupled FE-EFGM to model large elastoplastic deformations in bi-material engineering components. The large deformation occurring in the domain has been modeled using the total Lagrangian approach. The non-linear elastoplastic behavior of the material has been represented by the Ramberg-Osgood model. Elastic predictor-plastic corrector algorithms are used for the evaluation of stresses during large deformation. Finally, several numerical problems are solved by the coupled FE-EFGM to illustrate its applicability, efficiency, and accuracy in modeling large elastoplastic deformations in bi-material samples. The results obtained by the proposed technique are compared with the results obtained by XFEM and EFGM, and a remarkable agreement was observed between the results obtained by the three techniques.
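
The Ramberg-Osgood relation mentioned above can be evaluated directly for strain given stress and inverted numerically inside an elastic-predictor / plastic-corrector style update. The material constants below are generic placeholders, not values from the paper.

```python
# Sketch: Ramberg-Osgood stress-strain relation with a small Newton loop that
# inverts it (strain -> stress), starting from an elastic predictor.
# Material constants are generic placeholders.

E, sigma0, alpha, n = 200e3, 250.0, 3.0 / 7.0, 5.0   # MPa, MPa, -, -

def strain_from_stress(sigma):
    # Ramberg-Osgood: eps = sigma/E + alpha*(sigma/E)*(sigma/sigma0)**(n-1)
    return sigma / E + alpha * (sigma / E) * (abs(sigma) / sigma0) ** (n - 1)

def stress_from_strain(eps, tol=1e-10, max_iter=50):
    sigma = E * eps                                   # elastic predictor
    for _ in range(max_iter):
        f = strain_from_stress(sigma) - eps
        # d(eps)/d(sigma), used as the Newton tangent (corrector step)
        dfd = 1.0 / E + alpha * n / E * (abs(sigma) / sigma0) ** (n - 1)
        step = f / dfd
        sigma -= step
        if abs(step) < tol:
            break
    return sigma

if __name__ == "__main__":
    eps = 0.004
    sigma = stress_from_strain(eps)
    print(round(sigma, 1), round(strain_from_stress(sigma), 6))   # consistency check
```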

Keywords: XFEM, EFGM, coupled FE-EFGM, level sets, large deformation

Procedia PDF Downloads 415
21162 A Generic Approach to Reuse Unified Modeling Language Components Following an Agile Process

Authors: Rim Bouhaouel, Naoufel Kraïem, Zuhoor Al Khanjari

Abstract:

Unified Modeling Language (UML) is considered one of the most widespread modeling languages standardized by the Object Management Group (OMG). Therefore, the model-driven engineering (MDE) community attempts to provide reuse of UML diagrams rather than constructing them from scratch. A UML model appears according to a specific software development process. Existing model generation methods focus on different transformation techniques without considering the development process. Our work aims to construct a UML component from fragments of UML diagrams based on an agile method. We define a UML fragment as a portion of a UML diagram that expresses a business target. To guide the generation of fragments of UML models using an agile process, we need a flexible approach that adapts to agile changes and covers all its activities. We use a software product line (SPL) to derive a fragment of the agile process method. This paper explains our approach, named RECUP, for generating UML fragments following an agile process, gives an overview of its different aspects, and defines the different phases and artifacts.

Keywords: UML, component, fragment, agile, SPL

Procedia PDF Downloads 366
21161 Iris Recognition Based on the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body and has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient, and fast method is introduced to extract a set of discriminatory features using first-order gradient operators applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. First, the iris region is located; after that, it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it relies on a statistical analysis of the image to mark the eyelash and eyelid pixels as noise points. In order to cover the localization (variation) of the features, the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical, and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean-distance-based classifier was used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database, and the attained recognition accuracy reached up to 99.92%.
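
The block-wise directional gradient features and the Euclidean matching step described above can be sketched as follows. The block grid, the choice of low-order norm (p = 1), and the four gradient directions are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: block-wise average directional gradient features for a normalized
# 60 x 360 iris strip and Euclidean-distance matching against stored templates.
import numpy as np

def gradient_feature_vector(strip, blocks=(6, 36), p=1):
    rows, cols = strip.shape
    bh, bw = rows // blocks[0], cols // blocks[1]
    # First-order gradients along horizontal, vertical and the two diagonals.
    gx = np.abs(np.diff(strip, axis=1, append=strip[:, -1:]))
    gy = np.abs(np.diff(strip, axis=0, append=strip[-1:, :]))
    gd1 = np.abs(strip - np.roll(np.roll(strip, 1, axis=0), 1, axis=1))
    gd2 = np.abs(strip - np.roll(np.roll(strip, 1, axis=0), -1, axis=1))
    feats = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            sl = (slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw))
            for g in (gx, gy, gd1, gd2):
                # Low-order norm of the gradient components, averaged per block.
                feats.append((np.sum(g[sl] ** p) / g[sl].size) ** (1.0 / p))
    return np.array(feats)

def match(probe, templates):
    # Euclidean distance against stored template vectors; smallest wins.
    dists = [np.linalg.norm(probe - t) for t in templates]
    return int(np.argmin(dists)), min(dists)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    gallery = [rng.random((60, 360)) for _ in range(3)]        # stored iris strips
    templates = [gradient_feature_vector(s) for s in gallery]
    probe = gradient_feature_vector(gallery[1] + 0.01 * rng.random((60, 360)))
    print(match(probe, templates))                             # should match index 1
```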

Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric

Procedia PDF Downloads 301
21160 Research on the Calculation Method of Smartization Rate of Concrete Structure Building Construction

Authors: Hongyu Ye, Hong Zhang, Minjie Sun, Hongfang Xu

Abstract:

In the context of China's promotion of smart construction and building industrialization, there is a need for evaluation standards for the development of building industrialization based on assembly-type construction. However, the evaluation of smart construction remains a challenge in the industry's development process. This paper addresses this issue by proposing a calculation and evaluation method for the smartization rate of concrete structure building construction. The study focuses on examining the factors of smart equipment application and their impact on costs throughout the process of smart construction design, production, transfer, and construction. Based on this analysis, the paper presents an evaluation method for the smartization rate based on components. Furthermore, it introduces calculation methods for assessing the smartization rate of buildings. The paper also suggests a rapid calculation method for determining the smartization rate using Building Information Modeling (BIM) and information expression technology. The proposed research provides a foundation for the swift calculation of the smartization rate based on BIM and information technology. Ultimately, it aims to promote the development of smart construction and the construction of high-quality buildings in China.

Keywords: building industrialization, high quality building, smart construction, smartization rate, component

Procedia PDF Downloads 32
21159 Uncovering the Complex Structure of Building Design Process Based on Royal Institute of British Architects Plan of Work

Authors: Fawaz A. Binsarra, Halim Boussabaine

Abstract:

The notion of complexity science has been attracting the interest of researchers and professionals due to the need to enhance the efficiency of understanding the dynamics and interaction structure of complex systems. In addition, complexity analysis has been used as an approach to investigate complex systems that contain a large number of components interacting with each other to accomplish specific outcomes and give rise to specific emergent behavior. The design process is considered a complex activity that involves a large number of interacting components, which are classified as design tasks, the design team, and the components of the design process. These three main aspects of the building design process consist of several components that interact with each other as a dynamic system with complex information flow. In this paper, the goal is to uncover the complex structure of information interactions in the building design process. Investigating the information interactions of the Royal Institute of British Architects Plan of Work 2013 as a case study, and using network analysis software to model these interactions, uncovers the structure and complexity of the building design process and can significantly enhance the efficiency of its outcomes.

Keywords: complexity, process, building design, RIBA, design complexity, network, network analysis

Procedia PDF Downloads 490
21158 Influence of Optical Fluence Distribution on Photoacoustic Imaging

Authors: Mohamed K. Metwally, Sherif H. El-Gohary, Kyung Min Byun, Seung Moo Han, Soo Yeol Lee, Min Hyoung Cho, Gon Khang, Jinsung Cho, Tae-Seong Kim

Abstract:

Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy into a target (i.e., optical fluence). Consequently, the target temperature rises, and thermal expansion then occurs, which generates a PA signal. In general, most image reconstruction algorithms for PAI assume uniform fluence within the imaged object. However, it is known that the optical fluence distribution within the object is non-uniform, which could affect the reconstruction of PA images. In this study, we have investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest. The non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the non-uniform case is 23% wider than in the uniform case. The frequency spectrum of the PA signal due to the non-uniform fluence misses some high-frequency components in comparison to the uniform case. Consequently, the image reconstructed with the non-uniform fluence exhibits a strong smoothing effect.

Keywords: finite element method, fluence distribution, Monte Carlo method, photoacoustic imaging

Procedia PDF Downloads 353
21157 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts, and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF, a web toolkit for annotating RNA-related genomic features. The TARF web tool provides a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool can fulfill its three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, the overlap with RNA transcript components is identified, and the information is combined in one table that is available for copying and download. Summary statistics about ambiguous assignments are also provided. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates to show their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package. The distribution of the features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A), and RNA 5-methylcytosine (m5C), which are obtained from ChIP-Seq, MeRIP-Seq, and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A, and m5C are enriched near transcription start sites, stop codons, and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and it should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
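
The core annotation step, assigning BED-style features to transcript components by interval overlap, can be sketched as below. The tiny in-memory "gene database" and feature names are illustrative, not TARF's actual data structures.

```python
# Sketch of the core annotation step: assign RNA-related genomic features (BED-like
# intervals) to transcript components (5'UTR, CDS, 3'UTR) by interval overlap.

transcript_components = {
    "tx1": [("5UTR", 100, 200), ("CDS", 200, 800), ("3UTR", 800, 1000)],
    "tx2": [("5UTR", 2000, 2100), ("CDS", 2100, 2600), ("3UTR", 2600, 3000)],
}

features = [  # (name, start, end) on the same chromosome/strand, BED-style half-open
    ("m6A_site_1", 795, 796),
    ("m6A_site_2", 850, 851),
    ("m5C_site_1", 150, 151),
]

def annotate(features, transcript_components):
    table = []
    for name, f_start, f_end in features:
        hits = []
        for tx, comps in transcript_components.items():
            for comp, c_start, c_end in comps:
                if f_start < c_end and f_end > c_start:      # interval overlap test
                    hits.append((tx, comp))
        # A feature overlapping several transcripts/components is an "ambiguous assignment".
        table.append((name, hits if hits else [("unassigned", None)]))
    return table

for row in annotate(features, transcript_components):
    print(row)
```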

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 181
21156 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, the following hurdle must be overcome: the costs of developing and running a BT in SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in SCS may help managers decide whether BT offers an economic advantage. The purpose of this research is to identify the main BT installation cost components in SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail in order to utilize them in the prediction process. The second objective is to determine the suitable supervised learning technique to predict the costs of developing and running BT in SCS for a particular case study. The last aim is to investigate how the cost of running BT can be incorporated into the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, supervised learning is a method to set up the data, train the chosen model, and make predictions of an outcome measurement based on a set of previously unseen input data. The following steps are conducted to achieve the objectives of this study. The first step is a literature review to identify the different cost components of BT installation in SCS. Based on the literature review, we choose supervised learning methods that are suitable for BT installation cost prediction in SCS. According to the literature review, some supervised learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost are the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). The third step is choosing a case study to feed data into the models. Finally, we will identify the model with the best predictive performance to find the minimum BT installation costs in SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in SCS with the help of supervised learning algorithms. As a first attempt, we will select a case study in the field of BT-enabled SCS and then use supervised learning algorithms to predict the BT installation cost in SCS. We continue to find the best predictive performance for developing and running BT in SCS. Finally, the paper will be presented at the conference.
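
One of the supervised learning algorithms named above, Support Vector Regression, can be sketched on the cost-prediction task as follows. The cost-component features and the synthetic data are placeholders for a real BT-enabled SCS case study, not results from this work.

```python
# Sketch: predicting BT installation cost in an SCS from cost-component features
# with Support Vector Regression (SVR). Feature names and data are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 300
# Hypothetical cost components: nodes, integration effort, licences, training, energy.
X = rng.uniform(0, 1, size=(n, 5))
cost = 50 + X @ np.array([120, 80, 40, 25, 15]) + rng.normal(scale=5, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, cost, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X_train, y_train)
print("R^2 on held-out projects:", round(model.score(X_test, y_test), 3))
```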

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 95