Search results for: atomic-scale finite element method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21069


17559 The Dao of Political Economy - A Holistic Perspective

Authors: Tao Peng

Abstract:

This paper presents a holistic model of political economy based on Daoism, the foundational philosophy of classical Chinese epistemology. Daoism is both comprehensive and subtle in its manifestations and applications in all aspects of nature and society. Based on the Daoist creation theory of the universe, life theory, and five-element functioning theory, a holistic model in economics with minimal assumptions and independent of ideology is constructed. Under this framework, different schools of economics, such as the neo-liberal, Marxist, and Austrian schools, are explored and new light is shed on them. Economic and financial predictions can be realized through applications of Qi Men Dun Jia. This framework can provide guidelines and inspiration for economic modelling, economic policy formulation, and strategy development, and can guide society towards a more sustainable future.

Keywords: daoism, economics, holistic, philosophy

Procedia PDF Downloads 86
17558 Palliative Care and Persons with Intellectual Disabilities

Authors: Miriam Colleran, Barbara Sheehy-Skeffington

Abstract:

Background: To explore whether there are unique features in the palliative care needs of patients with intellectual disability that may impact planning for resource and service provision for them. Aim: The purpose of this practice review is to assess the indications for, numbers of, and outcomes of care for adults with intellectual disabilities referred to a specialist palliative care service over a five-year period. Service utilization aspects considered included the frequency of home visits by a specialist palliative care doctor or clinical nurse specialist and the number of hospice admissions that occurred for the patients. Method: A retrospective review was carried out of persons 18 years and older with intellectual disabilities referred to a specialist palliative care service over a five-year period from 30.11.2018 to 29.11.2023. A manual review of the register was carried out using key terms, namely known residential care and community dwelling places of service providers for persons with intellectual disabilities in the area and registered diagnoses, in addition to the patients known to the clinicians who had intellectual disabilities. Results: 25 referrals of 23 persons with intellectual disabilities were made to the specialist palliative care service during that time; however, this may be an underestimate. 15 women and 8 men were referred, with an age range of 19 to 86 years. The majority had a diagnosis of Down's syndrome (Trisomy 21). 5 patients referred did not have home visits from the specialist palliative care team. The specialist palliative care team made between 2 and 48 phone calls per person regarding this cohort of patients. The outcomes for the patients included discharge and death. The majority of patients who died did so in the community; one person, however, died in hospital, and another died in a hospice out of area.
Conclusion: Providing specialist palliative care for adults with intellectual disabilities is an important element of palliative care. The dominance of the community as the place of death for these patients and the limited number of patients dying in either hospice or hospital are noteworthy. Further research and education are necessary to inform, support, and empower specialist palliative care professionals in optimizing palliative and end-of-life care for persons with intellectual disabilities and to inform service development and provision.

Keywords: intellectual disability, palliative care

Procedia PDF Downloads 70
17557 Wearable Music: Generation of Costumes from Music and Generative Art and Wearing Them by 3-Way Projectors

Authors: Noriki Amano

Abstract:

The final goal of this study is to create another way for people to enjoy music, through the performance of 'Wearable Music'. Concretely speaking, we generate colorful costumes in real time from music and realize their wearing by projecting them onto a person. For this purpose, we propose three methods in this study: first, a method of giving color to music in a three-dimensional way; second, a method of generating images of costumes from music; and third, a method of wearing the images of music. In particular, this study stands out from related work in that we generate images of unique costumes from music and make it possible to wear them. We use the technique of generative art to generate the images of unique costumes and project the images onto fog generated around a person from three directions using projectors. From this study, we obtain a way to enjoy music as something 'wearable'. Furthermore, we also gain the prospect of unconventional entertainment based on the fusion of music and costumes.

Keywords: entertainment computing, costumes, music, generative programming

Procedia PDF Downloads 173
17556 Taguchi Method for Analyzing a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

Logistics network design is known as one of the strategic decision problems. As such problems are NP-hard, traditional methods fail to find an optimal solution in a short time. In this study, we incorporate reverse flows through an integrated design of a forward/reverse supply chain network, formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm based on a revised random-path direct encoding method is adopted as the solution methodology. Every algorithm has parameters that need to be investigated to reveal its best performance. In this regard, the Taguchi method is adapted to identify the optimum operating conditions of the proposed memetic algorithm and thereby improve the results. Four factors are considered, namely population size, crossover rate, local search iterations, and number of iterations. Analyzing these parameters and the resulting improvements is the focus of this research.
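
The parameter-tuning step can be illustrated concretely. The sketch below runs a Taguchi main-effects analysis on a standard L9(3^4) orthogonal array; the nine response values (total cost returned by the tuned algorithm in each run) and the smaller-the-better S/N formulation with a single replicate per run are invented placeholders, not data from the paper.

```python
import math

# L9(3^4) orthogonal array: 9 runs x 4 factors, levels coded 1..3.
# In the paper's setting the factors would be population size, crossover
# rate, local search iterations, and number of iterations.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# One invented response (total cost, to be minimised) per run.
responses = [412.0, 398.0, 405.0, 389.0, 401.0, 394.0, 385.0, 396.0, 390.0]

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-the-better response, one replicate."""
    return -10.0 * math.log10(y * y)

sn = [sn_smaller_is_better(y) for y in responses]

def main_effects(array, sn_ratios, factor):
    """Mean S/N ratio at each level of one factor (higher is better)."""
    out = {}
    for level in (1, 2, 3):
        vals = [s for row, s in zip(array, sn_ratios) if row[factor] == level]
        out[level] = sum(vals) / len(vals)
    return out

# Recommended setting: the level with the highest mean S/N per factor.
best_levels = []
for f in range(4):
    effects = main_effects(L9, sn, f)
    best_levels.append(max(effects, key=effects.get))
print("best level per factor:", best_levels)
```

With replicated runs, the S/N ratio would average over the replicates before the main-effects step.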

Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method

Procedia PDF Downloads 187
17555 Bright, Dark N-Soliton Solution of Fokas-Lenells Equation Using Hirota Bilinearization Method

Authors: Sagardeep Talukdar, Riki Dutta, Gautam Kumar Saharia, Sudipta Nandy

Abstract:

In non-linear optics, the Fokas-Lenells equation (FLE) is a well-known integrable equation that describes how ultrashort pulses propagate along an optical fiber. It admits localized wave solutions, just like any other integrable equation. We apply the Hirota bilinearization method to obtain the soliton solutions of the FLE. The proposed bilinearization makes use of an auxiliary function. We apply the method to the FLE with a vanishing boundary condition, that is, to obtain a bright soliton solution. We have obtained bright 1-soliton and 2-soliton solutions and propose a scheme for obtaining an N-soliton solution. We have used an additional parameter that is responsible for a shift in the position of the soliton. Further analysis of the 2-soliton solution is carried out asymptotically. Under the non-vanishing boundary condition, we obtain the dark 1-soliton solution. We find that the suggested bilinearization approach, which makes use of the auxiliary function, greatly simplifies the process while still producing the desired outcome. We believe that the present analysis will be helpful in understanding how the FLE is used in nonlinear optics and other areas of physics.

Keywords: asymptotic analysis, Fokas-Lenells equation, Hirota bilinearization method, soliton

Procedia PDF Downloads 112
17554 Risk Management in Industrial Supervision Projects

Authors: Érick Aragão Ribeiro, George André Pereira Thé, José Marques Soares

Abstract:

Several problems in industrial supervision software development projects may lead to the delay or cancellation of projects. These problems can be avoided or contained by methods for the identification, analysis, and control of risks, which can give an overview of the possible problems that can happen in a project and of their immediate solutions. Therefore, we propose a risk management method applied to the teaching and development of industrial supervision software. The method was developed through a literature review and previous projects; it can be divided into management phases and has basic features that are validated through experimental research carried out with mechatronics engineering students and professionals. Management is conducted through the stages of identification, analysis, planning, monitoring, control, and communication of risks. Programmers use a method of prioritizing risks that considers the severity and the probability of occurrence of each risk. The outputs of the method indicate which risks have occurred or are about to occur. The first results indicate which risks occur at different stages of the project and which risks have a high probability of occurring. The results show the efficiency of the proposed method compared to other methods, demonstrating the improvement of software quality and guiding developers in their decisions. This new way of developing supervision software helps students identify design problems, evaluate the software developed, and propose effective solutions. We conclude that risk management optimizes the development of industrial process control software and provides higher quality in the product.

Keywords: supervision software, risk management, industrial supervision, project management

Procedia PDF Downloads 356
17553 Satellite Imagery Classification Based on Deep Convolution Network

Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu

Abstract:

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic algorithm-based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results of the proposed hyper-parameter search method show that it guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
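
The hyper-parameter search idea can be sketched as follows. This is a hedged toy: a surrogate fitness function stands in for the expensive "train the DCNN and measure validation accuracy" step, and the search space, surrogate optimum, and GA settings are all invented for illustration.

```python
import random

random.seed(0)

SPACE = {
    "log10_lr": [-4, -3, -2, -1],   # learning-rate exponent
    "filters":  [16, 32, 64, 128],  # filters in the first inception block
    "depth":    [2, 4, 6, 8],       # number of stacked modules
}
KEYS = list(SPACE)

def fitness(ind):
    # Toy surrogate, higher is better; peaks at an invented optimum
    # (lr = 1e-3, 64 filters, depth 6). A real run would train a network.
    return -((ind["log10_lr"] + 3) ** 2
             + (ind["filters"] - 64) ** 2 / 1000.0
             + (ind["depth"] - 6) ** 2 / 10.0)

def random_ind():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice((a[k], b[k])) for k in KEYS}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

pop = [random_ind() for _ in range(20)]
for _ in range(30):                        # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:4]                        # elitism: keep the best four
    children = [mutate(crossover(*random.sample(elite, 2)))
                for _ in range(len(pop) - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print("best hyper-parameters found:", best)
```

Elitism guarantees the best individual never degrades between generations, which matters when every fitness evaluation is a full training run.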

Keywords: satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization

Procedia PDF Downloads 301
17552 Numerical Calculation of Dynamic Response of Catamaran Vessels Based on 3D Green Function Method

Authors: Md. Moinul Islam, N. M. Golam Zakaria

Abstract:

Seakeeping analysis of catamaran vessels in the early stages of design has become an important issue, as it dictates the seakeeping characteristics and ensures safe navigation during the voyage. In the present paper, a 3D numerical method for the seakeeping prediction of catamaran vessels is presented using the 3D Green function method. Both steady and unsteady potential flow problems are dealt with here. Using 3D linearized potential theory, the dynamic wave loads and the subsequent response of the vessel are computed. For validation of the numerical procedure, a catamaran composed of twin Wigley-form demi-hulls is used. The results of the present calculation are compared with the available experimental data and also with other calculations. The numerical procedure is also carried out for an NPL-based round-bilge catamaran, and hydrodynamic coefficients along with heave and pitch motion responses are presented for various Froude numbers. The results obtained by the present numerical method are found to be in fairly good agreement with the available data, so it can be used as a design tool for predicting the seakeeping behavior of catamaran ships in waves.

Keywords: catamaran, hydrodynamic coefficients, motion response, 3D Green function

Procedia PDF Downloads 221
17551 New Approach to Construct Phylogenetic Tree

Authors: Ouafae Baida, Najma Hamzaoui, Maha Akbib, Abdelfettah Sedqui, Abdelouahid Lyhyaoui

Abstract:

Numerous scientific works present various methods of data analysis for several domains, especially the comparison of classifications. In our recent work, we presented a new approach to help the user choose the best classification method from the results obtained by each method, based on the distances between the classification trees. The result of our approach took the form of a dendrogram containing the methods as a succession of connections. This approach is much needed in phylogenetic analysis. This discipline analyzes the sequences of biological macromolecules for information on the evolutionary history of living beings, including their relationships. The product of a phylogenetic analysis is a phylogenetic tree. In this paper, we recommend a new method of constructing the phylogenetic tree based on the comparison of different classifications obtained from different molecular genes.
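
As a minimal illustration of building a tree from pairwise distances, the sketch below performs plain average-linkage (UPGMA-style) agglomeration over an invented four-taxon distance matrix; it is a stand-in for, not a reproduction of, the proposed comparison-of-classifications method.

```python
import itertools

# Symmetric pairwise distances between four taxa, invented for the sketch.
dist = {
    ("A", "B"): 2.0, ("A", "C"): 6.0, ("A", "D"): 7.0,
    ("B", "C"): 6.0, ("B", "D"): 7.0, ("C", "D"): 3.0,
}

def cluster_distance(x, y):
    """Average leaf-to-leaf distance between two clusters (tuples of taxa)."""
    pairs = [tuple(sorted((a, b))) for a in x for b in y]
    return sum(dist[p] for p in pairs) / len(pairs)

clusters = [("A",), ("B",), ("C",), ("D",)]
merges = []
while len(clusters) > 1:
    # Merge the closest pair of clusters at every step.
    i, j = min(itertools.combinations(range(len(clusters)), 2),
               key=lambda ij: cluster_distance(clusters[ij[0]], clusters[ij[1]]))
    merged = clusters[i] + clusters[j]
    merges.append(merged)
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

# The merge order encodes the tree: ((A,B),(C,D)) joined at the root.
print("merge order:", merges)
```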

Keywords: hierarchical classification, classification methods, structure of tree, genes, phylogenetic analysis

Procedia PDF Downloads 511
17550 A Sectional Control Method to Decrease the Accumulated Survey Error of Tunnel Installation Control Network

Authors: Yinggang Guo, Zongchun Li

Abstract:

In order to decrease the accumulated survey error of the tunnel installation control network of a particle accelerator, a sectional control method is proposed. Firstly, the accumulation rule of positional error with the length of the control network is obtained by simulation calculation according to the shape of the tunnel installation control network. Then, the RMS of the horizontal positional precision of the tunnel backbone control network is taken as the threshold. When the accumulated error is larger than the threshold, the tunnel installation control network should be divided into reasonable subsections. On each segment, the middle survey station is taken as the datum for an independent adjustment calculation. Finally, by taking the backbone control points as faint datums, a weighted partial-parameters adjustment is performed with the adjustment results of each segment and the coordinates of the backbone control points. The subsections are joined and unified into the global coordinate system in the adjustment process. An installation control network of a linac with a length of 1.6 km is simulated. The RMS of the positional deviation of the proposed method is 2.583 mm, and the RMS of the difference in positional deviation between adjacent points reaches 0.035 mm. Experimental results show that the proposed sectional control method can not only effectively decrease the accumulated survey error but also guarantee the relative positional precision of the installation control network, so it can be applied in the data processing of tunnel installation control networks, especially for large particle accelerators.
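
The first step, obtaining the accumulation rule by simulation, can be mimicked with a toy model. The sketch below treats the network as a chain of stations with independent observation noise and compares the end-point error spread of one continuous run against that of independent 100-station sections; all numbers are illustrative, not the paper's linac parameters.

```python
import random
import statistics

# Each station-to-station observation adds independent noise, so the
# end-point error grows roughly with the square root of the number of
# stations; tying each section to its own datum caps that growth.
random.seed(42)

SIGMA = 0.1        # mm of noise per station-to-station observation
N_STATIONS = 400
N_TRIALS = 2000

def end_error(n):
    """Accumulated positional error after n noisy observations."""
    return sum(random.gauss(0.0, SIGMA) for _ in range(n))

# One continuous run over all stations:
spread_full = statistics.stdev(end_error(N_STATIONS) for _ in range(N_TRIALS))

# Sectioned survey: four independent segments, each on its own datum, so
# the worst accumulated error is that of a single 100-station segment.
spread_section = statistics.stdev(end_error(N_STATIONS // 4)
                                  for _ in range(N_TRIALS))

print(f"full-run RMS ~ {spread_full:.2f} mm, "
      f"per-section RMS ~ {spread_section:.2f} mm")
```

With these numbers the full run spreads about sqrt(400) x 0.1 = 2 mm, while each section spreads about 1 mm, which is the qualitative effect the sectional method exploits.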

Keywords: alignment, tunnel installation control network, accumulated survey error, sectional control method, datum

Procedia PDF Downloads 191
17549 Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement

Authors: M. Z. Kurian, M. V. Chidananda Murthy, H. S. Guruprasad

Abstract:

An approach to super-resolving a low-resolution (LR) image is presented in this paper, which is very useful in multimedia communication, medical image enhancement, and satellite image enhancement, where a clear view of the information in the image is needed. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. The method uses the B-spline technique and crispening. The work is evaluated qualitatively and quantitatively using mean square error (MSE) and peak signal-to-noise ratio (PSNR). The method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms are tested in the presence of different types and levels of noise.
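
The two quality metrics and the interpolation kernel can be illustrated in a few lines. The sketch below implements MSE, PSNR, and a cubic B-spline reconstruction of a 1-D signal at double resolution; the prefiltering step that true interpolating B-splines require is omitted, so this is the smoothing (approximating) variant, and the sample row is invented.

```python
import math

def mse(a, b):
    """Mean square error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical signals)."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)

def bspline3(x):
    """Cubic B-spline kernel, support [-2, 2]."""
    x = abs(x)
    if x < 1:
        return 2.0 / 3.0 - x * x + x ** 3 / 2.0
    if x < 2:
        return (2.0 - x) ** 3 / 6.0
    return 0.0

def upsample2(samples):
    """Evaluate the B-spline sum at half-integer positions (2x resolution).

    Without the usual prefilter the curve smooths rather than interpolates
    the samples, and the ends show boundary truncation.
    """
    out = []
    for i in range(2 * len(samples) - 1):
        t = i / 2.0
        out.append(sum(s * bspline3(t - k) for k, s in enumerate(samples)))
    return out

lr = [0.0, 64.0, 128.0, 192.0, 255.0]  # a tiny invented 1-D "image row"
hr = upsample2(lr)
print("upsampled row:", [round(v, 1) for v in hr])
```

A 2-D image would apply the same kernel separably along rows and columns, and crispening (unsharp masking) would then be added to restore edge contrast.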

Keywords: advanced b-spline, image super-resolution, mean square error (MSE), peak signal to noise ratio (PSNR), resolution down converter

Procedia PDF Downloads 399
17548 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape

Authors: Chen Bo, Wen Zengping

Abstract:

Based on the results of previous studies, we focus on real ground motion selection and scaling methods for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately to make them compatible with the site-specific hazard level considered. Thus, an optimized selection and scaling method is established that uses not only a Monte Carlo simulation method to create stochastic simulation spectra considering the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its applications in structural fragility analysis are demonstrated through case studies. Compared to the previous scheme with no consideration of the uncertainty of the target spectrum, the method shown here ensures that the selected records are in good agreement with the median value, standard deviation, and spectral correlation of the target spectrum, and it fully reveals the uncertainty inherent in the site-specific hazard level. Meanwhile, it can help improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work can provide a reasonable and reliable basis for structural seismic evaluation under scenario earthquake environments.
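
The stochastic-spectrum idea can be sketched as follows, assuming (purely for brevity) independent lognormal ordinates per period rather than the full multivariate lognormal with spectral correlation used in the paper; the target medians and dispersions below are invented.

```python
import math
import random
import statistics

# Invented target spectrum: medians (in g) and log-standard deviations at
# a few periods.
random.seed(7)

periods = [0.1, 0.2, 0.5, 1.0, 2.0]      # s
median_sa = [0.8, 1.0, 0.7, 0.4, 0.2]    # g
beta = [0.5, 0.5, 0.6, 0.6, 0.7]         # dispersion (log-std) per period

def sample_spectrum():
    """One simulated spectrum: a lognormal ordinate at each period."""
    return [random.lognormvariate(math.log(m), b)
            for m, b in zip(median_sa, beta)]

sims = [sample_spectrum() for _ in range(5000)]

# The simulated medians should reproduce the target medians.
sim_medians = [statistics.median(s[i] for s in sims)
               for i in range(len(periods))]
print("target medians:   ", median_sa)
print("simulated medians:", [round(m, 3) for m in sim_medians])
```

Record selection would then match candidate spectra against these simulated spectra instead of a single deterministic target.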

Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape

Procedia PDF Downloads 293
17547 Multi-Response Optimization of CNC Milling Parameters Using Taguchi Based Grey Relational Analysis for AA6061 T6 Aluminium Alloy

Authors: Varsha Singh, Kishan Fuse

Abstract:

This paper presents a study of the grey-Taguchi method for optimizing the CNC milling parameters of AA6061-T6 aluminium alloy. The grey-Taguchi method combines Taguchi-method-based design of experiments (DOE) with grey relational analysis (GRA). Multi-response optimization of quality characteristics such as surface roughness, material removal rate, and cutting forces is performed using GRA. The milling parameters considered in the experiments are cutting speed, feed per tooth, and depth of cut, each at three levels. A grey relational grade is used to estimate overall quality-characteristic performance. Taguchi's L9 orthogonal array is used for the design of experiments, and MINITAB 17 software is used for optimization. Analysis of variance (ANOVA) is used to identify the most influential parameter. The experimental results show that grey relational analysis is an effective method for optimizing multi-response characteristics. The optimum results are finally validated by performing a confirmation test.
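
The grey relational computation itself is compact enough to sketch. Below, nine invented response rows (surface roughness, material removal rate, cutting force) are normalised, converted to grey relational coefficients with the customary distinguishing coefficient of 0.5, and averaged into a grade per run; none of the values come from the paper.

```python
# Nine invented experimental rows: (Ra [um], MRR [mm^3/min], F [N]).
runs = [
    (1.82, 310, 142), (1.40, 455, 160), (1.15, 598, 188),
    (1.66, 380, 150), (1.28, 520, 171), (1.52, 430, 158),
    (1.21, 560, 180), (1.47, 470, 166), (1.33, 505, 173),
]

def normalise(col, larger_is_better):
    lo, hi = min(col), max(col)
    if larger_is_better:
        return [(x - lo) / (hi - lo) for x in col]
    return [(hi - x) / (hi - lo) for x in col]

cols = list(zip(*runs))
norm = [
    normalise(cols[0], False),  # surface roughness: smaller is better
    normalise(cols[1], True),   # material removal rate: larger is better
    normalise(cols[2], False),  # cutting force: smaller is better
]

ZETA = 0.5  # distinguishing coefficient, the customary default

def grey_coefficient(deviation):
    # With normalised data the min deviation is 0 and the max is 1,
    # so the coefficient reduces to zeta / (deviation + zeta).
    return ZETA / (deviation + ZETA)

# Grey relational grade per run: mean coefficient over the responses.
grades = [
    sum(grey_coefficient(1.0 - norm[j][i]) for j in range(3)) / 3.0
    for i in range(len(runs))
]
best_run = max(range(len(runs)), key=grades.__getitem__)
print("grey relational grades:", [round(g, 3) for g in grades])
print("best run index:", best_run)
```

The run with the highest grade is the best compromise across all three responses; ANOVA on the grades would then apportion the influence of each milling parameter.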

Keywords: ANOVA, CNC milling, grey relational analysis, multi-response optimization

Procedia PDF Downloads 308
17546 A Review of Fractal Dimension Computing Methods Applied to Wear Particles

Authors: Manish Kumar Thakur, Subrata Kumar Ghosh

Abstract:

Various types of particles found in lubricants may be characterized by their fractal dimension. Some of the available methods are the yard-stick (or structured walk) method and the box-counting method. This paper presents a review of developments and progress in fractal dimension computing methods as applied to characterizing the surfaces of wear particles. An overview of these methods, their implementation, their advantages, and their limits is also presented. It is accepted that wear particles carry major information about the wear and friction of materials. Morphological analysis of wear particles from a lubricant is a very effective way of monitoring machine condition, and fractal dimension methods are used to characterize the morphology of the particles found. The fractal dimension is very useful in analyzing the complexity of irregular surfaces. The aim of this review is to bring together the fractal methods applicable to wear particles.
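
The box-counting method mentioned above is simple to sketch. The example below estimates the dimension of a plain diagonal line (which should come out near 1) by counting occupied boxes at several scales and fitting the log-log slope; a digitized wear-particle boundary would simply replace the point set.

```python
import math

def box_count(points, box):
    """Number of boxes of side `box` containing at least one point."""
    return len({(int(x // box), int(y // box)) for x, y in points})

# A straight diagonal line sampled densely in the unit square.
points = [(i / 10000.0, i / 10000.0) for i in range(10000)]

sizes = [1 / 2, 1 / 4, 1 / 8, 1 / 16, 1 / 32, 1 / 64]
logs = [(math.log(1.0 / s), math.log(box_count(points, s))) for s in sizes]

# Least-squares slope of log N(s) versus log(1/s) estimates the dimension.
n = len(logs)
sx = sum(x for x, _ in logs)
sy = sum(y for _, y in logs)
sxx = sum(x * x for x, _ in logs)
sxy = sum(x * y for x, y in logs)
dimension = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print("estimated box-counting dimension:", round(dimension, 3))
```

For a rough particle boundary the slope typically lands between 1 and 2, which is what makes the dimension a useful roughness descriptor.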

Keywords: fractal dimension, morphological analysis, wear, wear particles

Procedia PDF Downloads 490
17545 Usability in E-Commerce Websites: Results of Eye Tracking Evaluations

Authors: Beste Kaysı, Yasemin Topaloğlu

Abstract:

Usability is one of the most important quality attributes for web-based information systems, and for e-commerce applications it becomes even more prominent. In this study, we aimed to explore the features that experienced users seek in e-commerce applications. We used the eye tracking method in our evaluations. Eye movement data obtained from eye tracking were analyzed based on task completion time and number of fixations, as well as heat map and gaze plot measures. The results of the analysis show that participants' eye movements are too static in certain areas and that their areas of interest are scattered across many different places. This was found to cause users to fail to complete their transactions. Based on the findings, we outline the issues that impair the usability of e-commerce websites and propose solutions to address them. In this way, it is expected that e-commerce sites can be developed that leave experienced users more satisfied.
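
The "number of fixations" metric presupposes a fixation-detection step. Below is a sketch of the standard dispersion-threshold (I-DT) idea; the thresholds and the synthetic gaze samples are invented, and a real analysis would work on calibrated, time-stamped recordings.

```python
def idt_fixations(samples, max_disp=25.0, min_len=10):
    """Return (start, end) sample-index pairs (end exclusive) of fixations."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i + 1
        # Grow the window until its x-spread + y-spread exceeds the
        # dispersion threshold.
        while j <= len(samples):
            window = samples[i:j]
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
                break
            j += 1
        j -= 1  # last window that stayed within the dispersion threshold
        if j - i >= min_len:
            fixations.append((i, j))
        i = max(j, i + 1)
    return fixations

# Synthetic gaze: 30 samples near one point, a 5-sample saccade, 20 more.
gaze = [(100 + (k % 3), 200 + (k % 2)) for k in range(30)]
gaze += [(100 + 40 * k, 200 + 30 * k) for k in range(5)]
gaze += [(300 + (k % 3), 350 + (k % 2)) for k in range(20)]
print("fixations (start, end):", idt_fixations(gaze))
```

Heat maps are then built by accumulating fixation positions (often weighted by duration) over all participants.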

Keywords: e-commerce websites, eye tracking method, usability, website evaluations

Procedia PDF Downloads 182
17544 Reliability Qualification Test Plan Derivation Method for Weibull Distributed Products

Authors: Ping Jiang, Yunyan Xing, Dian Zhang, Bo Guo

Abstract:

The reliability qualification test (RQT) is widely used in product development to qualify whether a product meets predetermined reliability requirements, which are mainly described in terms of reliability indices, for example, MTBF (Mean Time Between Failures). In engineering practice, RQT plans are mandatorily referred to standards such as MIL-STD-781 or GJB899A-2009. But these conventional RQT plans are not preferred, as they often require long test times or carry high risks for both producer and consumer, because the methods in the standards use only the test data of the product itself. The standards also usually assume that the product lifetime is exponentially distributed, which is not suitable for complex products other than electronics. So it is desirable to develop an RQT plan derivation method that safely shortens test time while keeping the two risks under control. To this end, an RQT plan derivation method is developed for products whose lifetimes follow a Weibull distribution. The merit of the method is that expert judgment is taken into account. This is implemented by applying the Bayesian method, which translates the expert judgment into prior information on product reliability. The producer's risk and the consumer's risk are then calculated accordingly. Procedures to derive RQT plans are also proposed in this paper. As extra information and expert judgment are added to the derivation, the derived test plans have the potential to shorten the required test time and to carry satisfactorily low risks for both producer and consumer, compared with conventional test plans. A case study is provided to show that, when expert judgment is used in deriving product test plans, the proposed method is capable of finding ideal test plans that not only reduce the two risks but also shorten the required test time.
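
For a fixed-duration test plan, the two risks can be computed directly once a Weibull lifetime model is assumed. The sketch below evaluates the producer's and consumer's risks for an invented accept-if-at-most-c-failures plan; all plan parameters and characteristic lives are illustrative, and the paper's Bayesian prior-updating step is not reproduced here.

```python
import math

def weibull_fail_prob(t, eta, beta):
    """P(a unit fails by time t) for Weibull(scale eta, shape beta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def binom_cdf(c, n, p):
    """P(at most c failures among n independent units)."""
    return sum(math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)
               for k in range(c + 1))

def risks(n, t, c, beta, eta_good, eta_bad):
    """Producer's and consumer's risks of an accept-if-<=c-failures plan."""
    p_accept_good = binom_cdf(c, n, weibull_fail_prob(t, eta_good, beta))
    p_accept_bad = binom_cdf(c, n, weibull_fail_prob(t, eta_bad, beta))
    # Producer's risk: rejecting a good product.
    # Consumer's risk: accepting a bad product.
    return 1.0 - p_accept_good, p_accept_bad

# 20 units on test for 500 h, accept if at most 2 fail; shape beta = 1.5,
# "good" characteristic life 5000 h, "bad" characteristic life 1500 h.
alpha, beta_risk = risks(n=20, t=500.0, c=2, beta=1.5,
                         eta_good=5000.0, eta_bad=1500.0)
print(f"producer's risk = {alpha:.3f}, consumer's risk = {beta_risk:.3f}")
```

A plan-derivation procedure would search over (n, t, c) for the cheapest combination keeping both risks below their agreed limits, with the Bayesian prior shifting the failure probabilities used above.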

Keywords: expert judgment, reliability qualification test, test plan derivation, producer’s risk, consumer’s risk

Procedia PDF Downloads 137
17543 A Theoretical Study of Accelerating Neutrons in LINAC Using Magnetic Gradient Method

Authors: Chunduru Amareswara Prasad

Abstract:

The main aim of this proposal is to reveal the secrets of the universe by accelerating neutrons. In brief, the proposal discusses the possibility of making neutrons accelerate with the help of thermal energy and magnetic energy under controlled conditions, which could be helpful in revealing hidden secrets of the universe, namely dark energy, and in finding properties of the Higgs boson. The paper mainly discusses accelerating neutrons to near the velocity of light in a LINAC, using magnetic energy supplied by magnetic pressurizers. A center-of-mass energy of 94 GeV (~0.5c) for two colliding neutron beams can be achieved using this method. Conventional ways of accelerating neutrons face constraints in accelerating them electromagnetically, as the neutrons need to be separated from tritium or deuterium nuclei. The magnetic gradient method provides an efficient and simple way to accelerate neutrons.

Keywords: neutron, acceleration, thermal energy, magnetic energy, Higgs boson

Procedia PDF Downloads 326
17542 Improving Forecasting Demand for Maintenance Spare Parts: Case Study

Authors: Abdulaziz Afandi

Abstract:

Minimizing inventory cost, optimizing inventory quantities, and increasing system operational availability are the main motivations for enhancing the forecasting of spare-parts demand in a major power utility company in Medina. This paper reports on an effort made to optimize the order quantities of spare parts by improving the method of forecasting demand. The study focuses on equipment that has frequent spare-parts purchase orders with uncertain demand. The demand follows a lumpy pattern, which makes conventional forecasting methods less effective. Various forecasting methods were benchmarked against experts' criteria to select the most suitable method for the case study. Three actual data sets were used to make the forecasts. Two neural network (NN) approaches were utilized and compared, namely long short-term memory (LSTM) and multilayer perceptron (MLP). As expected, the results showed that the NN models gave better results than the traditional (judgmental) forecasting method. In addition, the LSTM model had higher predictive accuracy than the MLP model.
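
The LSTM and MLP models need a deep-learning stack to reproduce, but the underlying problem can be illustrated with Croston's method, the classic baseline for exactly this kind of lumpy, intermittent spare-parts demand; the demand history and smoothing constant below are invented.

```python
def croston(demand, alpha=0.1):
    """Forecast expected per-period demand for an intermittent series.

    Croston's method smooths two things separately: the size of nonzero
    demands and the interval between them, then divides the two.
    """
    size = None      # smoothed nonzero-demand size
    interval = None  # smoothed interval between nonzero demands
    gap = 0
    for d in demand:
        gap += 1
        if d > 0:
            if size is None:           # initialise on the first demand
                size, interval = float(d), float(gap)
            else:
                size += alpha * (d - size)
                interval += alpha * (gap - interval)
            gap = 0
    if size is None:
        return 0.0                      # no demand ever observed
    return size / interval              # expected demand per period

# Invented monthly demand history for one spare part (mostly zeros).
history = [0, 0, 3, 0, 0, 0, 5, 0, 2, 0, 0, 4, 0, 0, 0, 6]
forecast = croston(history)
print(f"forecast demand per period: {forecast:.3f}")
```

Such a baseline is a useful sanity check when benchmarking NN forecasters: a model that cannot beat Croston's method on lumpy demand is not adding value.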

Keywords: neural network, LSTM, MLP, forecasting demand, inventory management

Procedia PDF Downloads 127
17541 Circuit Models for Conducted Susceptibility Analyses of Multiconductor Shielded Cables

Authors: Saih Mohamed, Rouijaa Hicham, Ghammaz Abdelilah

Abstract:

This paper presents circuit models to analyze the conducted susceptibility of multiconductor shielded cables in the frequency domain using Branin's method, which is referred to as the method of characteristics. These models, which can be used directly in both the time and frequency domains, take into account the presence of both the transfer impedance and the transfer admittance. The conducted susceptibility is studied using a current injected on the cable shield as the source. Two examples are studied: a coaxial shielded cable and a shielded cable with two parallel wires (i.e., a twinax cable). The shield has an asymmetry (one slot on the side). Results obtained with these models are in good agreement with those obtained by other methods.
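
The characteristics (Branin) picture for a single lossless line can be illustrated with its bounce-diagram equivalent: each wave reappears at the far end one delay later, scaled by the termination reflection coefficients. The sketch below computes the step-response load voltage; the line and termination values are invented, and the transfer impedance/admittance coupling of the shielded-cable models is not included.

```python
def load_voltage(vs, z0, rs, rl, n_transits):
    """Step-response voltage at the load after n one-way transits."""
    gamma_s = (rs - z0) / (rs + z0)   # source reflection coefficient
    gamma_l = (rl - z0) / (rl + z0)   # load reflection coefficient
    v_wave = vs * z0 / (rs + z0)      # wave launched at t = 0
    v_load = 0.0
    for k in range(n_transits):
        if k % 2 == 0:                # wave arriving at the load end
            v_load += v_wave * (1.0 + gamma_l)
            v_wave *= gamma_l
        else:                         # wave arriving back at the source
            v_wave *= gamma_s
    return v_load

# 1 V step, 50-ohm line, matched 50-ohm source, 150-ohm load: the load
# settles in a single transit to vs * rl / (rs + rl) = 0.75 V.
v = load_voltage(vs=1.0, z0=50.0, rs=50.0, rl=150.0, n_transits=6)
print(f"load voltage after 6 transits: {v:.3f} V")
```

With a mismatched source the successive bounces form a geometric series that converges to the same DC divider value, which is a handy check on any characteristics-based solver.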

Keywords: circuit models, multiconductor shielded cables, Branin’s method, coaxial shielded cable, twinax cables

Procedia PDF Downloads 516
17540 Investigation of Supercapacitor Properties of Nanocomposites Obtained from Acid and Base-functionalized Multi-walled Carbon Nanotube (MWCNT) and Polypyrrole (PPy)

Authors: Feridun Demir, Pelin Okdem

Abstract:

Polymers are versatile materials with many unique properties, such as low density, reasonable strength, flexibility, and easy processability. However, the mechanical properties of these materials are insufficient for many engineering applications, so there is a continuous search for new polymeric materials with improved properties. Polymeric nanocomposites are an advanced class of composite materials that have attracted great attention in both academic and industrial fields. Since nano-reinforcement materials are very small in size, they provide an ultra-large interfacial area per volume between the nano-element and the polymer matrix. This allows nano-reinforced composites to exhibit enhanced toughness without compromising hardness or optical clarity. PPy and MWCNT/PPy nanocomposites were synthesized by the chemical oxidative polymerization method, and the supercapacitor properties of the obtained nanocomposites were investigated. In addition, pure MWCNT was functionalized with acid (H₂SO₄/H₂O₂) and base (NH₄OH/H₂O₂) solutions at a ratio of 3:1, and a-MWCNT/d-PPy and b-MWCNT/d-PPy nanocomposites were obtained. The homogeneous distribution of MWCNTs in the polypyrrole matrix and the shell-core type morphological structures of the nanocomposites were observed in SEM images. SEM, FTIR, and XRD analyses showed that the functional groups formed by the functionalization of the MWCNTs caused the MWCNTs to come together and partially agglomerate. The conductivity of the nanocomposites consisting of MWCNT and d-PPy was found to be higher than that of pure d-PPy. CV, GCD, and EIS results show that the use of a-MWCNTs and b-MWCNTs affects the supercapacitor properties of the materials positively at low particle content but negatively at high particle content. It was revealed that the agglomerated functional MWCNT particles in nanocomposites with high particle content decrease the conductivity and the distribution of ions in the electrodes and, thus, their energy storage capacity.

Keywords: polypyrrole, multi-walled carbon nanotube (MWCNT), conducting polymer, chemical oxidative polymerization, nanocomposite, supercapacitor

Procedia PDF Downloads 21
17539 A Computational Study of the Electron Transport in HgCdTe Bulk Semiconductor

Authors: N. Dahbi, M. Daoudi

Abstract:

This paper deals with the use of a computational method based on Monte Carlo simulation to investigate the transport phenomena of electrons in the HgCdTe narrow-band-gap semiconductor. Via this method, we can evaluate the time dependence of the transport parameters: the velocity, energy, and mobility of electrons through the material (HgCdTe).
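
A heavily reduced Monte Carlo transport sketch conveys the idea: exponential free flights under a uniform field, interrupted by scattering at a constant rate that fully randomises momentum. All parameters are generic placeholders, not HgCdTe values; a real simulator samples energy-dependent phonon and alloy scattering rates and tracks the full momentum vector.

```python
import math
import random

random.seed(1)

Q = 1.602e-19             # electron charge [C]
M_EFF = 0.01 * 9.109e-31  # small effective mass, placeholder [kg]
E_FIELD = 1.0e5           # uniform electric field [V/m]
GAMMA = 1.0e13            # constant total scattering rate [1/s]

def drift_velocity(n_flights=20000):
    """Time-averaged drift velocity over many free flights."""
    a = Q * E_FIELD / M_EFF          # acceleration along the field
    vz_time_integral = 0.0
    total_time = 0.0
    for _ in range(n_flights):
        # Free-flight duration drawn from the exponential distribution.
        dt = -math.log(1.0 - random.random()) / GAMMA
        # Momentum is fully randomised at each scattering event, so each
        # flight starts from vz = 0 and averages a*dt/2 over its duration.
        vz_time_integral += 0.5 * a * dt * dt
        total_time += dt
    return vz_time_integral / total_time

v_d = drift_velocity()
print(f"Monte Carlo drift velocity ~ {v_d:.3e} m/s "
      f"(Drude value {Q * E_FIELD / M_EFF / GAMMA:.3e} m/s)")
```

For this reduced model the estimate should reproduce the Drude result qE/(m*Gamma), which makes it a convenient correctness check before adding realistic scattering mechanisms.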

Keywords: Monte Carlo, transport parameters, HgCdTe, computational mechanics

Procedia PDF Downloads 475
17538 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering

Authors: Emiel Caron

Abstract:

Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about the scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references can be used to connect patent databases with bibliographic databases, e.g. to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references: they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in a single attribute. Therefore, this work develops a computerized cleaning and disambiguation method for large patent databases. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string-similarity measures, are used to detect pairs of records that are potential duplicates. Thanks to the scoring, different rules can be combined to join scientific references, i.e. the rules reinforce each other. The scores are based on expert knowledge and an initial evaluation of the method. After scoring, pairs of scientific references above a certain threshold are clustered by a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. A performance evaluation of the clustering method, on a large golden set of highly cited papers, shows on average 99% precision and 95% recall. The method is therefore accurate but careful, i.e. it weighs precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g. when year and journal information are missing for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
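The scoring-and-clustering step described above can be sketched as follows. The specific rules, weights, and threshold here are illustrative assumptions, not the ones used by the authors; single-linkage clustering is realized as connected components over above-threshold pairs via a small union-find.

```python
from difflib import SequenceMatcher

def score_pair(a, b):
    """Score two parsed references; the rules reinforce each other.
    Weights and rules are illustrative, not the authors' actual values."""
    s = 0.0
    if a.get("year") and a.get("year") == b.get("year"):
        s += 0.3
    if a.get("journal") and a.get("journal") == b.get("journal"):
        s += 0.3
    # String-similarity measure on the title contributes the remainder.
    s += 0.4 * SequenceMatcher(None, a.get("title", ""), b.get("title", "")).ratio()
    return s

def single_linkage(refs, threshold=0.8):
    """Union-find: pairs scoring above the threshold form connected components."""
    parent = list(range(len(refs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(refs)):
        for j in range(i + 1, len(refs)):
            if score_pair(refs[i], refs[j]) >= threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(refs)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

In a production setting the all-pairs loop would be replaced by blocking (e.g. comparing only references sharing a year or journal), since the Patstat reference set is far too large for quadratic comparison.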

Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics

Procedia PDF Downloads 194
17537 Application of Discrete-Event Simulation When Optimizing Business Processes in Trading Companies

Authors: Maxat Bokambayev, Bella Tussupova, Aisha Mamyrova, Erlan Izbasarov

Abstract:

This report reviews the optimization of business processes in trading companies. It presents the “Wholesale Customer Order Handling Process” business-process model, applicable to small and medium businesses. An algorithm is proposed for automating customer order processing, which significantly reduces labor costs and time expenditure and increases company profitability. The optimized business process is an element of an information system for accounting the activity of a spare-parts trading network. The algorithm may also find wider application in the trading industry.
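As a toy illustration of the discrete-event approach, the following sketch simulates a hypothetical single-clerk order-handling queue. The event structure and the fixed service time are assumptions for illustration only, not the model from the report.

```python
import heapq

def simulate_order_handling(arrival_times, service_time):
    """Minimal discrete-event sketch of a single-server order-handling process.
    Orders queue FIFO at one clerk; returns each order's completion time."""
    # Event tuples (time, order_id, kind) are processed in time order.
    events = [(t, i, "arrive") for i, t in enumerate(arrival_times)]
    heapq.heapify(events)
    server_free_at = 0.0
    completion = {}
    while events:
        t, i, kind = heapq.heappop(events)
        if kind == "arrive":
            start = max(t, server_free_at)   # wait if the clerk is busy
            server_free_at = start + service_time
            heapq.heappush(events, (server_free_at, i, "depart"))
        else:
            completion[i] = t                # record the departure event
    return [completion[i] for i in range(len(arrival_times))]
```

A real study would replace the constant service time with sampled distributions and add further stations (picking, invoicing, shipping) to model the full order-handling chain.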

Keywords: business processes, discrete-event simulation, management, trading industry

Procedia PDF Downloads 344
17536 Two Steady States and Two Movement Patterns under the Balanced Budget Rule: An Economy with Divisible Labor

Authors: Fujio Takata

Abstract:

When governments levy taxes on labor income on the basis of a balanced budget rule, two steady states exist in an economy, one of which can produce two movement patterns, namely indeterminacy paths and a saddle path. In this paper, we assume an economy with divisible labor, in which labor adjustment is made along an intensive margin. We demonstrate that the two paths indeed exist in this economy and that a critical condition divides them. This is proved by establishing the relationship between a finite elasticity of labor with regard to real wages and the share of capital in output. Consequently, we deduce the existence of an upper limit on the share of capital in output for indeterminacy to occur. The largest possible value of that share is less than 0.5698.

Keywords: balanced budget rule, divisible labor, labor income taxation, two movement patterns

Procedia PDF Downloads 163
17535 Preparation of Chromium Nanoparticles on Carbon Substrate from Tannery Waste Solution by Chemical Method Compared to Electrokinetic Process

Authors: Mahmoud A. Rabah, Said El Sheikh

Abstract:

This work compares the preparation of chromium nanoparticles on glassy carbon from tannery waste solution by a chemical method and by an electrokinetic process. The waste solution contains free and soluble fats, calcium, iron, magnesium, and a high sodium content, in addition to the chromium ions. Filtration removes insoluble matter, and diethyl ether successfully extracts the soluble fats. The method starts by removing calcium as insoluble oxalate salts under hot, faintly acidic conditions. The filtrate contains iron, magnesium, and chromium ions, as well as excess sodium chloride. Chromium is separated selectively as an insoluble hydroxide sol-gel at pH 6.5, filtered, and washed with distilled water. Part of the gel is reacted with sulfuric acid to produce a chromium sulfate solution with a concentration of 15-25 g/L. Electrokinetic deposition of chromium nanoparticles on a carbon cathode is carried out with a platinum anode under different galvanostatic conditions. The chemical method involves impregnating the carbon specimens with chromium hydroxide gel, followed by reduction with hydrazine hydrate or thermal reduction with hydrogen gas at 1250°C. The chromium grain size is characterized by TEM, FT-IR, and SEM, and the properties of the Cr grains are correlated with the conditions of the preparation process. Electrodeposition is found to yield chromium particles more uniform in size and shape than the chemical method.

Keywords: chromium, electrodeposition, nanoparticles, tannery waste solution

Procedia PDF Downloads 409
17534 District Selection for Geotechnical Settlement Suitability Using GIS and Multi Criteria Decision Analysis: A Case Study in Denizli, Turkey

Authors: Erdal Akyol, Mutlu Alkan

Abstract:

Multi-criteria decision analysis (MDCA) draws on both data and experience and is commonly used to solve problems with many parameters and uncertainties. GIS-supported solutions improve and speed up the decision process. Weighted grading, an MDCA method, is employed here for solving geotechnical problems. In this study, the geotechnical parameters soil type, SPT (N) blow count, shear-wave velocity (Vs), and depth of the underground water level (DUWL) were combined in MDCA within GIS. The settlement suitability of the municipal area was analyzed by the method from a geotechnical standpoint. The MDCA results were compatible with geotechnical observations and experience. The method can be employed in geotechnically oriented microzonation studies if the criteria are well evaluated.
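The weighted-grading step can be sketched generically as below. The criterion grades and weights are hypothetical values for illustration; the study's actual grading scheme is not reproduced here.

```python
def weighted_grade(scores, weights):
    """Weighted-grading MDCA: combine per-criterion grades with expert weights,
    normalizing by the total weight."""
    total_w = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_w

# Hypothetical criterion grades (1 = worst, 5 = best) for one map cell,
# using the four parameters named in the abstract.
cell_scores = {"soil_type": 4, "spt_n": 3, "vs": 5, "duwl": 2}
# Hypothetical expert weights (sum to 1).
criterion_weights = {"soil_type": 0.3, "spt_n": 0.3, "vs": 0.25, "duwl": 0.15}

suitability = weighted_grade(cell_scores, criterion_weights)
```

In a GIS workflow this function would be evaluated per raster cell, producing a settlement-suitability surface that can be classified and mapped.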

Keywords: GIS, spatial analysis, multi criteria decision analysis, geotechnics

Procedia PDF Downloads 459
17533 Real-Time Measurement Approach for Tracking the ΔV10 Estimate Value of DC EAF

Authors: Jin-Lung Guan, Jyh-Cherng Gu, Chun-Wei Huang, Hsin-Hung Chang

Abstract:

This investigation develops a revisable method for estimating the equivalent 10 Hz voltage flicker (DV10) of a DC Electric Arc Furnace (EAF). The study also examines three 161 kV DC EAFs by field measurement; the results indicate that the conventionally estimated DV10 value is significantly smaller than the surveyed value, showing that the conventional means of estimating DV10 is inappropriate. The main cause is that the assumed maximum reactive power fluctuation (Qmax) is too small. Although a DC EAF is normally operated in constant-MVA mode, the reactive power variation at the Main Transformer (MT) is more significant than that at the Furnace Transformer (FT), so a substantial difference exists between the estimated maximum reactive power fluctuation (DQmax) and the value surveyed during actual DC EAF operation. The revisable method proposed in this study obtains a more accurate DV10 estimate than the conventional method.

Keywords: voltage flicker, dc EAF, estimate value, DV10

Procedia PDF Downloads 449
17532 Investigating the Form of the Generalised Equations of Motion of the N-Bob Pendulum and Computing Their Solution Using MATLAB

Authors: Divij Gupta

Abstract:

Pendular systems have a range of both mathematical and engineering applications, from modelling the behaviour of a continuous mass-density rope to use as Tuned Mass Dampers (TMD). It is therefore of interest to study the differential equations governing the motion of such systems. Here we attempt to generalise these equations of motion for the plane compound pendulum with a finite number N of point masses. A Lagrangian approach is taken, and we seek the generalised form of the Euler-Lagrange equations of motion for the i-th bob of the N-bob pendulum. The coordinates are parameterized as angular quantities, reducing the number of degrees of freedom from 2N to N and simplifying the form of the equations. We analyse the form of these equations up to N = 4 to determine the general form. We also develop a MATLAB program to compute a solution to the system for a given value of N and a given set of initial conditions.
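For readers who want to experiment numerically, a minimal sketch in Python (rather than MATLAB) is given below. It assumes the standard point-mass, massless-rod form of the planar N-bob equations of motion; this is a generic textbook formulation, not the authors' derivation.

```python
from math import sin, cos

def n_pendulum_accels(theta, omega, m, l, g=9.81):
    """Solve for angular accelerations of the planar N-bob pendulum
    (point masses m[k] on massless rods of length l[k]), assuming the
    standard form  sum_j M[i][j] * thetadd[j] = rhs[i]  with
      M[i][j] = c(i,j) * l[j] * cos(theta[i] - theta[j])
      rhs[i]  = -g*c(i,i)*sin(theta[i])
                - sum_j c(i,j)*l[j]*omega[j]**2*sin(theta[i]-theta[j])
    where c(i,j) = sum of masses m[k] for k >= max(i, j)."""
    n = len(theta)
    c = lambda i, j: sum(m[max(i, j):])
    M = [[c(i, j) * l[j] * cos(theta[i] - theta[j]) for j in range(n)]
         for i in range(n)]
    rhs = [-g * c(i, i) * sin(theta[i])
           - sum(c(i, j) * l[j] * omega[j] ** 2 * sin(theta[i] - theta[j])
                 for j in range(n))
           for i in range(n)]
    # Gaussian elimination with partial pivoting (no external dependencies).
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n):
                M[r][k] -= f * M[col][k]
            rhs[r] -= f * rhs[col]
    acc = [0.0] * n
    for i in reversed(range(n)):
        acc[i] = (rhs[i] - sum(M[i][j] * acc[j]
                               for j in range(i + 1, n))) / M[i][i]
    return acc
```

For N = 1 this reduces, as expected, to the simple pendulum relation thetadd = -(g/l) sin(theta); feeding the accelerations into any time-stepping scheme (e.g. RK4) gives a trajectory for arbitrary N.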

Keywords: classical mechanics, differential equation, lagrangian analysis, pendulum

Procedia PDF Downloads 208
17531 Investigation of Flexural-Torsion Instability of Struts Using Modified Newmark Method

Authors: Seyed Amin Vakili, Sahar Sadat Vakili, Seyed Ehsan Vakili, Nader Abdoli Yazdi

Abstract:

Differential equations are of fundamental importance in engineering and applied mathematics, since many physical laws and relations appear mathematically in the form of such equations. The equilibrium state of structures consisting of one-dimensional elements can be described by an ordinary differential equation, and the response of such structures under loading, namely the relationship between the displacement field and the loading field, can be predicted by solving these differential equations subject to the given boundary conditions. When the effect of the change of geometry under loading is taken into account in modeling the equilibrium state, these differential equations are only partially integrable in quadratures. They also exhibit instability characteristics when the structures are loaded compressively. The purpose of this paper is to demonstrate the ability of the Modified Newmark Method to analyze flexural-torsional instability of struts for both bifurcation and non-bifurcation structural systems. The results are shown to be very accurate with only a small number of iterations. The method is easily programmed and has the advantages of simplicity and speed of convergence; it is easily extended to treat material and geometric nonlinearity, including non-prismatic members and the linear and nonlinear spring restraints that would be encountered in frames. In this paper, these abilities of the method are extended to the system of linear differential equations that governs strut flexural-torsional stability.

Keywords: instability, torsion, flexural, buckling, modified newmark method, stability

Procedia PDF Downloads 359
17530 Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network

Authors: Rahma Abed, Sahbi Bahroun, Ezzeddine Zagrouba

Abstract:

Due to the huge amount of data in videos, extracting the relevant frames has become a necessary step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning, targeting a face recognition task. The method has two steps. We start by generating a face quality score for each face image, based on three face feature extractors: Gabor, LBP, and HOG. The second step consists of training a deep convolutional neural network in a supervised manner to select the frames with the best face quality. The results obtained show the effectiveness of the proposed method compared to state-of-the-art methods.
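The score-fusion and frame-selection logic of the first step can be sketched generically as below. The equal fusion weights and the top-k selection rule are illustrative assumptions; in the actual method the per-extractor scores would come from Gabor, LBP, and HOG features, and the final selection is learned by a CNN.

```python
def combine_quality(gabor_s, lbp_s, hog_s, weights=(1/3, 1/3, 1/3)):
    """Fuse per-frame quality scores from three feature extractors
    (equal weights are an illustrative assumption)."""
    return weights[0] * gabor_s + weights[1] * lbp_s + weights[2] * hog_s

def select_keyframes(frame_scores, k=2):
    """Return the indices of the k frames with the highest fused
    face-quality score, in temporal order."""
    ranked = sorted(range(len(frame_scores)),
                    key=lambda i: frame_scores[i], reverse=True)
    return sorted(ranked[:k])
```

The selected indices would then be used to pull the corresponding frames from the video as keyframe candidates for recognition.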

Keywords: keyframe extraction, face quality assessment, face in video recognition, convolution neural network

Procedia PDF Downloads 233