Search results for: optimization process
5449 The Influences of Accountants’ Potential Performance on Their Working Process: Government Savings Bank, Northeast, Thailand
Authors: Prateep Wajeetongratana
Abstract:
The purpose of this research was to study the influence of accountants’ potential performance on their working process, using a case study of Government Savings Banks in the northeast of Thailand. The independent variables included accounting knowledge, accounting skill, accounting value, accounting ethics, and accounting attitude, while the dependent variable was the success of the working process. A total of 155 accountants working for Government Savings Banks were selected by random sampling. A questionnaire was used as the tool for collecting data. Percentages and means were used as descriptive statistics, and multiple regression analysis was applied.
The findings revealed that the majority of accountants were female, aged between 35 and 40 years. Most of the respondents had an undergraduate degree and ten years of experience. Moreover, the factors of accounting knowledge, accounting skill, accounting value, accounting ethics and accounting attitude were rated at a high level. The regression analysis of the observed data revealed a causal relationship in that the observed variables could explain at least 51 percent of the success of the accountants’ working process.
Keywords: Influence, Potential Performance, Success, Working Process.
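The regression step described above can be illustrated with a minimal sketch; the ratings, coefficients and noise level below are hypothetical stand-ins for the survey data, and scikit-learn is assumed only as a convenient way to show how a proportion of explained variance of the kind reported (at least 51 percent) is obtained.

```python
# Minimal sketch (not the study's actual data): multiple regression of
# working-process success on five rated factors, reporting R-squared.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 155  # sample size reported in the abstract
# Hypothetical 1-5 Likert-style ratings for the five independent variables.
X = rng.integers(1, 6, size=(n, 5)).astype(float)
# Hypothetical dependent variable: success of the working process.
y = X @ np.array([0.3, 0.2, 0.15, 0.2, 0.1]) + rng.normal(0, 0.6, n)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)  # proportion of variance explained, cf. the ~51% figure
print(f"R^2 = {r2:.2f}", "coefficients:", np.round(model.coef_, 2))
```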
5448 Performance Optimization of Data Mining Application Using Radial Basis Function Classifier
Authors: M. Govindarajan, R. M. Chandrasekaran
Abstract:
Text data mining is a process of exploratory data analysis. Classification maps data into predefined groups or classes; it is often referred to as supervised learning because the classes are determined before examining the data. This paper describes a proposed radial basis function classifier that performs comparative cross-validation for the existing radial basis function classifier. The feasibility and the benefits of the proposed approach are demonstrated by means of a data mining problem: direct marketing. Direct marketing has become an important application field of data mining. Comparative cross-validation involves estimation of accuracy by either stratified k-fold cross-validation or equivalent repeated random subsampling. While the proposed method may have high bias, its performance (accuracy estimation in our case) may also suffer from high variance. Thus, the accuracy with the proposed radial basis function classifier was lower than with the existing radial basis function classifier. However, there is a smaller improvement in runtime and a larger improvement in precision and recall. In the proposed method, classification accuracy and prediction accuracy are determined, and the prediction accuracy is comparatively high.
Keywords: Text data mining, comparative cross-validation, radial basis function, runtime, accuracy.
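A minimal sketch of the accuracy-estimation step described above, assuming scikit-learn: stratified 10-fold cross-validation is run around an RBF-kernel classifier. The SVC with an RBF kernel is only a stand-in for the paper's radial basis function network, and the synthetic imbalanced data imitate a direct-marketing response variable.

```python
# Sketch of the accuracy-estimation step only: stratified k-fold
# cross-validation around an RBF-kernel classifier (a stand-in for the
# paper's radial basis function classifier) on synthetic imbalanced data.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)  # "direct marketing"-style response
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```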
5447 Structural Damage Detection via Incomplete Modal Data Using Output Data Only
Authors: Ahmed Noor Al-Qayyim, Barlas Ozden Caglayan
Abstract:
Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining efficient tools to detect damage in structures at an early stage. Over the past decades, a subject that has received considerable attention in the literature is damage detection based on variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location for an incomplete structural system using output data only. The method indicates the damage based on free vibration test data by using the ‘Two Points Condensation (TPC)’ technique, which creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data and are compared with the original (undamaged) stiffness matrices. Large percentage changes in the matrices’ coefficients indicate the location of the damage. The TPC technique is applied to the experimental data of a simply supported steel beam model structure after inducing a thickness change in one element; two cases are considered. The method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrices can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency shows that this technique can also be used for large structures.
Keywords: Damage detection, two points condensation, structural health monitoring, signal processing, optimization.
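A small numerical sketch of the comparison step described above: two hypothetical 2x2 condensed stiffness matrices (undamaged versus identified from measured data) are compared coefficient by coefficient, and the largest percentage change flags the damaged location. The values are illustrative, not the beam test data.

```python
# Illustrative TPC-style comparison (hypothetical 2x2 condensed matrices):
# large percentage changes in stiffness coefficients point to the damage
# location, as described in the abstract.
import numpy as np

K_undamaged = np.array([[4.0e6, -1.2e6],
                        [-1.2e6, 3.5e6]])   # baseline condensed stiffness
K_current   = np.array([[3.2e6, -1.1e6],
                        [-1.1e6, 3.4e6]])   # identified from measured data

change_pct = 100.0 * np.abs(K_current - K_undamaged) / np.abs(K_undamaged)
print(np.round(change_pct, 1))
# The coefficient with the largest percentage change flags the damage location.
flagged = np.unravel_index(np.argmax(change_pct), change_pct.shape)
print("largest change at coefficient", flagged)
```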
5446 Finite Element Analysis of the Blanking and Stamping Processes of Nuclear Fuel Spacer Grids
Authors: R. O. Santos, L. P. Moreira, M. C. Cardoso
Abstract:
The spacer grid assembly supporting the nuclear fuel rods is an important concern in the design of structural components of a Pressurized Water Reactor (PWR). The spacer grid is composed of springs and dimples, which are formed from a strip sheet by means of blanking and stamping processes. In this paper, the blanking process and tooling parameters are analysed by means of a 2D plane-strain finite element model in order to evaluate the punch load and the quality of the sheared edges of Inconel 718 strips used for nuclear spacer grids. A 3D finite element model is also proposed to predict the tooling loads resulting from the stamping process of a preformed Inconel 718 strip and to analyse the residual stress effects upon the spring and dimple design geometries of a nuclear spacer grid.
Keywords: Blanking process, damage model, finite element modelling, Inconel 718, spacer grids, stamping process.
5445 Modelling and Control of Milk Fermentation Process in Biochemical Reactor
Authors: Jožef Ritonja
Abstract:
The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is the fermentation process. A thorough insight into the fermentation process and knowledge of how to control it are essential for the effective use of bioreactors to produce products of high quality and in sufficient quantity. The development of the control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and is suitable for controller design. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for controlling the fermentation process in batch bioreactors. Due to this, a mathematical model was developed and presented that allows the development of a control system for batch bioreactors. Based on the developed mathematical model, a control system was designed to ensure optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and non-linear nature of the controlled plant, a conventional control system with a proportional-integral-derivative controller with constant parameters does not provide the desired transient response. An adaptive control system was therefore proposed to improve the dynamics of the fermentation. The use of adaptive control is suggested because the parameter variations of the fermentation process are very slow. The developed control system was tested in the production of dairy products in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable, because it correlates well with the other quantities significant for the quality of the fermentation process and gives important information about its progress. The obtained results showed that the designed control system provides minimal error between the reference and actual values of the carbon dioxide concentration during the transient response and in steady state. The recommended control system makes reference signal tracking much more efficient than the currently used conventional control systems, which are based on linear control theory. The proposed control system represents a very effective solution for the improvement of the milk fermentation process.
Keywords: Bioprocess engineering, biochemical reactor, fermentation process, modeling, adaptive control.
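As a point of reference for the control discussion above, the sketch below runs a plain constant-gain PID loop on a hypothetical first-order plant tracking a normalised CO2-concentration setpoint; the paper's bioreactor model and adaptive tuning are not reproduced, and all numerical values are assumed.

```python
# Minimal discrete PID loop tracking a CO2-concentration setpoint on a
# hypothetical first-order plant; gains and plant constants are illustrative.
dt, T_end = 1.0, 600.0           # s
Kp, Ki, Kd = 2.0, 0.05, 1.0      # illustrative constant PID gains
tau, gain = 120.0, 0.8           # hypothetical first-order plant

y, integ, e_prev = 0.0, 0.0, 0.0
setpoint = 1.0                   # normalised CO2 concentration reference
for _ in range(int(T_end / dt)):
    e = setpoint - y
    integ += e * dt
    deriv = (e - e_prev) / dt
    u = Kp * e + Ki * integ + Kd * deriv   # control signal
    e_prev = e
    y += dt * (-y + gain * u) / tau        # first-order plant response
print(f"output after {T_end:.0f} s: {y:.3f} (reference {setpoint})")
```

An adaptive scheme, as proposed in the abstract, would slowly update Kp, Ki and Kd as the plant parameters drift instead of keeping them constant.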
5444 Discovering Complex Regularities: from Tree to Semi-Lattice Classifications
Authors: A. Faro, D. Giordano, F. Maiorana
Abstract:
Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network to perform unsupervised clustering and to support the entire data mining process up to results visualization. A graphical representation helps the user to find a strategy to optimize the classification by adding, moving or deleting a neuron in order to change the number of classes. The tool is able to automatically suggest a strategy to optimize the number of classes, and it supports both tree classifications and semi-lattice organizations of the classes, giving users the possibility of passing from one class to the ones with which it has some aspects in common. Examples of using tree and semi-lattice classifications are given to illustrate advantages and problems. The tool is applied to classify macroeconomic data that report the most developed countries’ imports and exports. It is possible to classify the countries based on their economic behaviour and to use the tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation. Possible interrelationships between the classes and their meaning are also discussed.
Keywords: Unsupervised classification, Kohonen networks, macroeconomics, visual data mining, cluster interpretation.
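A bare-bones sketch of the Kohonen-style clustering core the tool builds on, assuming NumPy and synthetic data; the interactive add/move/delete of neurons and the semi-lattice organisation described above are not shown.

```python
# Minimal 1-D Kohonen map update loop (numpy only) on synthetic data,
# e.g. import/export indicators per country; 6 neurons = 6 classes.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 4))          # hypothetical macroeconomic indicators
n_neurons, lr, sigma = 6, 0.5, 1.5
weights = rng.normal(size=(n_neurons, 4))

for epoch in range(50):
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best matching unit
        d = np.abs(np.arange(n_neurons) - bmu)                 # grid distance
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))               # neighbourhood
        weights += lr * h[:, None] * (x - weights)
    lr *= 0.95                              # decay learning rate
    sigma = max(0.5, sigma * 0.95)          # shrink neighbourhood

labels = np.argmin(np.linalg.norm(data[:, None] - weights, axis=2), axis=1)
print(np.bincount(labels, minlength=n_neurons))   # class sizes
```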
5443 Project Management at University: Towards an Evaluation Process around Cooperative Learning
Authors: J. L. Andrade-Pineda, J. M. León-Blanco, M. Calle, P. L. González-R
Abstract:
Enrollment in current Master's degree programs usually pursues the expertise required in real-life workplaces. The experience we present here concerns the learning process of "Project Management Methodology (PMM)", built around a cooperative/collaborative mechanism aimed at affording students measurable learning goals and providing the teacher with the ability to focus on the weaknesses detected. We have designed a mixed summative/formative evaluation, which assures engagement with the curriculum while enriching the comprehension of PMM key concepts. In this experience we converted the students into active actors in the evaluation process itself and endowed ourselves as teachers with a flexible process in which, along with qualifications (scores), other attitudinal feedback arises. Despite the high level of self-affirmation in their discussions within the interactive assessment sessions, the students ultimately exhibited a great ability to review and correct wrong reasoning when that was the case.
Keywords: Cooperative-collaborative learning, educational management, formative-summative assessment, leadership training.
5442 Improvement of Photoluminescence Uniformity of Porous Silicon by using Stirring Anodization Process
Authors: Jia-Chuan Lin, Meng-Kai Hsu, Hsi-Ting Hou, Sin-Hong Liu
Abstract:
The electrolyte stirring method applied to the anodization etching process for manufacturing porous silicon (PS) is reported in this work. Two experimental setups, natural air stirring (PS-ASM) and electrolyte stirring (PS-ESM), are employed to clarify the influence of stirring mechanisms on the electrochemical etching process. Compared to traditional fabrication without any stirring apparatus (PS-TM), a large plateau region of the PS surface structure is obtained from samples prepared with both stirring methods, as shown by 3D-profiler measurement. Moreover, the light emission response is also improved by both proposed stirring methods, because the cycling force in the electrolyte effectively enhances the etch-carrier distribution during the electrochemical etching process. According to the statistical analysis of photoluminescence (PL) intensity, lower standard deviations are obtained from PS samples prepared with the studied stirring methods, i.e., the uniformity of the PL intensity is effectively improved. The calculated deviations of PL intensity are 93.2, 74.5 and 64, respectively, for PS-TM, PS-ASM and PS-ESM.
Keywords: Porous silicon, photoluminescence, uniformity, carrier stirring method.
5441 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase
Authors: A. Lauvray, F. Poulhaon, P. Michaud, P. Joyot, E. Duc
Abstract:
Additive Friction Stir Manufacturing, or AFSM, is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed or friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. There is still a lack of understanding of the physical phenomena taking place during the process. This research aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way of studying the influence of the process parameters and of finally identifying a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order, these are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which produces the temperature rise of the system through pure friction. An analytical model of frictional heat generation considers the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises or to the smoothing of the contacting surfaces' roughness over time. Through numerical modeling followed by experimental validation, this study questions the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool as well as fluctuations of the input parameters, such as axial force and rotational speed, are very influential on the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
Keywords: numerical model, additive manufacturing, frictional heat generation, process
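A back-of-the-envelope sketch of the kind of analytic heat-generation estimate described for the dwell phase, assuming pure sliding friction under uniform contact pressure on a flat circular tool face, i.e. Q = (2/3)·pi·mu·p·omega·R^3; the tool geometry, force and speed values are illustrative, not the prototype's.

```python
# Frictional heat input during the dwell phase under assumed pure sliding
# and uniform contact pressure on a flat circular contact of radius R:
#   Q = integral over the contact of mu * p * omega * r dA = (2/3)*pi*mu*p*omega*R^3
import math

mu = 0.3                            # friction coefficient (assumed constant here)
F = 5000.0                          # axial force, N (illustrative)
R = 0.008                           # tool contact radius, m (illustrative)
p = F / (math.pi * R ** 2)          # mean contact pressure, Pa
rpm = 1200.0
omega = 2 * math.pi * rpm / 60.0    # rotational speed, rad/s

Q = (2.0 / 3.0) * math.pi * mu * p * omega * R ** 3   # heat generation, W
print(f"estimated frictional heat generation: {Q:.0f} W")
```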
5440 Discussion about Frequent Adjustment of Urban Master Planning in China: A Case Study of Changshou District, Chongqing City
Authors: Sun Ailu, Zhao Wanmin
Abstract:
Since the reform and opening up, the urbanization process of China has entered a period of rapid development. In recent years, the authors participated in several urban master planning projects in China and found that rapidly urbanizing areas of China are experiencing frequent adjustments of their urban master plans. This phenomenon is not a natural part of urbanization development; it may be caused by the different roles played by governments at different levels. Through investigation, data comparison and case study, this paper aims to explore why rapidly urbanizing areas experience frequent adjustment of master planning and to give some solution strategies. Firstly, taking Changshou district of Chongqing city as an example, the paper introduces the phenomenon of frequent adjustment in China. It then discusses the distinct roles of the national, provincial and local governments of China in this process. Finally, it puts forward preliminary solution strategies for such areas in China from the aspects of land use, intergovernmental cooperation and so on.
Keywords: Urban master planning, frequent adjustment, urbanization development, problems and strategies, China.
5439 Modeling Oxygen-transfer by Multiple Plunging Jets using Support Vector Machines and Gaussian Process Regression Techniques
Authors: Surinder Deswal
Abstract:
The paper investigates the potential of support vector machines and Gaussian process based regression approaches to model the oxygen-transfer capacity from experimental data of multiple plunging jet oxygenation systems. The results suggest the utility of both modeling techniques in the prediction of the overall volumetric oxygen transfer coefficient (KLa) from operational parameters of a multiple plunging jets oxygenation system. Correlation coefficient, root mean square error and coefficient of determination values of 0.971, 0.002 and 0.945, respectively, were achieved by the support vector machine, in comparison to values of 0.960, 0.002 and 0.920, respectively, achieved by Gaussian process regression. Further, the performances of both regression approaches in predicting the overall volumetric oxygen transfer coefficient were compared with the empirical relationship for multiple plunging jets. The comparison suggests that the support vector machines approach works well relative to both the empirical relationship and the Gaussian process approach, and could successfully be employed in modeling oxygen transfer.
Keywords: Oxygen transfer, multiple plunging jets, support vector machines, Gaussian process.
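A short sketch of the model comparison described above, assuming scikit-learn and synthetic data in place of the plunging-jet measurements: support vector regression and Gaussian process regression are fitted to the same inputs and scored with the same three metrics (correlation coefficient, RMSE, coefficient of determination).

```python
# SVR vs. Gaussian process regression on synthetic "operational parameter"
# data, reporting CC, RMSE and R^2 as in the abstract (values illustrative).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))          # scaled operational parameters
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2] + rng.normal(0, 0.02, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("SVR", SVR(C=10.0, epsilon=0.01)),
                    ("GPR", GaussianProcessRegressor())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    cc = np.corrcoef(y_te, pred)[0, 1]
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: CC={cc:.3f}  RMSE={rmse:.4f}  R2={r2_score(y_te, pred):.3f}")
```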
5438 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions by applying a wavelet transform is an approach towards reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since earthquake strong ground motion is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out the non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each approximation induces more error. In this paper, the strong ground motion has been filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analyses of sample shear and moment frames are implemented. The error associated with each wavelet is computed by comparing the dynamic responses of the sample structures with the exact responses, which are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.
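A minimal sketch of the filtration step described above, assuming the PyWavelets package and a synthetic accelerogram: one level of discrete wavelet decomposition is applied and the detail (high-frequency) coefficients are discarded before reconstruction.

```python
# One filtration pass of a synthetic strong-ground-motion record with a
# discrete wavelet transform: keep the approximation, drop the details.
import numpy as np
import pywt

dt = 0.02
t = np.arange(0, 20, dt)
accel = (np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 12.0 * t)
         ) * np.exp(-0.1 * t)                      # synthetic accelerogram

approx, detail = pywt.dwt(accel, "db4")            # one decomposition level
filtered = pywt.idwt(approx, np.zeros_like(detail), "db4")[: len(accel)]

# The analysis error would then be judged by comparing structural responses
# to the original and to the filtered record.
print(len(accel), len(filtered), float(np.max(np.abs(accel - filtered))))
```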
5437 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation
Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu
Abstract:
This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to monitor the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller shows satisfactory performance in controlling and tracking the load power effectively and smoothly as compared to the PSO-PID control technique. This study will benefit the design of supervisory controllers for control applications in nuclear engineering research.
Keywords: machine learning, neural network, pressurized water reactor, supervisory controller
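The four controller performance indices named above can be computed directly from a sampled tracking-error signal; the sketch below uses an illustrative decaying error signal rather than the PWR simulation output.

```python
# IAE, ISE, ITAE and ITSE computed from a sampled error signal e(t) between
# reference and measured core power (illustrative signal, rectangle rule).
import numpy as np

dt = 0.1
t = np.arange(0, 100, dt)
e = 0.05 * np.exp(-0.1 * t) * np.cos(0.5 * t)   # hypothetical tracking error

iae  = np.sum(np.abs(e)) * dt             # integral absolute error
ise  = np.sum(e ** 2) * dt                # integral square error
itae = np.sum(t * np.abs(e)) * dt         # integral time absolute error
itse = np.sum(t * e ** 2) * dt            # integral time square error
print(f"IAE={iae:.4f} ISE={ise:.5f} ITAE={itae:.4f} ITSE={itse:.5f}")
```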
5436 Taking People, Process and Partnership on Board for Participatory Decision Making
Authors: B. Mikulskienė
Abstract:
Public administration institutions, in cooperation with politicians, are no longer the sole policy decision makers in the full sense. Meanwhile, a special role, namely steering the decision making process, could be delegated to them. Despite the wide scientific discussion of the different aspects that have a direct impact on policy creation, there is a lack of holistic practical managerial advice that could integrate the infrastructure of policy decision making with intellectual capital and with the interconnections of partnership. The proposed harmonized decision making model of process, people and partnership, denoted by the acronym HM-3P, is analyzed as a framework for implementing the steering role of public administration in seeking coherent social involvement in policy decision making.
Keywords: Participatory decision making, partnership, stakeholders.
5435 Efficiency of Membrane Distillation to Produce Fresh Water
Authors: Sabri Mrayed, David Maccioni, Greg Leslie
Abstract:
Seawater desalination has been accepted as one of the most effective solutions to the growing problem of a diminishing clean drinking water supply. Currently, two desalination technologies dominate the market: thermally driven multi-stage flash distillation (MSF) and membrane-based reverse osmosis (RO). However, in recent years membrane distillation (MD) has emerged as a potential alternative to the established means of desalination. This research project intended to determine the viability of MD as an alternative to MSF and RO for seawater desalination. Specifically, the project involved a thermodynamic analysis of the process based on the second law of thermodynamics to determine the efficiency of MD. Data were obtained from experiments carried out on a laboratory rig. To determine the exergy values required for the exergy analysis, two separate models were built in Engineering Equation Solver: the 'Minimum Separation Work Model' and the 'Stream Exergy Model'. The efficiency of the MD process was found to be 17.3%, and the energy consumption was determined to be 4.5 kWh per cubic meter of fresh water produced. The results indicate that MD has potential as a technique for seawater desalination compared to RO and MSF. However, it was shown that this was only the case if an alternative energy source, such as green or waste energy, was available to provide the thermal energy input to the process. If the process was required to power itself, it was shown to be highly inefficient and in no way thermodynamically viable as a commercial desalination process.
Keywords: Desalination, Exergy, Membrane distillation, Second law efficiency.
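For orientation, the second-law efficiency reported above relates the least (reversible) work of separation to the actual energy input. Assuming the 17.3% efficiency and the 4.5 kWh/m^3 consumption refer to the same basis, the implied minimum separation work follows from a one-line calculation.

```python
# Second-law (exergy) efficiency relation: eta_II = w_min / w_actual.
# Using the two figures reported in the abstract (assumed to share a basis).
eta_II = 0.173            # reported second-law efficiency
w_actual = 4.5            # reported energy use, kWh per m^3 of fresh water
w_min = eta_II * w_actual # implied minimum (reversible) separation work
print(f"implied minimum separation work ~ {w_min:.2f} kWh/m^3")
```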
5434 Mean-Variance Optimization of Portfolios with Return of Premium Clauses in a DC Pension Plan with Multiple Contributors under Constant Elasticity of Variance Model
Authors: Bright O. Osu, Edikan E. Akpanibah, Chidinma Olunkwa
Abstract:
In this paper, mean-variance optimization of portfolios with return of premium clauses in a defined contribution (DC) pension plan with multiple contributors under the constant elasticity of variance (CEV) model is studied. The return clauses, which permit the accumulated wealth of deceased members to be claimed, are considered; unlike in the existing literature, the remaining wealth is not distributed equally among the remaining members. We assume that, before investment, the surplus, which includes the funds of members who died after retirement, is added to the total wealth. Next, we consider investments in a risk-free asset and a risky asset to meet the expected returns of the remaining members, and obtain an optimization problem with the help of the extended Hamilton-Jacobi-Bellman equation. We obtain the optimal investment strategies for the two assets and the efficient frontier of the members by using a stochastic optimal control technique. Furthermore, we study the effect of the various parameters on the optimal investment strategies and the effect of the risk-aversion level on the efficient frontier. We observe that the optimal investment strategy is the same as in the literature, and that the surplus decreases the proportion of the wealth invested in the risky asset.
Keywords: DC pension fund, Hamilton Jacobi Bellman equation, optimal investment strategies, stochastic optimal control technique, return of premiums clauses, mean-variance utility.
5433 Determination of the Economic Planning Depth for Assembly Process Planning
Authors: A. Kampker, P. Burggräf, Y. Bäumers
Abstract:
In order to be competitive, companies have to reduce their production costs while meeting increasing quality requirements. Therefore, companies try to plan their assembly processes in as much detail as possible. However, increasing product individualization, leading to a higher number of variants, smaller batch sizes and shorter product life cycles, raises the question of to what extent the effort of detailed planning is still justified. An important approach in this field of research is the concept of determining the economic planning depth for assembly process planning based on production-specific influencing factors. In this paper, first solution hypotheses as well as a first draft of the resulting method are presented.
Keywords: Assembly process planning, economic planning depth, planning benefit, planning effort.
5432 Realization of Design Features for Linear Flow Splitting in NX 6
Authors: Anselm L. Schüle, Thomas Rollmann, Reiner Anderl
Abstract:
Within the Collaborative Research Center 666, a new product development approach and the innovative manufacturing method of linear flow splitting are being developed. So far, the design process is supported by 3D CAD models utilizing User Defined Features in standard CAD systems. This paper presents new functions for generating 3D models of integral sheet metal products with bifurcations using Siemens PLM NX 6. The emphasis is placed on the design and semi-automated insertion of User Defined Features. Therefore, User Defined Features for both linear flow splitting and its derivative, linear bend splitting, were developed. In order to facilitate the modeling process, an application was developed that guides the user through the insertion process; its usability and dialog layout follow known standard features. The work presented here has significant implications for the quality, accuracy and efficiency of the product generation process for sheet metal products with higher-order bifurcations.
Keywords: Linear flow splitting, CRC 666, User Defined Features.
5431 A Study on the Impacts of Computer Aided Design on the Architectural Design Process
Authors: Halleh Nejadriahi, Kamyar Arab
Abstract:
Computer-aided design (CAD) tools have been extensively used by architects for several decades. CAD has evolved from a simple drafting tool into intelligent architectural software and a powerful means of communication for architects. CAD plays an essential role in the profession of architecture and is a basic tool for any architectural firm. Due to the high demand and competition in the architectural industry, it is not possible for an architectural firm to compete without taking advantage of computer software. The aim of this study is to evaluate the impacts of CAD on the architectural design process from the conceptual level to the final product, particularly in architectural practice. It examines the range of benefits of integrating CAD into the industry and discusses the possible drawbacks limiting architects. The method of this study is qualitative, based on data collected from professionals’ perspectives. The identified benefits and limitations of CAD in the architectural design process will raise professionals’ awareness of the potential of CAD and its proper utilization in the industry, which would result in higher productivity along with better quality in architectural offices.
Keywords: Architecture, architectural practice, computer aided design, CAD, design process.
5430 Design Process of the Fixing Pipes in the Guide Pipe Anchor System for Cable-Stayed Bridges
Authors: Jinwoong Choi, Sun-Kyu Park, Sungnam Hong
Abstract:
For the efficient and safe use of cable-stayed bridges, a design based on a detailed local analysis of the cable anchor system is required. Also, a theoretical design process for the anchor system should be prepared and reviewed. Generally, the size of the fixing pipe in the anchor system is decided according to the specifications prepared by cable-manufacturing companies, and, accordingly, it is difficult to determine the initial inner diameters of the fixing pipes; there is no choice but to use products of the existing sizes. In this study, the existing design process of the fixing pipe, which is part of the guide pipe anchor in the cable anchor system, is reviewed, a formula for determining the thickness of the fixing pipe is proposed, and the suggested equation is compared with the results of existing designs to verify its convenience and validity.
Keywords: Cable-stayed bridge, guide pipe anchor system, fixing pipe, theoretical design process.
5429 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model based on training data, and we derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process of extracting appropriate spatial-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conducted experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.
Keywords: Human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, Variational Bayesian inference, Prior distribution and approximate posterior distribution, KTH dataset.
5428 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods
Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar
Abstract:
Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by the 16S rRNA gene sequencing technique as Halomonas sp. ES015. Medium optimization was carried out using the Plackett-Burman design, and the most significant factors were yeast extract, casamino acid and inoculum size. The optimized medium obtained by the statistical design raised the removal efficiency from 84% to 99% for an initial lead concentration of 250 ppm. Moreover, the Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration and inoculum size. The optimized medium increased the removal efficiency to 97% for an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes, using the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused over six successive cycles, and the metal removal efficiency was not affected by changes in flow rate. Finally, the results of this research point to the possibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015, in batch cultures as well as in semi-continuous cultures using column technology.
Keywords: Bioremediation, lead, Box–Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman.
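A sketch of the screening logic behind the Plackett-Burman step described above: main effects of coded (+1/-1) factors are estimated by least squares. A small full two-level factorial stands in here for the actual Plackett-Burman matrix, and the responses and effect sizes are hypothetical.

```python
# Two-level factor screening by least squares on coded (+1/-1) factors,
# standing in for the Plackett-Burman analysis; data are hypothetical.
import itertools
import numpy as np

factors = ["yeast extract", "casamino acid", "inoculum size"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), float)

rng = np.random.default_rng(1)
# Hypothetical lead-removal responses (%) for the eight runs.
removal = (84 + 5 * design[:, 0] + 4 * design[:, 1] + 3 * design[:, 2]
           + rng.normal(0, 1, len(design)))

X = np.column_stack([np.ones(len(design)), design])   # intercept + coded factors
coef, *_ = np.linalg.lstsq(X, removal, rcond=None)
for name, effect in zip(factors, 2 * coef[1:]):        # effect = 2 * coefficient
    print(f"{name}: estimated effect {effect:+.1f} % removal")
```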
5427 C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network
Authors: K. Kalaikumar, E. Baburaj
Abstract:
One of the problems to be addressed in wireless sensor networks is the set of issues related to cross-layer communication. A cross-layer architecture shares information across layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to the changing sensor network environment. However, time slot assignment and the timing of neighbour route selection have not been addressed for the cross layer. The time-varying physical layer communication over the cross layer causes a high traffic load in the sensor network. Although the traffic load can be reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time slot based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial step is to discover the route in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub-layers; this step considers MAC layer operation with dynamic discovery of the route neighbour table. The discovered route path for packet communication then employs the Broad Route Distributed Time Slot Assignment method on the cross-layered sensor network system, where Broad Route means time slotting over route paths of varying lengths. During packet communication in this sensor network, the transmission of packets is adjusted over different times and varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to assess the performance of the sensor network communication structure. The main task of the Rayleigh fading model is to measure the power level of each communication under the MAC sub-layer; the minimized power level helps to reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as the power level of packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.
Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment
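A minimal sketch of the Rayleigh-fading power evaluation mentioned above, assuming NumPy: channel amplitudes are drawn from a Rayleigh distribution and converted to received power per link; the transmit power and link count are illustrative.

```python
# Rayleigh small-scale fading: draw channel amplitudes |h| and compute the
# resulting received power per transmission (path loss omitted for brevity).
import numpy as np

rng = np.random.default_rng(0)
tx_power_dbm = 0.0                       # hypothetical sensor transmit power
n_links = 1000
sigma = 1.0 / np.sqrt(2.0)               # gives unit mean-square channel gain
gain = rng.rayleigh(scale=sigma, size=n_links)          # |h| per link
rx_power_dbm = tx_power_dbm + 20 * np.log10(gain)       # received power, dBm

print(f"mean received power: {rx_power_dbm.mean():.2f} dBm, "
      f"10th percentile: {np.percentile(rx_power_dbm, 10):.2f} dBm")
```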
5426 Innovation, e-Learning and Higher Education: An Example of a University-LMS Adoption Process
Authors: Ana Mafalda Gonçalves, Neuza Pedro
Abstract:
The evolution of ICT has changed all sectors of society, and these changes have had an irreversible impact on higher education institutions, which are expected to adopt innovative technologies in their teaching practices. As its theoretical framework, this study selects Rogers' theory of innovation diffusion, which is widely used to illustrate how technologies move from a localized invention to widespread adoption in organizational practices. Based on descriptive statistical data collected in a European higher education institution, a three-year longitudinal study was conducted to analyze and discuss the different stages of an LMS adoption process. Results show that ICT integration in higher education is not a progressively successful, linear process, and that multiple aspects must be taken into account.
Keywords: e-learning, higher education, LMS, innovation, technologies
5425 LOD Exploitation and Fast Silhouette Detection for Shadow Volumes
Authors: Mustafa S. Fawad, Wang Wencheng, Wu Enhua
Abstract:
Shadows add a great amount of realism to a scene, and many algorithms exist to generate them. Recently, shadow volumes (SVs) have achieved a valuable position in the gaming industry. With this in mind, we concentrate on simple but valuable initial steps for further optimization of SV generation, i.e., model simplification and silhouette edge detection and tracking. Shadow volume generation usually spends time computing the boundary silhouettes of the object, and if the object is complex, the generation of edges becomes much harder and slower. The challenge becomes stiffer when real-time shadow generation and rendering are demanded. We investigated a way to use a real-time silhouette edge detection method, which takes advantage of spatial and temporal coherence, and to exploit the level-of-detail (LOD) technique for reducing the silhouette edges of the model, so that a simplified version of the model is used for shadow generation, speeding up the running time. These steps greatly reduce the execution time of shadow volume generation in real time and are easily adaptable to any of the recently proposed SV techniques. Our main focus is to exploit the LOD and silhouette edge detection techniques, adapting them to further enhance shadow volume generation for real-time rendering.
Keywords: LOD, perception, shadow volumes, silhouette edge, spatial and temporal coherence.
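The core geometric test behind silhouette edge detection, as discussed above, can be sketched in a few lines: an edge lies on the silhouette when exactly one of its two adjacent faces is front-facing with respect to the light. The toy mesh data below are hypothetical.

```python
# Silhouette edge test: compare the facing direction of the two faces
# adjacent to each edge with respect to a directional light.
import numpy as np

light_dir = np.array([0.0, 0.0, 1.0])            # directional light

# Each edge stores the normals of its two adjacent faces (toy example).
edge_face_normals = np.array([
    [[0, 0, 1],  [0, 0, -1]],    # front/back pair -> silhouette edge
    [[0, 0, 1],  [0, 1, 1]],     # both front-facing -> not a silhouette edge
    [[0, 0, -1], [1, 0, -1]],    # both back-facing  -> not a silhouette edge
], dtype=float)

facing = np.einsum("efk,k->ef", edge_face_normals, light_dir) > 0.0
is_silhouette = facing[:, 0] != facing[:, 1]     # exactly one face front-facing
print(is_silhouette)                             # [ True False False]
```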
5424 Predictions of Values in a Causticizing Process
Authors: R. Andreola, O. A. A. Santos, L. M. M, Jorge
Abstract:
An industrial system for the production of white liquor at a paper company, Klabin Paraná Papéis, formed by ten reactors, was modeled, simulated, and analyzed. The developed model considered possible water losses by evaporation and reaction, in addition to variations in the volumetric flow of lime mud across the reactors due to composition variations. The model predictions agreed well with the process measurements at the plant, and the results showed that the slaking reaction is nearly complete at the third causticizing reactor, while causticizing ends by the seventh reactor. Water loss due to reaction and evaporation occurs more markedly in the slaking stage than in the final causticizing reactors; nevertheless, the lime mud flow remains nearly constant across the reactors.
Keywords: Causticizing, lime, prediction, process.
5423 Yield Prediction Using Support Vectors Based Under-Sampling in Semiconductor Process
Authors: Sae-Rom Pak, Seung Hwan Park, Jeong Ho Cho, Daewoong An, Cheong-Sool Park, Jun Seok Kim, Jun-Geol Baek
Abstract:
It is important to predict yield in the semiconductor test process in order to increase it. In this study, yield prediction means finding defective dies, wafers or lots effectively. The semiconductor test process consists of several test steps, and each test includes various test items; in other words, test data are large and complicated. They are also disproportionately distributed, as the number of data belonging to the FAIL class is extremely low. For yield prediction, general data mining techniques are limited without data preprocessing, due to the inherent properties of test data. Therefore, this study proposes an under-sampling method using a support vector machine (SVM) to eliminate the imbalance. To evaluate its performance, a random under-sampling method is compared with the proposed method using actual semiconductor test data. As a result, the SVM-based sampling method is effective in generating a robust model for yield prediction.
Keywords: Yield Prediction, Semiconductor Test Process, Support Vector Machine, Under Sampling
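One plausible reading of the support-vector-based under-sampling idea above, sketched with scikit-learn on synthetic imbalanced data: all FAIL (minority) samples are kept, while only those PASS (majority) samples that become support vectors of a first SVM are retained before retraining. The paper's exact selection rule may differ.

```python
# Support-vector-guided under-sampling sketch: keep minority samples plus the
# majority samples that an initial SVM selects as support vectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, weights=[0.97, 0.03], random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
sv_idx = svm.support_                                  # indices of support vectors
keep = np.union1d(sv_idx[y[sv_idx] == 0], np.where(y == 1)[0])

X_bal, y_bal = X[keep], y[keep]
print("majority kept:", int((y_bal == 0).sum()), "of", int((y == 0).sum()),
      "| minority kept:", int((y_bal == 1).sum()))
model = SVC(kernel="rbf", gamma="scale").fit(X_bal, y_bal)  # yield-prediction model
```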
5422 State Estimation of a Biotechnological Process Using Extended Kalman Filter and Particle Filter
Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte, V. Grincas
Abstract:
This paper deals with advanced state estimation algorithms for the estimation of biomass concentration and specific growth rate in a typical fed-batch biotechnological process. The biotechnological process is represented by a nonlinear, mass-balance based process model. An Extended Kalman Filter (EKF) and a Particle Filter (PF) were used to estimate the unmeasured state variables from oxygen uptake rate (OUR) and base consumption (BC) measurements. To obtain more general results, a simplified process model was used in the EKF and PF estimation algorithms. This model does not require any special growth kinetic equations and could be applied for state estimation in various bioprocesses. The focus of this investigation was the comparison of the estimation quality of the EKF and PF estimators under different measurement noise levels. The simulation results show that the Particle Filter algorithm requires significantly more computation time for state estimation but gives lower estimation errors for both biomass concentration and specific growth rate. The tuning procedure for the Particle Filter is also simpler than for the EKF. Consequently, the Particle Filter should be preferred in real applications, especially for the monitoring of industrial bioprocesses, where simplified implementation procedures are always desirable.
Keywords: Biomass concentration, Extended Kalman Filter, Particle Filter, State estimation, Specific growth rate.
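A compact bootstrap particle filter sketch in the spirit of the comparison above: the state is (biomass, specific growth rate) and the measurement stands in for an OUR-type signal proportional to their product. The growth model, noise levels and particle count are all assumed, not taken from the paper.

```python
# Bootstrap particle filter on a toy fed-batch model with state (X, mu) and
# an OUR-like measurement y ~ mu * X (all parameters illustrative).
import numpy as np

rng = np.random.default_rng(0)
dt, steps, n_p = 0.1, 100, 500
meas_std = 0.05
X_true, mu_true = 0.5, 0.20                 # "true" biomass and growth rate

particles = np.column_stack([rng.normal(0.5, 0.1, n_p),     # biomass guesses
                             rng.normal(0.2, 0.05, n_p)])   # growth-rate guesses

for _ in range(steps):
    # Simulated process and noisy OUR-like measurement.
    X_true += mu_true * X_true * dt
    y = mu_true * X_true + rng.normal(0.0, meas_std)

    # Propagate: random walk on mu, exponential growth on biomass.
    particles[:, 1] += rng.normal(0.0, 0.005, n_p)
    particles[:, 0] += particles[:, 1] * particles[:, 0] * dt

    # Weight by measurement likelihood and resample (tiny guard on weights).
    w = np.exp(-0.5 * ((y - particles[:, 1] * particles[:, 0]) / meas_std) ** 2)
    w = (w + 1e-12) / (w + 1e-12).sum()
    particles = particles[rng.choice(n_p, size=n_p, p=w)]

est_X, est_mu = particles.mean(axis=0)
print(f"true: X={X_true:.2f}, mu={mu_true:.2f} | PF: X={est_X:.2f}, mu={est_mu:.2f}")
```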
5421 Developing Online Bookstore to Facilitate Manual Process – UTP Case Study
Authors: Emelia Akashah P.A, Sharifah Nadiah S.A
Abstract:
Knowledge sharing enables information or knowledge to be transmitted from one source to another. This paper demonstrates the need for an online book catalogue that can be used to facilitate the dissemination of information on the textbooks used in the university. The project aims to give students and lecturers access to the list of books in the bookstore and, at the same time, to allow book reviewing without having to visit the bookstore physically. The research is carried out within boundaries that cover the current process of new book purchasing, the current system used by the bookstore and the current process lecturers go through to review textbooks. A questionnaire is used to gather the requirements; it is distributed to 100 students and 40 lecturers. The project has enabled a manual process to be carried out automatically through a web-based platform. The user acceptance survey shows that the target groups found this web service feasible to implement in Universiti Teknologi PETRONAS (UTP) and that they have shown positive interest in utilizing it in the future.
Keywords: Bookstore, knowledge sharing, online book catalogue, textbook.
5420 Service Flow in Multilayer Networks: A Method for Evaluating the Layout of Urban Medical Resources
Authors: Guanglin Song
Abstract:
Situated within the context of China's tiered medical treatment system, this study aims to analyze the spatial causes of urban healthcare access difficulties from the perspective of the configuration of healthcare facilities. A social network analysis approach is employed to construct a healthcare demand and supply flow network between major residential clusters and various tiers of hospitals in the city. The findings reveal that: 1) There exists overall maldistribution and over-concentration of healthcare resources in the study area, characterized by structural imbalance. 2) The low rate of primary care utilization in the study area is a key factor contributing to congestion at higher-tier hospitals, as excessive reliance on these institutions by neighboring communities exacerbates the problem. 3) Gradual optimization of the healthcare facility layout in the study area, encompassing the holistic, local, and individual institutional levels, can enhance systemic efficiency and resource balance. This research proposes a method for evaluating urban healthcare resource distribution structures based on service flows within hierarchical networks. It offers spatially targeted optimization suggestions for promoting the implementation of the tiered healthcare system and alleviating challenges related to accessibility and congestion in seeking medical care. In addition, the study provides some new ideas for researchers and healthcare managers in countries and cities around the world facing similar challenges.
Keywords: Flow of public services, healthcare facilities, spatial planning, urban networks.
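A small sketch of the demand-supply flow network construction described above, assuming NetworkX: residential clusters and hospitals are nodes of a directed weighted graph, and weighted in-degree serves as a crude indicator of how concentrated the inbound demand is. Node names and flow values are illustrative only.

```python
# Directed, weighted demand-flow graph from residential clusters to hospital
# tiers; weighted in-degree highlights over-concentrated facilities.
import networkx as nx

flows = [  # (residential cluster, hospital, estimated demand flow)
    ("cluster_A", "tertiary_1", 800), ("cluster_A", "primary_1", 120),
    ("cluster_B", "tertiary_1", 950), ("cluster_B", "secondary_1", 300),
    ("cluster_C", "tertiary_1", 700), ("cluster_C", "primary_2", 90),
]

G = nx.DiGraph()
G.add_weighted_edges_from(flows)

load = dict(G.in_degree(weight="weight"))      # total inbound demand per node
for hospital, demand in sorted(load.items(), key=lambda kv: -kv[1]):
    if demand:                                  # skip cluster nodes (no inflow)
        print(f"{hospital}: {demand}")
# A single tertiary hospital absorbing most of the flow signals the kind of
# over-concentration the study describes.
```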