Search results for: characteristic method
19324 Determinants of Professional Competencies among Newly Registered Nurses in a Teaching Hospital in the Kingdom of Saudi Arabia
Authors: Rana Alkattan
Abstract:
Aim: This study aims to identify and analyze the factors predicting professional clinical competency among newly recruited registered nurses, and to explore the factors significantly correlated with high and low professional clinical competency scores. Method: A descriptive, analytical, cross-sectional design was applied, conducted between June 2012 and June 2013 at King Abdulaziz University Hospital, one of the largest governmental university tertiary hospitals in Saudi Arabia. A survey questionnaire was designed to collect data, which were then analyzed using SPSS. Results: A total of 86 nurses provided valid responses; 69 were female and 17 were male. The majority of participants were married, from the Philippines, and between 20-29 years old. Most held a certified university bachelor's degree in nursing and had prior nursing experience of 1 to 5 years. Two categories emerged from the data that significantly correlated with nurses' professional competence and development. The first was the newly employed registered nurses' demographic characteristics (correlation coefficients 0.154 to 0.470, P < 0.05); the second was the list of studied environmental factors except the 'job rotation' factor (correlation coefficients 0.122 to 0.540, P < 0.01). However, nurses' attitudes, including motivation and confidence, were not associated with professional competency. Conclusion: Nurses' professional competence development is a process affected by certain personal demographic and environmental factors, which enables newly graduated nurses to provide safe, effective patient care and maintain their career responsibilities.
Keywords: clinical, competence, development nurses professional, registered
Procedia PDF Downloads 355
19323 Bubble Point Pressures of CO2+Ethyl Palmitate by a Cubic Equation of State and the Wong-Sandler Mixing Rule
Authors: M. A. Sedghamiz, S. Raeissi
Abstract:
This study presents three different approaches to estimate bubble point pressures for the binary system of CO2 and ethyl palmitate fatty acid ethyl ester. The first method involves the Peng-Robinson (PR) equation of state (EoS) with the conventional van der Waals mixing rule. The second approach combines the PR EoS with the Wong-Sandler (WS) mixing rule, coupled with the UNIQUAC GE model; to model the bubble point pressures with this approach, the volume and area parameters for ethyl palmitate were estimated by the Hansen group contribution method. The last method also combines the PR EoS with the Wong-Sandler mixing rule, but uses NRTL as the GE model. Results using the van der Waals mixing rule clearly indicated that this method has the largest errors among the three, in the range of 3.96-6.22%. The PR-WS-UNIQUAC method exhibited small errors, with average absolute deviations between 0.95 and 1.97%. The PR-WS-NRTL method led to the least errors, with average absolute deviations between 0.65 and 1.7%.
Keywords: bubble pressure, Gibbs excess energy model, mixing rule, CO2 solubility, ethyl palmitate
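As a rough illustration of the first ingredient of such calculations, the sketch below computes the PR pure-component parameters and the conventional van der Waals one-fluid mixing rule; the full bubble-point calculation would add a fugacity-equality iteration on pressure. The ethyl palmitate critical constants are placeholders, not the study's correlated values.

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_pure_params(Tc, Pc, omega, T):
    """Peng-Robinson pure-component a(T) and b."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def vdw_mixing(a, b, x, k12=0.0):
    """Conventional van der Waals one-fluid mixing rule for a binary."""
    a12 = np.sqrt(a[0] * a[1]) * (1.0 - k12)
    a_mix = x[0]**2 * a[0] + 2 * x[0] * x[1] * a12 + x[1]**2 * a[1]
    b_mix = x[0] * b[0] + x[1] * b[1]
    return a_mix, b_mix

# CO2 critical constants are real; the ethyl palmitate values below are
# placeholders -- substitute group-contribution estimates in practice.
T = 313.15  # K
a1, b1 = pr_pure_params(Tc=304.13, Pc=7.377e6, omega=0.224, T=T)  # CO2
a2, b2 = pr_pure_params(Tc=773.0, Pc=1.1e6, omega=1.0, T=T)       # placeholder
a_mix, b_mix = vdw_mixing([a1, a2], [b1, b2], x=[0.8, 0.2])
print(a_mix, b_mix)
```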
Procedia PDF Downloads 474
19322 BART Matching Method: Using Bayesian Additive Regression Tree for Data Matching
Authors: Gianna Zou
Abstract:
Propensity score matching (PSM), introduced by Paul R. Rosenbaum and Donald Rubin in 1983, is a popular statistical matching technique that tries to estimate treatment effects by taking into account covariates that could impact the efficacy of a study medication in clinical trials. PSM can be used to reduce the bias due to confounding variables. However, PSM assumes that the response values are normally distributed, and in some cases this assumption may not hold. In this paper, a machine learning method, the Bayesian Additive Regression Tree (BART), is used as a more robust method of matching. BART can work well when models are misspecified, since it can model heterogeneous treatment effects, and it can handle non-linear main effects and multiway interactions. In this research, a BART Matching Method (BMM) is proposed to provide a more reliable matching method than PSM. Comparison of the analysis results from PSM and BMM shows that BMM performs well and has better prediction capability when the response values are not normally distributed.
Keywords: BART, Bayesian, matching, regression
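The abstract does not spell out the BMM algorithm, so the sketch below shows the general outcome-model matching idea it builds on: fit a flexible tree ensemble to the controls, score every unit, and match treated units to controls on the predicted response. A gradient-boosted tree stands in for BART (a dedicated BART package would replace it); the data and the matching rule are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic observational data: X covariates, t treatment flag, y response
n = 500
X = rng.normal(size=(n, 3))
t = (X[:, 0] + rng.normal(size=n) > 0).astype(int)
y = X[:, 0] + 2.0 * t + rng.standard_t(df=3, size=n)  # non-normal noise

# Fit a tree ensemble on the controls and score all units; matching is
# nearest-neighbor on the predicted (counterfactual) response.
model = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
score = model.predict(X).reshape(-1, 1)

nn = NearestNeighbors(n_neighbors=1).fit(score[t == 0])
_, idx = nn.kneighbors(score[t == 1])
controls = np.flatnonzero(t == 0)[idx.ravel()]

# Matched-pair estimate of the average treatment effect on the treated
att = np.mean(y[t == 1] - y[controls])
print(f"ATT estimate: {att:.2f} (true effect 2.0)")
```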
Procedia PDF Downloads 147
19321 Finite Element Method for Solving the Generalized RLW Equation
Authors: Abdel-Maksoud Abdel-Kader Soliman
Abstract:
The generalized regularized long wave (GRLW) equation is solved numerically by a new algorithm based on a collocation method using quartic B-splines at the mid-knot points as element shape functions. The resulting system of first-order ordinary differential equations is solved with the fourth-order Runge-Kutta method instead of a finite difference method. Test problems, including the migration and interaction of solitary waves, are used to validate the algorithm, which is found to be accurate and efficient. The three invariants of the motion are evaluated to determine the conservation properties of the algorithm.
Keywords: generalized RLW equation, solitons, quartic B-spline, nonlinear partial differential equations, difference equations
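As a minimal sketch of the time-stepping half of such schemes, the code below implements one classical fourth-order Runge-Kutta step and drives it with a toy semi-discrete advection system; the GRLW collocation system has the same du/dt = f(u) structure, with f built from the B-spline matrices (the toy right-hand side is an assumption for demonstration only).

```python
import numpy as np

def rk4_step(f, u, t, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy stand-in for the semi-discrete system: advection u_t = -u_x on a
# periodic grid with central differences.
N, L = 128, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
dx = L / N
f = lambda t, u: -(np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

u = np.exp(-10 * (x - np.pi) ** 2)
dt, t = 0.01, 0.0
for _ in range(100):
    u = rk4_step(f, u, t, dt)
    t += dt
print("mass invariant:", u.sum() * dx)  # conserved-quantity check
```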
Procedia PDF Downloads 489
19320 An Implementation of Meshless Method for Modeling an Elastoplasticity Coupled to Damage
Authors: Sendi Zohra, Belhadjsalah Hedi, Labergere Carl, Saanouni Khemais
Abstract:
The modeling of mechanical problems including both material and geometric nonlinearities with the Finite Element Method (FEM) remains challenging. Meshless methods offer special properties to get rid of well-known drawbacks of the FEM. The main objective of meshless methods is to eliminate the difficulty of meshing and remeshing the entire structure, by simply inserting or deleting nodes, and to alleviate other problems associated with the FEM, such as element distortion and locking. In this study, a robust numerical implementation of the Element Free Galerkin Method for an elastoplasticity coupled to damage problem is presented. Several results issued from numerical simulations with a dynamic explicit resolution scheme are analyzed and critically compared with Finite Element Method results. Finally, different numerical examples are carried out to demonstrate the efficiency of the method.
Keywords: damage, dynamic explicit, elastoplasticity, isotropic hardening, meshless
Procedia PDF Downloads 294
19319 Performance of the Strong Stability Method in the Univariate Classical Risk Model
Authors: Safia Hocine, Zina Benouaret, Djamil A¨ıssani
Abstract:
In this paper, we study the performance of the strong stability method in the univariate classical risk model. We are interested in the stability bounds established using two approaches. The first is based on the strong stability method developed for general Markov chains; the second is based on the theory of regenerative processes. By adopting an algorithmic procedure, we study the performance of the stability method in the case of exponentially distributed claim amounts. After presenting the stability bounds numerically and graphically, the results are interpreted and compared.
Keywords: Markov chain, regenerative process, risk model, ruin probability, strong stability
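For context, the sketch below simulates the classical risk model with exponential claims and estimates the ruin probability by Monte Carlo, checking it against the known closed form for the infinite horizon; parameters are illustrative, and this is a baseline rather than the paper's strong stability bounds.

```python
import numpy as np

def ruin_probability(u, c, lam, mu, horizon=200.0, n_paths=5000, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability in the
    classical risk model R(t) = u + c*t - S(t), with Poisson(lam) claim
    arrivals and exponential claim amounts of mean mu."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)  # inter-arrival time
            if t > horizon:
                break
            claims += rng.exponential(mu)    # claim amount
            if u + c * t - claims < 0:       # reserve just after the claim
                ruined += 1
                break
    return ruined / n_paths

# Exponential claims admit the closed form
# psi(u) = exp(-theta*u / (mu*(1+theta))) / (1+theta) for infinite horizon.
u, lam, mu, theta = 10.0, 1.0, 1.0, 0.2
c = (1 + theta) * lam * mu  # premium rate with safety loading theta
est = ruin_probability(u, c, lam, mu)
exact = np.exp(-theta * u / (mu * (1 + theta))) / (1 + theta)
print(f"simulated: {est:.4f}, analytic (infinite horizon): {exact:.4f}")
```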
Procedia PDF Downloads 324
19318 Spatial Analysis of Flood Vulnerability in Highly Urbanized Area: A Case Study in Taipei City
Authors: Liang Weichien
Abstract:
Without adequate information and mitigation plans for natural disasters, the risk to urban populated areas will increase in the future as populations grow, especially in Taiwan. Taiwan is recognized as one of the world's high-risk areas, with an average of 5.7 floods per year, and should seek to strengthen coherence and consensus in how cities can plan for floods and climate change. This study therefore aims at understanding the vulnerability to flooding in Taipei City, Taiwan, by creating indicators and calculating the vulnerability of each study unit. The indicators were grouped into sensitivity and adaptive capacity, based on the definition of vulnerability of the Intergovernmental Panel on Climate Change, and were weighted using Principal Component Analysis. However, current research is based on the assumption that the composition and influence of the indicators are the same in different areas, which disregards spatial correlation and may result in inaccurate explanations of local vulnerability. This study used Geographically Weighted Principal Component Analysis, adding a geographic weighting matrix as weighting, to obtain the dominant flood impact characteristics in different areas. The cross-validation method and the Akaike Information Criterion were used to decide the bandwidth, with a Gaussian pattern as the bandwidth weighting scheme. The ultimate outcome can be used to reduce damage potential by integrating the outputs into local mitigation plans and urban planning.
Keywords: flood vulnerability, geographically weighted principal components analysis, GWPCA, highly urbanized area, spatial correlation
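As a simple illustration of the weighting step, the sketch below derives indicator weights from a global PCA on synthetic standardized indicators; GWPCA repeats this eigendecomposition at each location with a geographic weighting matrix. All data and the use of first-component loadings as weights are assumptions for demonstration.

```python
import numpy as np

# Indicator matrix: rows = study units, columns = standardized indicators
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Global PCA via the covariance eigendecomposition
cov = np.cov(Xs, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

explained = eigval / eigval.sum()
weights = np.abs(eigvec[:, 0])          # loadings of the first component
weights /= weights.sum()                # normalized indicator weights
vulnerability = Xs @ weights            # composite score per study unit
print("variance explained by PC1:", round(explained[0], 3))
```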
Procedia PDF Downloads 286
19317 The Application of the Security Audit Method on the Selected Objects of Critical Infrastructure
Authors: Michaela Vašková
Abstract:
The paper focuses on the application of the security audit method to selected objects of critical infrastructure. The emphasis is put on the security audit method as a means of finding gaps in critical infrastructure security. The theoretical part describes objects of critical infrastructure; the practical part describes the use of the security audit method. The main emphasis is put on the protection of critical infrastructure in the Czech Republic.
Keywords: crisis management, critical infrastructure, object of critical infrastructure, security audit, extraordinary event
Procedia PDF Downloads 431
19316 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
The quality of a DEM may depend on several factors, such as the data source, the capture method, the processing used to derive it, and the cell size. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through punctual sampling focused on the vertical component, for which standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data exist. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM, so that its sampling is superficial rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks leading to the point cloud used as reference (PCref) to evaluate the PCpro quality. Each PCref consists of a 50 x 50 m patch resulting from the registration of 4 different scan stations. The area studied was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the position of the scanner was inverted so that the characteristic shadow circle, present when the scanner is in the direct position, does not appear. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so a filter was applied to eliminate vegetation and auxiliary elements such as poles and tripods. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
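As a minimal sketch of the cloud-to-cloud comparison, the code below computes nearest-neighbor distances from each product point to the reference cloud with a KD-tree; the synthetic clouds and the 15 cm noise level are assumptions standing in for a PCpro/PCref pair.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic stand-in point clouds (x, y, z) for a 50 x 50 m patch.
rng = np.random.default_rng(0)
pc_ref = rng.uniform(0, 50, size=(50000, 3))
pc_pro = pc_ref + rng.normal(scale=0.15, size=pc_ref.shape)  # ~15 cm noise

# Cloud-to-cloud comparison: for each product point, the distance to its
# nearest reference neighbor.
tree = cKDTree(pc_ref)
dist, _ = tree.query(pc_pro, k=1)

print(f"mean C2C distance: {dist.mean():.3f} m")
print(f"RMS: {np.sqrt((dist**2).mean()):.3f} m, "
      f"95th pct: {np.percentile(dist, 95):.3f} m")
```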
Procedia PDF Downloads 100
19315 Stability of Composite Struts Using the Modified Newmark Method
Authors: Seyed Amin Vakili, Sahar Sadat Vakili, Seyed Ehsan Vakili, Nader Abdoli Yazdi
Abstract:
The aim of this paper is to examine the elastic stability behavior of reinforced and composite concrete struts under axial loads. The objective is to verify the ability of the Modified Newmark Method to include geometric non-linearity in addition to the non-linearity due to cracking, and to show the advantage of the established method in reconsidering a minor parameter usually ignored in mathematical modeling, namely the effect of cracking, through the extra geometric bending moment Ny, on the cross-section properties. The purpose of this investigation is not to present new results for the instability of reinforced or composite concrete columns; rather, it is part of the verification of the newly established method for solving two kinds of non-linearity, the P-δ effect and cracking, simultaneously. The Modified Newmark Method can also be used to solve material non-linearity and the time-dependent behavior of concrete; however, since this is out of the scope of this article, it is not considered.
Keywords: stability, buckling, modified Newmark method, reinforced
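As a plain baseline (explicitly not the authors' Modified Newmark scheme), the sketch below recovers the elastic buckling load of a pinned-pinned strut from the eigenvalues of a finite-difference discretization of EI*y'' = -P*y; the section properties are assumed values.

```python
import numpy as np

# Elastic buckling of a pinned-pinned strut: eigenvalues of -D2 give P/EI.
E, I, L = 30e9, 8e-5, 4.0   # assumed values: Pa, m^4, m
n = 200                     # interior nodes
h = L / (n + 1)

D2 = (np.diag(-2.0 * np.ones(n)) +
      np.diag(np.ones(n - 1), 1) +
      np.diag(np.ones(n - 1), -1)) / h**2

lam = np.linalg.eigvalsh(-D2)        # smallest eigenvalue ~ (pi/L)^2
P_cr = E * I * lam.min()
print(f"FD estimate: {P_cr/1e3:.1f} kN, "
      f"Euler: {np.pi**2*E*I/L**2/1e3:.1f} kN")
```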
Procedia PDF Downloads 332
19314 Implementation of Integer Sub-Decomposition Method on Elliptic Curves with J-Invariant 1728
Authors: Siti Noor Farwina Anwar, Hailiza Kamarulhaili
Abstract:
In this paper, we present the idea of implementing the Integer Sub-Decomposition (ISD) method on elliptic curves with j-invariant 1728. The ISD method was proposed in 2013 to compute scalar multiplication on elliptic curves, which remains the most expensive operation in Elliptic Curve Cryptography (ECC). However, the original ISD method only works over the integers and solves integer scalar multiplication. By extending the method to the complex quadratic field, we are able to solve complex multiplication and implement the ISD method on elliptic curves with j-invariant 1728. A curve with j-invariant 1728 has a unique discriminant of the imaginary quadratic field, which yields a unique, efficiently computable endomorphism that can later speed up computations on the curve. However, the ISD method needs three endomorphisms to be accomplished; hence, we choose all three endomorphisms from the same imaginary quadratic field as the curve itself, where the first is the unique endomorphism yielded by the discriminant of the imaginary quadratic field.
Keywords: efficiently computable endomorphism, elliptic scalar multiplication, j-invariant 1728, quadratic field
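For reference, curves with j-invariant 1728 are exactly those of the form y² = x³ + ax (b = 0). The sketch below shows plain double-and-add scalar multiplication, the operation ISD accelerates, on such a curve over a small toy prime field; the curve, point and modulus are illustrative, not cryptographic parameters, and ISD's decomposition itself is not implemented.

```python
# Scalar multiplication on y^2 = x^3 + a*x over GF(p); b = 0 gives
# j-invariant 1728. Toy values only.
p, a = 97, 2
O = None  # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Double-and-add: O(log k) group operations."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

P = (1, 10)  # on the curve: 10^2 = 100 = 3 = 1 + 2*1 (mod 97)
print(scalar_mult(5, P))
```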
Procedia PDF Downloads 199
19313 Comparative Study of Expository and Simulation Method of Teaching Woodwork at Federal University of Technology, Minna, Nigeria
Authors: Robert Ogbanje Okwori
Abstract:
The research studied the expository and simulation methods of teaching woodwork at the Federal University of Technology, Minna, Niger State, Nigeria. The purpose of the study was to compare the expository and simulation methods of teaching woodwork and determine which method is more effective in improving the performance of students. Two research questions and two hypotheses were formulated to guide the study. Fifteen objective questions and two theory questions, set on the structure of timber, were used for data collection. The study used a quasi-experimental design. The population consisted of 25 woodwork students of the Federal University of Technology, Minna; 300-level students were used for the study. The lesson plans for the expository method and the questions were validated by three lecturers in the Department of Industrial and Technology Education, who checked the appropriateness of the test items; all corrections and inputs were effected before administration of the instrument. Data obtained were analyzed using mean, standard deviation and the t-test. The null hypotheses were tested using t-test statistics at the 0.05 level of significance. The findings showed that the simulation method improved students' performance in woodwork, and that performance was not influenced by gender. Based on the findings, it was concluded that there was a significant difference in the mean achievement scores of students taught woodwork using the simulation method, implying that the simulation method is more effective than the expository method. Therefore, woodwork teachers should adopt the simulation method towards better performance, and teachers need to be trained and re-trained in using it. Teachers should be encouraged to use the simulation method for instructional delivery because it allows them to identify their areas of strength and weakness when imparting knowledge. Government and other agencies should assist in procuring materials and equipment for wood workshops to enable students to practice effectively what they have been taught using the simulation method.
Keywords: comparative, expository, simulation, woodwork
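For readers unfamiliar with the analysis step, the sketch below runs an independent-samples t-test at the 0.05 level on two groups of scores; the data are synthetic placeholders, not the study's results.

```python
import numpy as np
from scipy import stats

# Illustrative comparison of two teaching groups' scores (synthetic data).
rng = np.random.default_rng(0)
expository = rng.normal(55, 10, size=12)
simulation = rng.normal(65, 10, size=13)

t_stat, p_value = stats.ttest_ind(simulation, expository)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis at the 0.05 level.")
```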
Procedia PDF Downloads 425
19312 Intensity-Enhanced Super-Resolution Amplitude Apodization Effect on the Non-Spherical Near-Field Particle-Lenses
Authors: Liyang Yue, Bing Yan, James N. Monks, Rakesh Dhama, Zengbo Wang, Oleg V. Minin, Igor V. Minin
Abstract:
A particle can function as a refractive lens to focus a plane wave, generating a narrow, intense, weakly diverging beam within a sub-wavelength volume, known as the 'photonic jet'. The refractive index contrast (particle to background medium) and the scaling effect of the dielectric particle (its size relative to the wavelength) play the key roles in photonic jet formation, rather than the shape of the particle-lens. The waist (full width at half maximum, FWHM) of a photonic jet can be beyond the diffraction limit and smaller than the Airy disk, which defines the minimum distance at which two objects can be imaged as two instead of one. Many important applications for imaging and sensing have been developed based upon the super-resolution characteristic of the photonic jet. It is known that the apodization method, in the form of an amplitude pupil mask centrally situated on a particle-lens, can further reduce the waist of a photonic nanojet, but usually lowers its intensity at the focus due to blocking of the incident light. In this paper, an anomalously intensity-enhanced apodization effect was discovered in the near field via numerical simulation. It was also experimentally verified by a scale model using a copper-masked Teflon cuboid solid immersion lens (SIL) with 22 mm side length under radiation of a plane wave with 8 mm wavelength. The peak intensity enhancement and the lateral resolution of the produced photonic jet increased by about 36.0% and 36.4% in this approach, respectively. This phenomenon may possess a scale effect and would be valid in multiple frequency bands.
Keywords: apodization, particle-lens, scattering, near-field optics
Procedia PDF Downloads 191
19311 Applications of Probabilistic Interpolation via Orthogonal Matrices
Authors: Dariusz Jacek Jakóbczak
Abstract:
Mathematics and computer science are interested in methods of 2D curve interpolation and extrapolation using a set of key points (knots). The proposed method of Hurwitz-Radon Matrices (MHR) is such a method. This novel method is based on the family of Hurwitz-Radon (HR) matrices, which possess columns composed of orthogonal vectors. The two-dimensional curve is interpolated via different functions used as probability distribution functions: polynomial, sine, cosine, tangent, cotangent, logarithm, exponent, arcsin, arccos, arctan, arcctg or power function, as well as their inverses. It is shown how to build the orthogonal matrix operator and how to use it in the process of curve reconstruction.
Keywords: 2D data interpolation, Hurwitz-Radon matrices, MHR method, probabilistic modeling, curve extrapolation
Procedia PDF Downloads 525
19310 Track Initiation Method Based on Multi-Algorithm Fusion Learning of 1DCNN and Bi-LSTM
Abstract:
Aiming at the problem of high-density clutter and interference affecting radar target track initiation in ECM and complex radar missions, to which traditional track initiation methods have been difficult to adapt, we propose a multi-algorithm fusion learning track initiation algorithm. It transforms the track initiation problem into a true-false track discrimination problem and designs an algorithm based on a 1DCNN (one-dimensional CNN) combined with a Bi-LSTM (bidirectional long short-term memory network) for fusion classification. The experimental dataset consists of real trajectories obtained from measurements of a certain type of three-coordinate radar, and the experiments are compared with traditional track initiation methods such as the rule-based, logic-based and Hough-transform-based methods. The simulation results show that the overall performance of the multi-algorithm fusion learning track initiation algorithm is significantly better than that of the traditional methods, and the true track initiation rate can be effectively improved under high clutter density, with an average initiation time similar to that of the logic method.
Keywords: track initiation, multi-algorithm fusion, 1DCNN, Bi-LSTM
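As a hedged sketch of what such a fusion classifier could look like in Keras, the code below stacks Conv1D feature extraction on a bidirectional LSTM with a sigmoid true/false-track output; the input shape (scans per candidate track, features per plot) and all layer sizes are illustrative assumptions, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(steps=8, features=4):
    """1DCNN + Bi-LSTM binary classifier for candidate tracks."""
    inp = layers.Input(shape=(steps, features))
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inp)
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.Bidirectional(layers.LSTM(32))(x)    # fuse temporal context
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # true vs false track
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```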
Procedia PDF Downloads 94
19309 Vibroacoustic Modulation with Chirp Signal
Authors: Dong Liu
Abstract:
By sending a high-frequency probe wave and a low-frequency pump wave into a specimen, the vibroacoustic modulation (VAM) method evaluates a defect's severity according to the modulation index of the received signal. Many studies have experimentally proven the significant sensitivity of the modulation index to tiny contact-type defects. However, it has also been found that the modulation index is highly affected by the frequency of the probe or pump waves. Therefore, the chirp signal has been introduced to the VAM method, since it can assess multiple frequencies in a relatively short time, enhancing the robustness of the method. Consequently, the signal processing method needs to be modified accordingly. Various studies have utilized different algorithms, or combinations of algorithms, for processing the VAM signal under chirp excitation. These signal processing methods were compared and used to process VAM signals acquired from steel samples.
Keywords: vibroacoustic modulation, nonlinear acoustic modulation, nonlinear acoustic NDT&E, signal processing, structural health monitoring
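As a minimal sketch of the modulation-index computation (fixed-frequency case, before the chirp extension), the code below measures the first sideband amplitudes around the probe line in the spectrum; the frequencies, AM depth and the sideband-ratio definition of the index are assumptions.

```python
import numpy as np

fs, T = 100_000, 1.0                 # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f_probe, f_pump = 20_000, 200        # Hz (illustrative)

# Amplitude-modulated received signal + noise (a damaged specimen couples
# the pump into the probe, producing sidebands at f_probe +/- f_pump).
x = (1 + 0.05 * np.sin(2 * np.pi * f_pump * t)) * np.sin(2 * np.pi * f_probe * t)
x += 0.01 * np.random.default_rng(0).normal(size=t.size)

spec = np.abs(np.fft.rfft(x)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = lambda f: spec[np.argmin(np.abs(freqs - f))]

# Modulation index as sideband-to-carrier amplitude ratio
mi = (amp(f_probe - f_pump) + amp(f_probe + f_pump)) / (2 * amp(f_probe))
print(f"modulation index: {mi:.3f}")   # ~0.025 for 5% AM depth
```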
Procedia PDF Downloads 99
19308 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints
Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno
Abstract:
Around the world, many countries have committed to the decarbonization of their electricity systems. Under this global drive, converter-interfaced generators (CIG), such as wind and photovoltaic generation, appear as cornerstones to achieve these energy targets. Despite its benefits, the increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are the limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Along with the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is the battery energy storage system (BESS). Because of the flexibility that BESS provides in power system operation, its integration allows for mitigating the variability and uncertainty of renewable energies, thus optimizing the use of existing assets and reducing operational costs. BESS can also support power system stability by injecting reactive power during faults, providing short-circuit currents, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of BESS in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints to allocate BESS, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration. The methodology is validated on the reduced Chilean electrical system. The results show that integrating BESS with stability criteria into a power system with high levels of CIG contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper potentially lays the foundation for understanding the benefits of integrating BESS in electrical power systems and coordinating their placement in future converter-dominated power systems.
Keywords: battery energy storage, power system stability, system strength, weak power system
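To make the "allocation with stability constraints" idea concrete, the sketch below solves a toy linear program: minimize BESS cost across candidate buses subject to a minimum short-circuit-level contribution per weak area. The costs, sensitivities and requirements are invented illustrations, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([1.0, 1.2, 0.9])         # $/MW-equivalent per candidate bus
# sens[i, j]: short-circuit-level contribution of bus j's BESS to weak area i
sens = np.array([[0.8, 0.1, 0.2],
                 [0.1, 0.9, 0.3]])
scl_required = np.array([50.0, 40.0])     # MVA shortfall per weak area

# linprog uses A_ub @ x <= b_ub, so flip signs for >= constraints.
res = linprog(c=cost, A_ub=-sens, b_ub=-scl_required,
              bounds=[(0, 100)] * 3, method="highs")
print("BESS MW per bus:", np.round(res.x, 1))
```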
Procedia PDF Downloads 61
19307 Simulation of Uniaxial Ratcheting Behaviors of SA508-3 Steel at Elevated Temperature
Authors: Jun Tian, Yu Yang, Liping Zhang, Qianhua Kan
Abstract:
Experimental results show that SA508-3 steel exhibits a temperature-dependent cyclic softening characteristic and obvious ratcheting behaviors, and dynamic strain aging was observed in the temperature range of 200 ºC to 350 ºC. Based on these observations, a temperature-dependent cyclic plastic constitutive model was proposed by introducing nonlinear cyclic softening and kinematic hardening rules; dynamic strain aging was also considered in the constitutive model. Comparisons between experiments and simulations were carried out to validate the proposed model at elevated temperature.
Keywords: constitutive model, elevated temperature, ratcheting, SA 508-3
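As a minimal 1D sketch of the kind of kinematic hardening rule such cyclic models build on, the code below implements a strain-driven return mapping with Armstrong-Frederick back-stress evolution; all material parameters are assumed values, not the calibrated SA508-3 constants, and the temperature dependence and softening of the paper's model are omitted.

```python
import numpy as np

E, sig_y = 200e3, 300.0      # MPa: Young's modulus, initial yield stress
C, gamma = 50e3, 300.0       # AF hardening modulus and dynamic recovery

sigma, alpha = 0.0, 0.0      # stress and back-stress
history = []
eps_path = np.concatenate([np.linspace(0, 0.01, 500),
                           np.linspace(0.01, -0.01, 1000),
                           np.linspace(-0.01, 0.01, 1000)])
eps_old = 0.0
for eps in eps_path:
    d_eps = eps - eps_old
    eps_old = eps
    trial = sigma + E * d_eps            # elastic predictor
    f = abs(trial - alpha) - sig_y
    if f <= 0.0:
        sigma = trial                    # elastic step
    else:
        n = np.sign(trial - alpha)
        dp = f / (E + C - gamma * alpha * n)   # consistency condition
        sigma = trial - E * dp * n
        alpha += C * dp * n - gamma * alpha * dp
    history.append((eps, sigma))
print("peak stress on 3rd branch: %.1f MPa"
      % max(s for _, s in history[1500:]))
```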
Procedia PDF Downloads 302
19306 Analysis and Control of Camera Type Weft Straightener
Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae
Abstract:
In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. It is important to shape the fabric before the heat treatment, because it is difficult to revert once the fabric is formed. To produce a product of the right shape, camera-type weft straighteners have recently been applied to capture and process fabric images quickly; they are more powerful in determining the final textile quality than photo-sensors. Positioned in front of a stenter machine, the weft straightener helps to spread the fabric evenly and keep the angle between warp and weft constant at a right angle by handling the skew and bow rollers. To handle this tricky procedure, a structural analysis should be carried out in advance, based on which the control technology can be derived. A structural analysis serves to figure out the specific contact/slippage characteristics between fabric and roller. We have already examined the applicability of the camera-type weft straightener to plain weave fabric and found its possibility and the specific working conditions of the machine and rollers. In this research, we aimed to explore whether the camera-type weft straightener can also be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis is done by ANSYS software using the Finite Element Analysis method, and the control function is demonstrated by experiment. In conclusion, the structural analysis of the weft straightener is done to identify the specific characteristics between roller and fabrics, the control of the skew and bow rollers is done to decrease the error of the angle between warp and weft, and it is proved that the camera-type straightener can also be used for special fabrics.
Keywords: camera type weft straightener, structure analysis, control, skew and bow roller
Procedia PDF Downloads 292
19305 Cooperative Coevolution for Neuro-Evolution of Feed Forward Networks for Time Series Prediction Using Hidden Neuron Connections
Authors: Ravneil Nand
Abstract:
Cooperative coevolution uses problem decomposition methods to solve a larger problem. Problem decomposition deals with breaking the larger problem down into a number of smaller sub-problems, and different decomposition methods have their own strengths and limitations depending on the neural network used and the application problem. In this paper we introduce a new problem decomposition method known as Hidden-Neuron Level decomposition (HNL), and compare it with established problem decomposition methods in time series prediction. The results show that the proposed approach improves the results on some benchmark data sets when compared to the standalone method, and has competitive results when compared to methods from the literature.
Keywords: cooperative coevolution, feed forward network, problem decomposition, neuron, synapse
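A hedged sketch of the decomposition idea: for a single-hidden-layer network, each sub-population holds the weights attached to one hidden neuron (its incoming weights, bias and outgoing weight), and a candidate is evaluated by splicing it into the best known values of the other groups. The grouping and sizes below are illustrative; the paper's exact HNL grouping may differ.

```python
import numpy as np

n_in, n_hidden, n_out = 4, 6, 1

def decompose(n_in, n_hidden, n_out):
    """Return index groups over a flat weight vector, one per hidden neuron."""
    per_neuron = n_in + 1 + n_out          # in-weights + bias + out-weight
    total = n_hidden * per_neuron
    groups = [np.arange(k * per_neuron, (k + 1) * per_neuron)
              for k in range(n_hidden)]
    return groups, total

groups, total = decompose(n_in, n_hidden, n_out)
print(f"{len(groups)} sub-populations, {total} weights in total")

# Cooperative fitness evaluation context: splice one group's candidate into
# the current best complete weight vector before measuring prediction error.
best = np.zeros(total)
candidate = np.random.default_rng(0).normal(size=groups[0].size)
trial = best.copy()
trial[groups[0]] = candidate
```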
Procedia PDF Downloads 335
19304 Suitable Tuning Method Selection for PID Controller Used in Digital Excitation System of Brushless Synchronous Generator
Authors: Deepak M. Sajnekar, S. B. Deshpande, R. M. Mohril
Abstract:
At present, many rotary excitation control systems use an analog type of automatic voltage regulator, which is now starting to be replaced by the digital automatic voltage regulator provided with a PID controller, for which tuning is a challenging task. In cases where a digital excitation control system is used, tuning of the PID controller is still carried out by the pole placement method. Tuning the PID controller used in a static excitation control system is less challenging because it does not involve the exciter time constant. This paper discusses two methods of tuning the PID controller: the pole placement method and the pole-zero cancellation method. A GUI was prepared for both methods on the MATLAB platform. Using this GUI, the performance results and the time required for tuning are compared for both methods. The sensitivity of the methods is also presented with parameter variations such as the loop gain 'K' and the exciter time constant 'te'.
Keywords: digital excitation system, automatic voltage regulator, pole placement method, pole zero cancellation method
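As a small worked sketch of pole-zero cancellation tuning for a two-lag plant G(s) = K/((1 + T1 s)(1 + T2 s)), choosing Ti = T1 + T2 and Td = T1 T2/(T1 + T2) makes the PID numerator equal (1 + T1 s)(1 + T2 s), cancelling both plant poles and leaving a first-order closed loop; the numeric values below are illustrative assumptions.

```python
import numpy as np

K, T1, T2 = 2.0, 1.5, 0.4          # loop gain, exciter and field constants

# PID: C(s) = Kp * (Ti*Td*s^2 + Ti*s + 1) / (Ti*s)
Ti = T1 + T2
Td = T1 * T2 / (T1 + T2)

tau_cl = 0.5                        # desired closed-loop time constant
Kp = Ti / (K * tau_cl)              # open loop becomes Kp*K/(Ti*s)

# Verify cancellation: PID numerator coefficients vs plant denominator
num_pid = np.array([Ti * Td, Ti, 1.0])
den_plant = np.array([T1 * T2, T1 + T2, 1.0])
assert np.allclose(num_pid, den_plant)
print(f"Kp = {Kp:.3f}, Ti = {Ti:.3f} s, Td = {Td:.3f} s")
```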
Procedia PDF Downloads 678
19303 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor
Authors: Jinseon Song, Yongwan Park
Abstract:
In this paper, we propose a method that can estimate a user's position based on a database built from a single camera. Previous positioning systems calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems; however, these methods have a weakness in that they have a large error range due to signal interference. One solution is to estimate position with a camera sensor, but a single camera makes it difficult to obtain relative position data, and a stereo camera makes it difficult to provide real-time position data because of the large amount of image data. First of all, in this research we build an image database of the space where the positioning service is provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we decide the position of the user from the position of the most similar database image. To verify the proposed method, we experimented in real environments, both indoor and outdoor. The proposed method has a wide positioning range and can determine not only the position of the user but also the direction.
Keywords: positioning, distance, camera, features, SURF (Speeded-Up Robust Features), database, estimation
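A hedged sketch of the retrieval step follows. The paper uses SURF; ORB stands in here because SURF lives in opencv-contrib and is patent-encumbered, and the file names, database positions and match threshold are placeholders.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = orb.detectAndCompute(img, None)
    return des

# database: list of (known_position, descriptor_set); file names assumed
database = [((x, y), descriptors(f"db_{i}.jpg"))
            for i, (x, y) in enumerate([(0, 0), (5, 0), (5, 5)])]

def estimate_position(query_path, max_dist=50):
    """Return the position of the most similar database image."""
    q = descriptors(query_path)
    best_pos, best_score = None, -1
    for pos, des in database:
        matches = bf.match(q, des)
        score = sum(1 for m in matches if m.distance < max_dist)
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos

print(estimate_position("user_frame.jpg"))
```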
Procedia PDF Downloads 349
19302 A Novel Method for Silence Removal in Sounds Produced by Percussive Instruments
Authors: B. Kishore Kumar, Rakesh Pogula, T. Kishore Kumar
Abstract:
The steepness of an audio signal produced by musical instruments, specifically percussive instruments, relates to the perception of how high or low the tone is, which can be treated as a frequency closely related to the fundamental frequency. This paper presents a novel method for silence removal and segmentation of music signals produced by percussive instruments, and the performance of the proposed method is studied with the help of MATLAB simulations. The method is based on two simple features, namely the signal energy and the spectral centroid. Once the feature sequences are extracted, a simple thresholding criterion is applied to remove the silence areas in the sound signal. The simulations were carried out on various instruments, such as drum, flute and guitar, and the results of the proposed method were analyzed.
Keywords: percussive instruments, spectral energy, spectral centroid, silence removal
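As a minimal sketch of the two-feature approach, the code below computes short-time energy and spectral centroid per frame on a synthetic burst signal and flags active frames by thresholding; the frame sizes and threshold fractions are assumptions to be tuned per signal.

```python
import numpy as np

def frame_features(x, fs, frame_len=1024, hop=512):
    """Short-time energy and spectral centroid per frame."""
    energies, centroids = [], []
    freqs = np.fft.rfftfreq(frame_len, 1 / fs)
    for start in range(0, len(x) - frame_len, hop):
        frame = x[start:start + frame_len] * np.hanning(frame_len)
        energies.append(np.sum(frame ** 2))
        mag = np.abs(np.fft.rfft(frame))
        centroids.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    return np.array(energies), np.array(centroids)

# Synthetic percussive-like test signal: two decaying bursts with silence.
fs = 16000
t = np.arange(fs) / fs
burst = np.sin(2 * np.pi * 200 * t[:4000]) * np.exp(-8 * t[:4000])
x = np.zeros(2 * fs)
x[1000:5000] += burst
x[20000:24000] += burst

energy, centroid = frame_features(x, fs)
# Simple thresholding: a frame is 'sound' when both features exceed a
# fraction of their dynamic range (thresholds assumed).
active = (energy > 0.1 * energy.max()) & (centroid > 0.5 * np.median(centroid))
print(f"{active.sum()} active frames of {active.size}")
```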
Procedia PDF Downloads 411
19301 Building and Tree Detection Using Multiscale Matched Filtering
Authors: Abdullah H. Özcan, Dilara Hisar, Yetkin Sayar, Cem Ünsalan
Abstract:
In this study, an automated building and tree detection method is proposed using DSM data and a true orthophoto image. Multiscale matched filtering is applied to the DSM data: first, the watershed transform is applied; then, Otsu's thresholding is used as an adaptive threshold to segment each watershed region. Detected objects are masked with the NDVI to separate buildings from trees. The proposed method is able to detect buildings and trees without requiring any elevation threshold. We tested our method on the ISPRS semantic labeling dataset and obtained promising results.
Keywords: building detection, local maximum filtering, matched filtering, multiscale
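A hedged sketch of the final classification step: segment elevated objects from the DSM with Otsu's adaptive threshold, then split them into trees and buildings with an NDVI mask. The synthetic arrays and the NDVI cutoff of 0.3 are assumptions, and the watershed/multiscale filtering stages are omitted.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
dsm = rng.uniform(0, 2, size=(128, 128))
dsm[30:60, 30:60] += 10.0            # a "building"
dsm[80:100, 80:100] += 8.0           # a "tree" patch

nir = rng.uniform(0.1, 0.2, size=dsm.shape)
red = rng.uniform(0.1, 0.2, size=dsm.shape)
nir[80:100, 80:100] = 0.6            # vegetation reflects strongly in NIR

ndvi = (nir - red) / (nir + red + 1e-9)

elevated = dsm > threshold_otsu(dsm)   # adaptive, no manual height cutoff
trees = elevated & (ndvi > 0.3)        # NDVI cutoff is an assumption
buildings = elevated & ~(ndvi > 0.3)
print(f"building pixels: {buildings.sum()}, tree pixels: {trees.sum()}")
```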
Procedia PDF Downloads 320
19300 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy
Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai
Abstract:
Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential under circumstances of insufficient structure-function knowledge. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized, because no cloning procedure such as transfection is needed; the open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, the capabilities of current screening methods are limited. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label's binding affinity and photostability. To acquire the natural activity of an enzyme, another approach combines a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC, with a turn-around time of 30 minutes per sample, which limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is in high demand. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify label-free chemical components specifically from their inherent molecular vibrations; these characteristic vibrational signals are generated by the different vibrational modes of chemical bonds. With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, CARS can magnify signal levels to several orders of magnitude greater than spontaneous Raman systems, and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, generated by the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH, and this CARS measurement was consistent with the result of a conventional method, UV-Vis. CARS is expected to find application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time.
Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy
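As a rough illustration of the quantification step, the sketch below fits a linear calibration between the 1660 cm⁻¹ peak intensity and NADH concentration, then converts a reaction time course to concentrations; all numbers are synthetic placeholders, not measured data.

```python
import numpy as np

# Calibration standards (mM) and their measured peak intensities
conc_std = np.array([0.0, 2.5, 5.0, 10.0, 20.0])
intensity_std = 150.0 * conc_std + 40.0
intensity_std += np.random.default_rng(0).normal(0, 20, 5)   # noise

slope, intercept = np.polyfit(conc_std, intensity_std, 1)

# Time course of an alcohol dehydrogenase reaction: NADH signal rises as
# NAD+ is reduced; convert each ~0.33 s reading to concentration.
intensity_t = np.array([45.0, 320.0, 610.0, 900.0, 1150.0])
nadh_mM = (intensity_t - intercept) / slope
print(np.round(nadh_mM, 2))
```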
Procedia PDF Downloads 141
19299 A Generalization of the Secret Sharing Scheme Codes Over Certain Ring
Authors: Ibrahim Özbek, Erdoğan Mehmet Özkan
Abstract:
In this study, we generalize the (k,n) threshold secret sharing scheme of Ozbek and Siap to codes over the ring Fq + αFq. In this way, the method obtained in that article can also be used on codes over rings, with new advantages to be obtained. The method of securely sharing a key in cryptography, which Shamir first systematized and Massey carried over to codes, became usable for all error-correcting codes. The firewall of this scheme is based on the hardness of the syndrome decoding problem. An open study area is also left for those working on other rings and code classes; all error-correcting codes have been the working area of this method.
Keywords: secret sharing scheme, linear codes, algebra, finite rings
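For background, the sketch below implements Shamir's original (k, n) threshold scheme over a prime field, the classical construction that the code-based and ring-based variants generalize; the modulus is a toy prime, not a cryptographic parameter.

```python
import random

P = 2087  # toy prime modulus

def share(secret, k, n):
    """Split `secret` into n shares; any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=1234, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares -> 1234
```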
Procedia PDF Downloads 73
19298 Modeling Flow and Deposition Characteristics of Solid CO2 during Choked Flow of CO2 Pipeline in CCS
Authors: Teng lin, Li Yuxing, Han Hui, Zhao Pengfei, Zhang Datong
Abstract:
With the development of carbon capture and storage (CCS), the flow assurance of CO2 transportation becomes more important, particularly for supercritical CO2 pipelines. A relieving system using a choke valve is applied to control the pressure in a CO2 pipeline. However, the temperature of the fluid drops rapidly because of Joule-Thomson cooling (JTC), which may cause solid CO2 to form and block the pipe. In this paper, a Computational Fluid Dynamics (CFD) model, using a modified Lagrangian method, the Reynolds Stress Transport Model (RSM) for turbulence and a stochastic tracking model (STM) for particle trajectories, was developed to predict the deposition characteristics of solid carbon dioxide. The model predictions were in good agreement with experimental data published in the literature. It was observed that the particle distribution affected the deposition behavior: in the region of the sudden expansion, the smaller particles, accumulated tightly on the wall, were dominant for pipe blockage; on the contrary, the solid CO2 particles deposited near the outlet were usually bigger and the stacked structure was looser. According to the calculation results, the movement of the particles can be classified into four main types: turbulent motion close to the sudden expansion structure, balanced motion in the sudden expansion-middle region, inertial motion near the outlet, and escape. Because of these four types of motion, the particle deposits accumulate primarily in the sudden expansion region, the reattachment region and the outlet region. The Stokes number also has an effect on the deposition ratio, and it is recommended to avoid Stokes numbers in the range of 3-8.
Keywords: carbon capture and storage, carbon dioxide pipeline, gas-particle flow, deposition
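For orientation, the sketch below evaluates the particle Stokes number St = τ_p U / L with the relaxation time τ_p = ρ_p d_p² / (18 μ), flagging diameters that land in the 3-8 range the paper recommends avoiding; the velocity, length scale and viscosity are illustrative assumptions.

```python
rho_p = 1562.0     # solid CO2 density, kg/m^3
mu = 1.5e-5        # gas dynamic viscosity, Pa*s (assumed)
U = 30.0           # characteristic velocity after the choke, m/s (assumed)
L = 0.05           # characteristic length, e.g. expansion step height, m

def stokes_number(d_p):
    tau_p = rho_p * d_p**2 / (18.0 * mu)   # particle relaxation time
    return tau_p * U / L

for d_um in (1, 5, 10, 20, 40):
    st = stokes_number(d_um * 1e-6)
    flag = "  <- in the 3-8 range to avoid" if 3 <= st <= 8 else ""
    print(f"d = {d_um:3d} um: St = {st:8.3f}{flag}")
```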
Procedia PDF Downloads 368
19297 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots
Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra
Abstract:
Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, more air quality monitoring stations are planned to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and proposes to set up another 500 stations in the next few years; the number of monitoring stations for each city has been decided based on population data. Setting up ambient air quality monitoring stations, and their operation and maintenance, are highly expensive, so there is a need to optimize monitoring networks for air quality management. The present paper discusses the various methods, such as the Indian Standards (IS) method, the US EPA method and the European Union (EU) method, for arriving at the minimum number of air quality monitoring stations. In addition, optimization of the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) are explored for the design of the air quality network in Chennai city. In summary, 18 additional stations are required for Chennai city, and the potential monitoring locations, with their corresponding land use patterns, are ranked and identified from the 1 km x 1 km grids.
Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation
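As a minimal sketch of the IDW step, the code below interpolates station readings onto a 1 km grid with inverse-distance weights and picks the most isolated cell as a candidate for a new monitor; the station coordinates and PM2.5 values are invented illustrations.

```python
import numpy as np

def idw(stations, values, targets, power=2.0):
    """Inverse Distance Weighted interpolation, weight = 1/d^power."""
    d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                 # avoid division by zero
    w = 1.0 / d**power
    return (w @ values) / w.sum(axis=1)

# Illustrative stations (km coordinates) and PM2.5 readings
stations = np.array([[2.0, 3.0], [8.0, 1.0], [5.0, 7.0], [1.0, 9.0]])
pm25 = np.array([62.0, 45.0, 80.0, 55.0])

gx, gy = np.meshgrid(np.arange(0, 10.0), np.arange(0, 10.0))
grid = np.column_stack([gx.ravel(), gy.ravel()])   # 1 km x 1 km cells
field = idw(stations, pm25, grid)

# Cells far from all stations are natural candidates for new monitors.
far = np.linalg.norm(grid[:, None] - stations[None], axis=2).min(axis=1)
print("most isolated cell:", grid[far.argmax()],
      "IDW PM2.5:", round(field[far.argmax()], 1))
```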
Procedia PDF Downloads 189
19296 'When 2 + 2 = 5': Synergistic Effects of HRM Practices on the Organizational Performance
Authors: Qura-tul-aain Khair, Mohtsham Saeed
Abstract:
Synergy is a main characteristic of a human resource management (HRM) system; it highlights the hidden properties of the system. This research paper has empirically tested whether internally consistent and complementary HR practices/components in the HR system are better able to predict and enhance organizational performance than the sum of the individual practices. The data were collected from a sample of 109 firm respondents in the service industry through a convenience sampling technique. The major finding highlighted that the configurational approach to synergy, i.e., the HRM system as a whole, has the ability to enhance organizational performance more than the sum of the individual HRM practices of the system, confirming that the whole is greater than the sum of its parts.
Keywords: internally consistent HRM practices, synergistic effects, horizontal fit, vertical fit
Procedia PDF Downloads 354
19295 Applying Element Free Galerkin Method on Beam and Plate
Authors: Mahdad M’hamed, Belaidi Idir
Abstract:
This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the partial differential governing equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the partial differential governing equations is obtained by the weighted residual method within a simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it does not require a mesh either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the method; they show that the EFG method is highly efficient in implementation and highly accurate in computation. The method is used to analyze the static deflection of beams and of a plate with a hole.
Keywords: numerical computation, element-free Galerkin (EFG), moving least squares (MLS), meshless methods
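As a minimal sketch of the MLS ingredient of EFG, the code below builds 1D shape functions with a linear basis and a cubic spline weight and checks the partition-of-unity and linear-reproduction properties; the node spacing and support radius are illustrative assumptions.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)
dm = 2.5 * (nodes[1] - nodes[0])        # support radius of each node

def cubic_spline_weight(r):
    """C^2 cubic spline weight on normalized distance r = |x - xI| / dm."""
    w = np.zeros_like(r)
    m1 = r <= 0.5
    m2 = (r > 0.5) & (r <= 1.0)
    w[m1] = 2/3 - 4*r[m1]**2 + 4*r[m1]**3
    w[m2] = 4/3 - 4*r[m2] + 4*r[m2]**2 - (4/3)*r[m2]**3
    return w

def mls_shape(x):
    """Shape function values phi_I(x) for all nodes, linear basis p=[1, x]."""
    w = cubic_spline_weight(np.abs(x - nodes) / dm)
    P = np.column_stack([np.ones_like(nodes), nodes])   # basis at nodes
    A = (P * w[:, None]).T @ P                          # moment matrix
    p = np.array([1.0, x])
    return (w[:, None] * P @ np.linalg.solve(A, p)).ravel()

phi = mls_shape(0.37)
print("partition of unity:", phi.sum())        # should be ~1.0
print("reproduces x:", phi @ nodes)            # should be ~0.37
```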
Procedia PDF Downloads 283