Search results for: threshold method
19020 Software Engineering Inspired Cost Estimation for Process Modelling
Authors: Felix Baumann, Aleksandar Milutinovic, Dieter Roller
Abstract:
Up to this point, business process management projects in general, and business process modelling projects in particular, have not been able to rely on a practical and scientifically validated method to estimate cost and effort. The model development phase, especially, is not covered by any cost estimation method or model. Later phases of business process modelling, starting with implementation, are covered by initial solutions discussed in the literature. This article proposes a method to fill this gap by deriving a cost estimation method from available methods in similar domains, namely software development or software engineering. Software development is closely similar to process modelling, as we show. After the proposition of this method, different ideas for further analysis and validation of the method are proposed. We derive this method from COCOMO II and Function Point, which are established methods of effort estimation in the domain of software development. For this we lay out similarities between the software development process and the process of process modelling, which is a phase of the Business Process Management life-cycle.
Keywords: COCOMO II, business process modeling, cost estimation method, BPM COCOMO
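For orientation, the post-architecture COCOMO II effort equation the abstract derives from can be sketched as below. The calibration constants A = 2.94 and B = 0.91 are the published nominal values; the scale-factor and cost-driver ratings here are purely illustrative assumptions, not values from the paper.

```python
import math

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
    """Post-architecture COCOMO II effort estimate in person-months.

    ksloc: size in thousands of source lines of code (in the paper's
    analogy, a proxy for process-model size).
    scale_factors: the five scale-factor ratings (precedentedness,
    flexibility, risk resolution, team cohesion, process maturity).
    effort_multipliers: cost-driver ratings, nominal value 1.0 each.
    """
    e = b + 0.01 * sum(scale_factors)   # size exponent (diseconomy of scale)
    em = math.prod(effort_multipliers)  # product of cost drivers
    return a * ksloc ** e * em

# Illustrative ratings: all cost drivers nominal (1.0), mid-range scale factors.
sf = [3.72, 3.04, 4.24, 3.29, 4.68]
effort = cocomo2_effort(10, sf, [1.0] * 7)
```

With these illustrative inputs the exponent is about 1.10, so doubling model size more than doubles estimated effort, which is the diseconomy-of-scale behaviour the method inherits from COCOMO II.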
Procedia PDF Downloads 441
19019 Selection the Most Suitable Method for DNA Extraction from Muscle of Iran's Canned Tuna by Comparison of Different DNA Extraction Methods
Authors: Marjan Heidarzadeh
Abstract:
High quality and purity of DNA isolated from canned tuna are essential for species identification. In this study, the efficiency of five different DNA extraction methods was compared: the method of the national standard of Iran, the CTAB precipitation method, the Wizard DNA Clean-Up system, NucleoSpin, and GenomicPrep. DNA was extracted from two different canned tuna products, in brine and in oil, of the same tuna species. Three samples of each type of product were analyzed with each of the methods. The quantity and quality of the extracted DNA were evaluated using the absorbance at 260 nm and the A260/A280 ratio measured with a Picodrop spectrophotometer. Results showed that DNA extraction from canned tuna preserved in different liquid media can be optimized by employing a specific DNA extraction method in each case. The best results were obtained with the CTAB method for canned tuna in oil and with the Wizard method for canned tuna in brine.
Keywords: canned tuna, PCR, DNA, DNA extraction methods, species identification
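The two spectrophotometric readings the study relies on translate into concentration and purity by standard conversions (A260 of 1.0 corresponds to roughly 50 ng/µL of double-stranded DNA; an A260/A280 ratio near 1.8 indicates protein-free DNA). A minimal sketch of that calculation, with standard constants rather than the study's instrument settings:

```python
def dna_quality(a260, a280, dilution_factor=1.0):
    """Estimate dsDNA concentration (ng/uL) and purity from absorbance.

    Uses the standard conversion A260 = 1.0 ~ 50 ng/uL dsDNA; a ratio in
    roughly the 1.7-2.0 band is commonly taken as acceptably pure.
    Returns (concentration, is_acceptably_pure, ratio).
    """
    concentration = a260 * 50.0 * dilution_factor
    ratio = a260 / a280
    return concentration, 1.7 <= ratio <= 2.0, ratio
```

For example, a reading of A260 = 0.5 and A280 = 0.28 gives 25 ng/µL at a ratio of about 1.79, which would pass this purity screen.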
Procedia PDF Downloads 657
19018 Congestion Control in Mobile Network by Prioritizing Handoff Calls
Authors: O. A. Lawal, O. A. Ojesanmi
Abstract:
The demand for wireless cellular services continues to increase while radio resources remain limited. Network operators therefore have to continuously manage the scarce radio resources in order to maintain an acceptable quality of service for mobile users. This paper proposes a way to handle congestion in the mobile network by prioritizing handoff calls, using the guard channel allocation scheme. The algorithm uses a specific threshold value for the time of allocation of the channel. The scheme is simulated by generating data for different traffic loads in the network, as would occur in real life. The results are used to determine the handoff call dropping probability and the new call blocking probability as measures of network performance.
Keywords: call block, channel, handoff, mobile cellular network
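The admission rule at the heart of the guard channel scheme can be sketched as follows: handoff calls may use every channel, while new calls are refused once only the reserved guard channels remain. This is a minimal sketch; the paper's time-based threshold refinement and traffic generation are omitted, and the channel counts are illustrative.

```python
class GuardChannelCell:
    """Cell with `total_channels` channels, of which `guard_channels`
    are reserved exclusively for handoff calls."""

    def __init__(self, total_channels, guard_channels):
        self.c = total_channels
        self.g = guard_channels
        self.busy = 0

    def admit(self, is_handoff):
        # Handoff calls see all C channels; new calls only C - g.
        limit = self.c if is_handoff else self.c - self.g
        if self.busy < limit:
            self.busy += 1
            return True
        return False  # new call blocked, or handoff call dropped

    def release(self):
        if self.busy:
            self.busy -= 1
```

Counting the `admit` failures for each call class over a simulated arrival stream yields exactly the two performance measures the abstract targets: new-call blocking probability and handoff dropping probability.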
Procedia PDF Downloads 394
19017 An Optimized Method for Calculating the Linear and Nonlinear Response of SDOF System Subjected to an Arbitrary Base Excitation
Authors: Hossein Kabir, Mojtaba Sadeghi
Abstract:
Finding the linear and nonlinear responses of a typical single-degree-of-freedom (SDOF) system is generally regarded as a time-consuming process. This study provides modifications to the well-known Newmark method in order to make it more time efficient than it used to be, and more accurate by modifying the system in its own nonlinear state. The efficacy of the presented method is demonstrated by applying three base excitations, the Tabas 1978, El Centro 1940, and Mexico City/SCT 1985 earthquakes, to an SDOF system and computing the strength reduction factor, yield pseudo-acceleration, and ductility factor.
Keywords: single-degree-of-freedom system (SDOF), linear acceleration method, nonlinear excited system, equivalent displacement method, equivalent energy method
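For reference, the unmodified Newmark scheme the study starts from can be sketched as below, in its linear-acceleration variant (γ = 1/2, β = 1/6). This is the textbook single-step update for a linear SDOF system, not the authors' optimized or nonlinear version.

```python
import math

def newmark_sdof(m, c, k, u0, v0, loads, dt, gamma=0.5, beta=1/6):
    """Newmark time stepping for a linear SDOF system m*u'' + c*u' + k*u = p(t).

    beta = 1/6 gives the linear-acceleration variant named in the keywords.
    Returns the displacement history, one entry per load sample.
    """
    a = (loads[0] - c * v0 - k * u0) / m  # initial acceleration from equilibrium
    u, v = u0, v0
    a1 = m / (beta * dt**2) + gamma * c / (beta * dt)
    a2 = m / (beta * dt) + (gamma / beta - 1) * c
    a3 = (1 / (2 * beta) - 1) * m + dt * (gamma / (2 * beta) - 1) * c
    k_hat = k + a1  # effective stiffness, constant for a linear system
    history = [u0]
    for p in loads[1:]:
        u_new = (p + a1 * u + a2 * v + a3 * a) / k_hat
        v_new = (gamma / (beta * dt)) * (u_new - u) \
            + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) \
            - (1 / (2 * beta) - 1) * a
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history

# Sanity check: undamped oscillator with natural period 1 s, released
# from u = 1 with no load; after one full period u should return near 1.
k = (2 * math.pi) ** 2
resp = newmark_sdof(m=1.0, c=0.0, k=k, u0=1.0, v0=0.0,
                    loads=[0.0] * 1001, dt=0.001)
```

Replacing the zero load history with sampled ground-acceleration forces (−m·üg) turns this free-vibration check into the base-excitation response the abstract computes.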
Procedia PDF Downloads 320
19016 A Semi-Implicit Phase Field Model for Droplet Evolution
Authors: M. H. Kazemi, D. Salac
Abstract:
A semi-implicit phase field method for droplet evolution is proposed. Using the phase field Cahn-Hilliard equation, we are able to track the interface in multiphase flow. The idea of a semi-implicit finite difference scheme is reviewed and employed to solve two nonlinear equations, including the Navier-Stokes and the Cahn-Hilliard equations. The use of a semi-implicit method allows us to have larger time steps compared to explicit schemes. The governing equations are coupled and then solved by a GMRES solver (generalized minimal residual method) using modified Gram-Schmidt orthogonalization. To show the validity of the method, we apply the method to the simulation of a rising droplet, a leaky dielectric drop and the coalescence of drops. The numerical solutions to the phase field model match well with existing solutions over a defined range of variables.Keywords: coalescence, leaky dielectric, numerical method, phase field, rising droplet, semi-implicit method
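The modified Gram-Schmidt orthogonalization step the abstract uses inside its GMRES solver can be sketched as below; in GMRES this is how each new Krylov vector is orthogonalised against the basis built so far. This is a generic pure-Python sketch, not the authors' coupled Navier-Stokes/Cahn-Hilliard implementation.

```python
def modified_gram_schmidt(vectors):
    """Orthonormalise a list of vectors (lists of floats), one at a time.

    Modified Gram-Schmidt subtracts each projection immediately after
    computing it, which is markedly more stable in floating point than
    the classical variant that computes all projections from the
    original vector.
    """
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            h = sum(wi * qi for wi, qi in zip(w, q))  # projection coefficient
            w = [wi - h * qi for wi, qi in zip(w, q)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > 1e-12:  # drop (numerically) linearly dependent vectors
            basis.append([wi / norm for wi in w])
    return basis
```

In a full GMRES implementation the coefficients `h` and the final `norm` would also be stored as a column of the Hessenberg matrix used in the least-squares solve.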
Procedia PDF Downloads 482
19015 Statistical Modelling of Maximum Temperature in Rwanda Using Extreme Value Analysis
Authors: Emmanuel Iyamuremye, Edouard Singirankabo, Alexis Habineza, Yunvirusaba Nelson
Abstract:
Temperature is one of the most important climatic factors for crop production. However, extreme temperatures cause droughts, heat waves, and cold spells that have various consequences for human life, agriculture, and the environment in general. It is necessary to provide reliable information on the incidence and probability of such extreme events. In the 21st century, the world faces a huge number of threats, especially from climate change due to global warming and environmental degradation. The rise in temperature has a direct effect on the decrease in rainfall, which affects crop growth and development and in turn decreases crop yield and quality. Countries that are heavily dependent on agriculture tend to suffer most and need to take preventive steps to overcome these challenges. The main objective of this study is to model the statistical behaviour of extreme maximum temperature values in Rwanda. To achieve this objective, daily temperature data spanning January 2000 to December 2017, recorded at nine weather stations and obtained from the Rwanda Meteorological Agency, were used. Two methods, the block maxima (BM) method and the peaks-over-threshold (POT) method, were applied to model and analyse extreme temperatures. Model parameters were estimated, while extreme temperature return periods and confidence intervals were predicted. The model fit suggests the Gumbel and Beta distributions as the most appropriate models for the annual maxima of daily temperature. The results show that temperature will continue to increase, as indicated by the estimated return levels.
Keywords: climate change, global warming, extreme value theory, Rwanda, temperature, generalised extreme value distribution, generalised Pareto distribution
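Since the block-maxima fit selects the Gumbel sub-family of the GEV, the return levels the abstract reports can be illustrated with a simple Gumbel fit. The sketch below fits by the method of moments, a stand-in for the maximum-likelihood estimation typically used in such studies, and the annual maxima are synthetic, not the study's data.

```python
import math
import statistics

def gumbel_return_level(annual_maxima, return_period_years):
    """Return level of the T-year event from a Gumbel fit by moments.

    Moment estimators: scale beta = s * sqrt(6) / pi, location
    mu = mean - 0.5772 * beta (Euler-Mascheroni constant); the T-year
    level is the quantile at non-exceedance probability 1 - 1/T.
    """
    mean = statistics.mean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    beta = std * math.sqrt(6) / math.pi
    mu = mean - 0.5772 * beta
    p = 1.0 - 1.0 / return_period_years
    return mu - beta * math.log(-math.log(p))

# Illustrative annual maximum temperatures (deg C), not observed data.
maxima = [29.1, 30.4, 28.7, 31.2, 30.0, 29.8, 31.5, 30.9, 29.5, 30.6]
level_50 = gumbel_return_level(maxima, 50)
```

By construction the 50-year level sits above every value in a short record, and longer return periods give strictly higher levels, which is how rising return-level curves signal the warming trend the abstract describes.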
Procedia PDF Downloads 183
19014 Teacher’s Perception of Dalcroze Method Course as Teacher’s Enhancement Course: A Case Study in Hong Kong
Authors: Ka Lei Au
Abstract:
The Dalcroze method has been emerging in music classrooms, and music teachers are encouraged to integrate music and movement in their teaching. Music programs in colleges in Hong Kong have been introducing method courses, such as the Orff and Dalcroze methods, into music teaching as part of teacher education programs. Since the targeted students of these courses are music teachers who decide what approach to use in their own classrooms, their perception is especially valuable for identifying how applicable this approach is to their teaching with regard to the teaching and learning culture and environment. This qualitative study aims to explore how the Dalcroze method, as a teacher education course, is perceived by music teachers in three respects: 1) application in music teaching, 2) self-enhancement, and 3) expectations. Through the lens of music teachers, data were collected by survey from 30 music teachers taking the Dalcroze method course in music teaching in Hong Kong. The findings reveal the value of the Dalcroze method in Hong Kong and teachers' intention to use it. They also provide a significant reference for the better development of such courses in the future, in adaptation to the culture, the teaching and learning environment, and the perceptions of teachers, students, and parents of this approach.
Keywords: Dalcroze method, music teaching, perception, self-enhancement, teacher's education
Procedia PDF Downloads 405
19013 Computation of Stress Intensity Factor Using Extended Finite Element Method
Authors: Mahmoudi Noureddine, Bouregba Rachid
Abstract:
In this paper, the stress intensity factors of a slant-cracked plate of AISI 304 stainless steel have been calculated using the extended finite element method (XFEM) and the finite element method (FEM) in the ABAQUS software, and the results were compared with theoretical values.
Keywords: stress intensity factors, extended finite element method, stainless steel, ABAQUS
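The theoretical values such XFEM results are commonly compared against come from the infinite-plate solution for an inclined centre crack under remote uniaxial tension. A sketch of those closed-form mixed-mode factors, offered as a benchmark formula rather than the paper's actual comparison data:

```python
import math

def slant_crack_sifs(sigma, a, beta_deg):
    """Analytic SIFs for a centre crack of half-length a inclined at
    beta degrees to the loading axis in an infinite plate under remote
    tension sigma (consistent units, e.g. MPa and m give MPa*sqrt(m)).

    K_I  = sigma * sqrt(pi a) * sin^2(beta)
    K_II = sigma * sqrt(pi a) * sin(beta) * cos(beta)
    """
    b = math.radians(beta_deg)
    k0 = sigma * math.sqrt(math.pi * a)
    k1 = k0 * math.sin(b) ** 2             # mode I (opening)
    k2 = k0 * math.sin(b) * math.cos(b)    # mode II (in-plane sliding)
    return k1, k2
```

At β = 90° the crack is normal to the load and the solution collapses to the pure mode I result K_I = σ√(πa); at β = 45° the two modes are equal, the classic check case for a slant-crack model.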
Procedia PDF Downloads 618
19012 Spectrophotometric Methods for Simultaneous Determination of Binary Mixture of Amlodipine Besylate and Atenolol Based on Dual Wavelength
Authors: Nesrine T. Lamie
Abstract:
Four accurate, precise, and sensitive spectrophotometric methods are developed for the simultaneous determination of a binary mixture containing amlodipine besylate (AM) and atenolol (AT), where AM is determined at its λmax of 360 nm (0D, the zero-order spectrum), while atenolol is determined by four different methods. Method (A) is the absorption factor method (AFM). Method (B) is the new ratio difference method (RD), which measures the difference in amplitudes between 210 and 226 nm of the ratio spectrum. Method (C) is a novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The calibration curves are linear over the concentration ranges of 10–80 and 4–40 μg/ml for AM and AT, respectively. These methods were tested by analyzing synthetic mixtures of the cited drugs and were applied to their commercial pharmaceutical preparation. The validity of the results was assessed by applying the standard addition technique. The results obtained were found to agree statistically with those obtained by a reported method, showing no significant difference with respect to accuracy and precision.
Keywords: amlodipine, atenolol, absorption factor, constant center, mean centering, ratio difference
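The principle behind the ratio difference method (B) can be shown numerically: dividing the mixture spectrum by a standard spectrum of the interfering component turns that component's contribution into a constant, so the difference of the ratio spectrum at two wavelengths cancels it. The absorptivities below are invented for illustration, not the measured spectra of AM and AT.

```python
def ratio_difference(mixture, divisor, wl1, wl2):
    """Ratio-difference signal for the analyte absent from the divisor.

    mixture: absorbance spectrum of the binary mixture (wavelength -> A)
    divisor: spectrum of a standard of the interfering component
    The interferent contributes a wavelength-independent constant to the
    ratio spectrum, so the two-wavelength difference depends only on the
    analyte concentration.
    """
    return mixture[wl1] / divisor[wl1] - mixture[wl2] / divisor[wl2]

# Synthetic Beer-Lambert spectra: mixture = cA * epsA + cB * epsB,
# with hypothetical absorptivities at the paper's 210/226 nm pair.
eps_a = {210: 0.90, 226: 0.40}   # analyte
eps_b = {210: 0.50, 226: 0.50}   # interferent (used as the divisor)
c_a, c_b = 2.0, 3.0
mix = {wl: c_a * eps_a[wl] + c_b * eps_b[wl] for wl in eps_a}
signal = ratio_difference(mix, eps_b, 210, 226)
```

The test below checks the two properties that make the method work: the signal is proportional to the analyte concentration and completely insensitive to the interferent's.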
Procedia PDF Downloads 304
19011 Starting Order Eight Method Accurately for the Solution of First Order Initial Value Problems of Ordinary Differential Equations
Authors: James Adewale, Joshua Sunday
Abstract:
In this paper, we developed a linear multistep method implemented in predictor-corrector mode. The corrector is developed by collocation and interpolation of power series approximate solutions at selected grid points to give a continuous linear multistep method, which is evaluated at selected grid points to give a discrete linear multistep method. The predictors are also developed by collocation and interpolation of power series approximate solutions to give a continuous linear multistep method. The continuous linear multistep method is then solved for the independent solution to give a continuous block formula, which is evaluated at selected grid points to give a discrete block method. Basic properties of the corrector were investigated, and it was found to be zero-stable, consistent, and convergent. The efficiency of the method was tested on linear, nonlinear, oscillatory, and stiff first-order initial value problems of ordinary differential equations. The results were found to be better in terms of computer time and error bound when compared with existing methods.
Keywords: predictor, corrector, collocation, interpolation, approximate solution, independent solution, zero stable, consistent, convergent
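The predict-evaluate-correct structure described above can be illustrated with a much simpler scheme than the authors' order-eight block method: a two-step Adams-Bashforth predictor paired with a trapezoidal (Adams-Moulton) corrector. This is a low-order stand-in to show the mechanics, not the paper's method.

```python
import math

def predictor_corrector(f, t0, y0, h, steps):
    """Solve y' = f(t, y) with an AB2 predictor and trapezoidal corrector.

    The multistep scheme needs two starting values, so the second one is
    bootstrapped with a single Heun (RK2) step.
    """
    k1 = f(t0, y0)
    k2 = f(t0 + h, y0 + h * k1)
    ts = [t0, t0 + h]
    ys = [y0, y0 + h * (k1 + k2) / 2]
    for n in range(1, steps):
        fn, fn1 = f(ts[n], ys[n]), f(ts[n - 1], ys[n - 1])
        y_pred = ys[n] + h * (3 * fn - fn1) / 2            # predict
        t_next = ts[n] + h
        y_corr = ys[n] + h * (fn + f(t_next, y_pred)) / 2  # evaluate + correct
        ts.append(t_next)
        ys.append(y_corr)
    return ts, ys

# Test problem y' = -y, y(0) = 1, exact solution exp(-t).
ts, ys = predictor_corrector(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

The corrector step re-evaluates `f` at the predicted point, which is exactly the role the paper's collocation-derived corrector plays at each grid point of the block.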
Procedia PDF Downloads 501
19010 Mapping Method to Solve a Nonlinear Schrodinger Type Equation
Authors: Edamana Vasudevan Krishnan
Abstract:
This paper studies solitons in optical materials with the help of the mapping method. Two types of nonlinear media have been investigated, namely cubic nonlinearity and quintic nonlinearity. The soliton solutions, shock wave solutions, and singular solutions have been derived under certain constraint conditions.
Keywords: solitons, integrability, metamaterials, mapping method
Procedia PDF Downloads 494
19009 Performance Evaluation of Refinement Method for Wideband Two-Beams Formation
Authors: C. Bunsanit
Abstract:
This paper presents a refinement method for the two-beam formation of a wideband smart antenna. The refinement method for the weighting coefficients is based on fully spatial signal processing using the inverse discrete Fourier transform (IDFT), and its simulation results are presented using MATLAB. The radiation pattern is created by multiplying the incoming signal with real weights and then summing them together. These real weighting coefficients are computed by the IDFT method; however, the range of weight values is relatively wide, so the refinement method is used to reduce this range. The radiation pattern is controlled by five input parameters: the maximum weighting coefficient, the wideband signal, the direction of the main beam, the beamwidth, and the maximum minor lobe level. Comparison of the simulation results obtained using the refinement method and using the IDFT alone shows that the refinement method works well for wideband two-beam formation.
Keywords: fully spatial signal processing, beam forming, refinement method, smart antenna, weighting coefficient, wideband
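The IDFT weight-computation step (before any refinement) can be sketched as below: the desired array factor is specified by N samples, an inverse DFT maps them to element weights, and a forward DFT recovers the pattern samples. The two-beam target here is illustrative and chosen conjugate-symmetric so that the weights come out real; the paper's actual pattern specification and refinement step are not reproduced.

```python
import cmath

def idft_weights(pattern_samples):
    """Real element weights from N samples of a desired array factor, via IDFT."""
    n = len(pattern_samples)
    weights = []
    for k in range(n):
        s = sum(pattern_samples[m] * cmath.exp(2j * cmath.pi * k * m / n)
                for m in range(n))
        weights.append((s / n).real)
    return weights

def array_factor(weights, m):
    """Forward DFT sample m of the pattern produced by the weights."""
    n = len(weights)
    return sum(w * cmath.exp(-2j * cmath.pi * k * m / n)
               for k, w in enumerate(weights))

# Two-beam target: unit response in two mirrored sample directions
# (bins 1 and 7 of 8), nulls elsewhere.
desired = [0, 1, 0, 0, 0, 0, 0, 1]
w = idft_weights(desired)
```

Because the IDFT is exact, the forward transform of the weights reproduces the two unit beams and the nulls; the refinement the paper evaluates then trades some of this exactness for a narrower spread of weight values.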
Procedia PDF Downloads 226
19008 A Novel Method for Solving Nonlinear Whitham–Broer–Kaup Equation System
Authors: Ayda Nikkar, Roghayye Ahmadiasl
Abstract:
In this letter, an analytical method called the homotopy perturbation method (HPM), which does not require a small parameter in the equation, is implemented for solving the nonlinear Whitham–Broer–Kaup (WBK) partial differential equation. In this method, a homotopy is constructed for the equation. The initial approximations can be chosen freely, with possible unknown constants that can be determined by imposing the boundary and initial conditions. Comparison of the results with those of the exact solution has led us to significant conclusions. The results reveal that the HPM is very effective, convenient, and quite accurate for systems of nonlinear equations. It is expected that the HPM will find wide application in engineering.
Keywords: homotopy perturbation method, Whitham–Broer–Kaup (WBK) equation, modified Boussinesq, approximate long wave
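For readers unfamiliar with the construction, the homotopy the abstract refers to can be sketched in He's general operator form (with L the linear part, N the nonlinear part, f the source term, and u₀ the freely chosen initial approximation; the WBK-specific operators are not reproduced here):

```latex
H(v, p) = (1 - p)\,\bigl[L(v) - L(u_0)\bigr]
        + p\,\bigl[L(v) + N(v) - f(r)\bigr] = 0, \qquad p \in [0, 1],
\qquad
v = v_0 + p\,v_1 + p^2 v_2 + \cdots, \qquad
u = \lim_{p \to 1} v = v_0 + v_1 + v_2 + \cdots
```

At p = 0 the homotopy reduces to L(v) = L(u₀), which is trivially satisfied by the initial approximation; at p = 1 it recovers the original nonlinear equation, and matching powers of p yields the successive correction terms without any small physical parameter.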
Procedia PDF Downloads 311
19007 Reduced Differential Transform Methods for Solving the Fractional Diffusion Equations
Authors: Yildiray Keskin, Omer Acan, Murat Akkus
Abstract:
In this paper, the solution of fractional diffusion equations is presented by means of the reduced differential transform method. Fractional partial differential equations have special importance in engineering and sciences. Application of reduced differential transform method to this problem shows the rapid convergence of the sequence constructed by this method to the exact solution. The numerical results show that the approach is easy to implement and accurate when applied to fractional diffusion equations. The method introduces a promising tool for solving many fractional partial differential equations.Keywords: fractional diffusion equations, Caputo fractional derivative, reduced differential transform method, partial
Procedia PDF Downloads 525
19006 Efficacy of Corporate Social Responsibility in Corporate Governance Structures of Family Owned Business Groups in India
Authors: Raveena Naz
Abstract:
The concept of ‘Corporate Social Responsibility’ (CSR) has often relied on firms thinking beyond their economic interest despite the larger debate of shareholder versus stakeholder interest. India gave legal recognition to CSR in the Companies Act, 2013 which promises better corporate governance. CSR in India is believed to be different for two reasons: the dominance of family business and the history of practice of social responsibility as a form of philanthropy (mainly among the family business). This paper problematises the actual structure of business houses in India and the role of CSR in India. When the law identifies each company as a separate business entity, the economics of institutions emphasizes the ‘business group’ consisting of a plethora of firms as the institutional organization of business. The capital owned or controlled by the family group is spread across the firms through the interholding (interlocked holding) structures. This creates peculiar implications for CSR legislation in India. The legislation sets criteria for individual firms to undertake liability of mandatory CSR if they are above a certain threshold. Within this framework, the largest family firms which are all part of family owned business groups top the CSR expenditure list. The interholding structures, common managers, auditors and series of related party transactions among these firms help the family to run the business as a ‘family business’ even when the shares are issued to the public. This kind of governance structure allows family owned business group to show mandatory compliance of CSR even when they actually spend much less than what is prescribed by law. This aspect of the family firms is not addressed by the CSR legislation in particular or corporate governance legislation in general in India. The paper illustrates this with an empirical study of one of the largest family owned business group in India which is well acclaimed for its CSR activities. 
The individual companies under the business group are identified, shareholding patterns explored, related party transactions investigated, common managing authorities are identified; and assets, liabilities and profit/loss accounting practices are analysed. The data has been mainly collected from mandatory disclosures in the annual reports and financial statements of the companies within the business group accessed from the official website of the ultimate controlling authority. The paper demonstrates how the business group through these series of shareholding network reduces its legally mandated CSR liability. The paper thus indicates the inadequacy of CSR legislation in India because the unit of compliance is an individual firm and it assumes that each firm is independent and only connected to each other through market dealings. The law does not recognize the inter-connections of firms in corporate governance structures of family owned business group and hence is inadequate in its design to effect the threshold level of CSR expenditure. This is the central argument of the paper.Keywords: business group, corporate governance, corporate social responsibility, family firm
Procedia PDF Downloads 280
19005 Path Planning for Collision Detection between two Polyhedra
Authors: M. Khouil, N. Saber, M. Mestari
Abstract:
This study proposes a different architecture for path planning using the NECMOP, where several nonlinear objective functions must be optimized in a conflicting situation. The ability to detect and avoid collisions is very important for mobile intelligent machines. However, many artificial vision systems are not yet able to extract this wealth of information quickly and cheaply. This network, which has been reviewed in detail, has enabled us to solve the problem of collision detection between two convex polyhedra with a new approach in fixed time (O(1) time). We used two types of neurons, linear and threshold logic, which simplified the actual implementation of all the proposed networks. This article presents a comprehensive algorithm that determines, through the AMAXNET network, a measure (a mini-maximum point) in fixed time, which allows us to detect the presence of a potential collision.
Keywords: path planning, collision detection, convex polyhedron, neural network
Procedia PDF Downloads 438
19004 Reductive Control in the Management of Redundant Actuation
Authors: Mkhinini Maher, Knani Jilani
Abstract:
We present in this work the performance of a mobile omnidirectional robot through an evaluation of its management of actuation redundancy, leading to the predictive control implemented. The distribution of the wrench over the robot's actions, through the Moore-Penrose pseudoinverse, corresponds to a geometric distribution of efforts. We show that the load on the vehicle's wheels is not equi-distributed, depending on the wheel configuration and on the robot's movement. Thus, the sliding threshold is not the same for the three wheels of the vehicle. We propose exploiting the redundancy of actuation to reduce the risk of wheel sliding and thereby improve the accuracy of displacement. This kind of approach has previously been studied for legged robots.
Keywords: mobile robot, actuation, redundancy, omnidirectional, Moore-Penrose pseudoinverse, reductive control
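The pseudoinverse distribution step can be sketched for a wide, full-row-rank actuation matrix using the identity A⁺ = Aᵀ(AAᵀ)⁻¹, which yields the minimum-norm wheel efforts realising a requested body force. The three-wheel geometry below is illustrative, not the robot's actual kinematic matrix.

```python
import math

def pseudoinverse_distribution(a, b):
    """Minimum-norm effort distribution x = A^+ b for a 2 x n, full-row-rank
    matrix A (two planar force equations, n wheel efforts), using
    A^+ = A^T (A A^T)^(-1) with the 2x2 Gram matrix inverted in closed form.
    """
    g00 = sum(x * x for x in a[0])
    g01 = sum(x * y for x, y in zip(a[0], a[1]))
    g11 = sum(y * y for y in a[1])
    det = g00 * g11 - g01 * g01
    inv = [[g11 / det, -g01 / det], [-g01 / det, g00 / det]]
    y = [inv[0][0] * b[0] + inv[0][1] * b[1],
         inv[1][0] * b[0] + inv[1][1] * b[1]]
    return [a[0][j] * y[0] + a[1][j] * y[1] for j in range(len(a[0]))]

# Three omni-wheels at 120 degrees mapping wheel forces to body (Fx, Fy);
# request one unit of force along x.
angles = [math.radians(d) for d in (0, 120, 240)]
A = [[-math.sin(t) for t in angles], [math.cos(t) for t in angles]]
x = pseudoinverse_distribution(A, [1.0, 0.0])
```

The minimum-norm solution is exactly the "geometric" distribution the abstract describes; the reductive control idea is to depart from it deliberately, re-weighting efforts away from the wheel closest to its sliding threshold.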
Procedia PDF Downloads 511
19003 The Construction of Exact Solutions for the Nonlinear Lattice Equation via Coth and Csch Functions Method
Authors: A. Zerarka, W. Djoudi
Abstract:
The method developed in this work uses generalised coth and csch functions to construct new exact travelling-wave solutions of the nonlinear lattice equation. The technique of the homogeneous balance method is used to handle the appropriate solutions.
Keywords: coth functions, csch functions, nonlinear partial differential equation, travelling wave solutions
Procedia PDF Downloads 663
19002 Block Matching Based Stereo Correspondence for Depth Calculation
Authors: G. Balakrishnan
Abstract:
Stereo correspondence plays a major role in estimating the distance of an object from a stereo camera pair in various applications. In this paper, a stereo correspondence algorithm based on a block-matching technique is presented. Initially, an energy matrix is calculated for every disparity using a modified sum of absolute differences (SAD). Large energy matrix errors are removed by using a threshold value in order to reduce mismatch errors. A smoothing filter is applied to eliminate unreliable disparity estimates across object boundaries. The purpose is to improve the reliability of the disparity map calculation. The experimental results show that the final depth map produces better results and can be used in all applications using stereo cameras.
Keywords: stereo matching, filters, energy matrix, disparity
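The core of SAD block matching with threshold rejection can be sketched on a single scanline: for each candidate disparity, sum the absolute differences over a window, keep the minimum, and reject the match when even the best energy exceeds the threshold. This 1-D sketch uses plain SAD rather than the paper's modified variant, and real stereo matches 2-D blocks per scanline.

```python
def sad_disparity(left, right, x, half_window, max_disparity, threshold):
    """Disparity at column x of a 1-D scanline pair by SAD block matching.

    Returns None when the best SAD energy exceeds the threshold,
    mirroring the rejection of unreliable matches described above.
    """
    best_d, best_sad = None, float("inf")
    for d in range(max_disparity + 1):
        if x - half_window - d < 0 or x + half_window >= len(left):
            continue  # window would fall off the scanline
        sad = sum(abs(left[x + i] - right[x + i - d])
                  for i in range(-half_window, half_window + 1))
        if sad < best_sad:
            best_d, best_sad = d, sad
    return best_d if best_sad <= threshold else None

# Right scanline is the left one shifted by 3 pixels.
left = [0, 0, 10, 40, 90, 40, 10, 0, 0, 0, 0, 0]
right = left[3:] + [0, 0, 0]
d = sad_disparity(left, right, x=4, half_window=1, max_disparity=4, threshold=10)
```

Running this at the intensity peak recovers the true 3-pixel shift, while a scanline with no plausible match (all energies above the threshold) is rejected instead of producing a spurious disparity.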
Procedia PDF Downloads 215
19001 Learning outside the Box by Using Memory Techniques Skill: Case Study in Indonesia Memory Sports Council
Authors: Muhammad Fajar Suardi, Fathimatufzzahra, Dela Isnaini Sendra
Abstract:
Learning is a routine activity, especially for students and academics. Yet many people have not been using and maximizing how their brains work, and some do not know the times when the brain best captures lessons, so the knowledge absorbed is less than it could be. The Indonesia Memory Sports Council (IMSC) is an institution engaged in brain performance and the development of effective learning methods using several techniques for memorising lessons and grasping knowledge well, including the loci method, the substitution method, and the chain method. This study aims to determine the techniques and benefits of using these methods in learning and memorization by applying the memory techniques taught by the Indonesia Memory Sports Council (IMSC) to students, and the difference when these methods are not used. This research uses a quantitative approach with a survey method addressed to students of the Indonesia Memory Sports Council (IMSC). The results of this study indicate that learning, understanding, and remembering lessons using the memory techniques taught by the Indonesia Memory Sports Council is very effective, and lessons are absorbed faster than when learning without these techniques; this affects students' academic achievement in their respective educational institutions.
Keywords: chain method, Indonesia memory sports council, loci method, substitution method
Procedia PDF Downloads 290
19000 A Finite Element Method Simulation for Rocket Motor Material Selection
Authors: T. Kritsana, P. Sawitri, P. Teeratas
Abstract:
This article studies the effect of pressure on a rocket motor case by finite element method simulation in order to select the optimal material in the rocket motor manufacturing process. In this study, cylindrical tubes with an outside diameter of 122 mm and a thickness of 3 mm are used for the simulation. The candidate rocket motor case materials are AISI 4130, AISI 1026, AISI 1045, AL 2024, and AL 7075. The internal pressure used for the simulation is 22 MPa. The finite element results show that at a pressure of 22 MPa, rocket motor cases produced from AISI 4130, AISI 1045, and AL 7075 can be used. A comparison among AISI 4130, AISI 1045, and AL 7075 shows that AISI 4130 has the minimum principal stress, and a check of the finite element results against an analytical calculation found that they have good reliability.
Keywords: rocket motor case, finite element method, principal stress, simulation
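A quick hand check on such FEM results is the thin-wall hoop stress formula σ = p·r/t, using the abstract's geometry and pressure. This is a back-of-the-envelope estimate only (the thin-wall assumption is reasonable at r/t ≈ 19, but it ignores end effects and radial stress), not the paper's principal-stress output.

```python
def hoop_stress_mpa(pressure_mpa, outer_diameter_mm, thickness_mm):
    """Thin-wall estimate of hoop stress in a pressurised tube:
    sigma = p * r_inner / t, with r_inner taken at the bore.
    """
    r_inner = outer_diameter_mm / 2 - thickness_mm
    return pressure_mpa * r_inner / thickness_mm

# The abstract's case: 122 mm OD, 3 mm wall, 22 MPa internal pressure.
sigma = hoop_stress_mpa(22.0, 122.0, 3.0)
```

The estimate comes to about 425 MPa of hoop stress, which makes it plausible that only the stronger candidates survive the comparison; the FEM simulation is what resolves the actual principal-stress distribution per material.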
Procedia PDF Downloads 449
18999 Anterior Chamber Depth Measured with Orbscan and Pentacam Compared with Smith Method in 102 Phakic Eyes
Authors: Mohammad Ghandehari Motlagh
Abstract:
Purpose: To compare anterior chamber depth (ACD) measured with Orbscan II and Pentacam HR against the results of the Smith method. Methods: The Smith method (1979) is a reliable method of measuring ACD with only a slit lamp. In this study, 102 phakic eyes of PRK candidates were imaged with both Orbscan and Pentacam, and ACD was also measured with the Smith method at the slit lamp. The ACD measured with the Smith method was taken as the gold standard and compared with the ACD from the two imaging devices. Cases with contraindications for PRK and pseudophakic eyes were excluded from the study. Results: The mean age of the patients was 35.2 ± 14.8 years, including 56 males (54.9%) and 46 females (45.1%). The correlations of ACD measured with the Smith method with Orbscan and Pentacam are R = 0.958 and R = 0.942, respectively, so Orbscan results can be used in procedures relying on ACD. Conclusion: ACD measured with Orbscan agrees more closely with the Smith method than Pentacam and so can be more useful in surgical procedures relying on ACD results, such as phakic IOLs, and when cycloplegia is contraindicated.
Keywords: orbscan, pentacam, anterior chamber depth, slit lamp
Procedia PDF Downloads 368
18998 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems
Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra
Abstract:
Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures such that minor differences in defining the onset and end of the sprint could result in different FVP metric outcomes. Furthermore, in team-sports, there is a requirement for rapid analysis and feedback of results from multiple athletes, therefore developing standardized and automated methods to improve the speed, efficiency and reliability of this process are warranted. Thus, the purpose of this study was to compare different methods of sprint end detection on the development of FVP profiles from 10Hz GPS/GNSS data through goodness-of-fit and intertrial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol which consisted of 2x40m maximal sprints performed towards the end of a soccer specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10Hz Catapult system unit (Vector S7, Catapult Innovations) inserted in a vest in a pouch between the scapulae. All data were analyzed following common procedures. Variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration constant τ, in addition to horizontal relative force (F₀), velocity at zero (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. Time when peak velocity (MSS) was achieved (zero acceleration), 2. Time after peak velocity drops by -0.4 m/s, 3. Time after peak velocity drops by -0.6 m/s, and 4. When the integrated distance from the GPS/GNSS signal achieves 40-m. 
Goodness-of-fit of each sprint end detection method was determined using the residual sum of squares (RSS) to demonstrate the error of the FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed utilizing intraclass correlation coefficients (ICC). For goodness-of-fit results, the end detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by -0.4 and -0.6 velocity decay, and 40-m end had the highest RSS values. For intertrial reliability, the end of sprint detection techniques that were defined as the time at (method 1) or shortly after (method 2 and 3) when MSS was achieved had very large to near perfect ICC and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02-m. Therefore, sport scientists should implement end of sprint detection either when peak velocity is determined or shortly after to improve goodness of fit to achieve reliable between trial FVP profile metrics. Although, more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into the usual training which shows promise for sprint monitoring in the field with this technology.Keywords: automated, biomechanics, team-sports, sprint
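Field FVP profiling of the kind described above typically fits the mono-exponential sprint model v(t) = MSS·(1 − e^(−t/τ)), from which the reported metrics follow in closed form when air resistance is ignored: relative F₀ = MSS/τ and Pmax = F₀·V₀/4 (the usual Samozino-style simplification; the study's exact pipeline may differ). A sketch with illustrative values, not the players' data:

```python
import math

def fvp_metrics(mss, tau):
    """Relative force-velocity-power metrics from the mono-exponential
    sprint model, ignoring air resistance.

    Returns (f0, v0, pmax): relative force intercept (N/kg), velocity
    intercept (m/s), and relative peak power (W/kg).
    """
    v0 = mss            # velocity intercept equals maximal sprint speed
    f0 = mss / tau      # initial relative horizontal force
    pmax = f0 * v0 / 4  # apex of the linear F-v relationship
    return f0, v0, pmax

def model_velocity(mss, tau, t):
    """Velocity at time t under the mono-exponential model."""
    return mss * (1 - math.exp(-t / tau))

# Illustrative sprint parameters: MSS = 8 m/s, tau = 1.2 s.
f0, v0, pmax = fvp_metrics(8.0, 1.2)
```

The model also shows why the end-detection choice matters: v approaches MSS only asymptotically, so "time at peak velocity" (method 1) is really the sample where measured acceleration first crosses zero, and small decay allowances (methods 2 and 3) land only slightly later on an almost-flat curve.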
Procedia PDF Downloads 119
18997 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. 
This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
Keywords: large language models, prompt design, data analysis, code generation
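A prompt of the kind the abstract describes (natural-language analysis request plus constraints, turned into a code-generation instruction) might be assembled as below. The template structure, function name, and wording are illustrative assumptions — the paper's actual prompt design is not given in the abstract — and the resulting string would then be sent to whichever LLM API is in use.

```python
def build_analysis_prompt(request, columns, dialect="pandas"):
    """Assemble a code-generation prompt from a natural-language data
    analysis request. Includes the elements the strategy calls for:
    a precise task description, the data schema, and output constraints."""
    schema = ", ".join(columns)
    return (
        f"You are a data analyst who writes {dialect} code.\n"
        f"Dataset columns: {schema}.\n"
        f"Task: {request}\n"
        "Constraints: return only executable code, no explanations; "
        "handle missing values; print the final result.\n"
    )
```

The immediate-feedback mechanism mentioned in the abstract would correspond to running the generated code and, on failure, appending the error message to a follow-up prompt.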
Procedia PDF Downloads 40
18996 Experimental Study of Impregnated Diamond Bit Wear During Sharpening
Authors: Rui Huang, Thomas Richard, Masood Mostofi
Abstract:
The lifetime of impregnated diamond bits and their drilling efficiency are in part governed by the bit wear conditions: not only the extent of the diamonds’ wear but also their exposure or protrusion out of the bonding matrix. As individual diamonds wear, the bonding matrix also wears through two-body abrasion (direct matrix-rock contact) and three-body erosion (cuttings trapped in the space between rock and matrix). Although there is some work dedicated to the study of diamond bit wear, there is still a lack of understanding of how matrix erosion and diamond exposure relate to the bit drilling response and drilling efficiency, and no literature on the process that governs bit sharpening, a procedure commonly implemented by drillers when the extent of diamond polishing yields an extremely low rate of penetration. The aim of this research is (i) to derive a correlation between the wear state of the bit and the drilling performance and (ii) to gain a better understanding of the process associated with tool sharpening. The research effort combines specific drilling experiments and precise mapping of the tool cutting face (impregnated diamond bits and segments). Bit wear is produced by drilling through a rock sample at a fixed rate of penetration for a given period of time. Before and after each wear test, the bit drilling response, and thus its efficiency, is mapped out using a tailored experimental protocol. After each drilling test, the bit or segment cutting face is scanned with an optical microscope. The test results show that, under the fixed rate of penetration, diamond exposure increases with drilling distance but at a decreasing rate, up to a threshold exposure that corresponds to the optimum drilling condition for this feed rate. The data further show that the threshold exposure scales with the rate of penetration, up to a point where exposure reaches a maximum beyond which no more matrix can be eroded under normal drilling conditions.
The second phase of this research focuses on the wear process referred to as bit sharpening. Drillers rely on different approaches (increasing the feed rate or decreasing the flow rate) with the aim of tearing worn diamonds away from the bit matrix, wearing out some of the matrix, and thus exposing fresh sharp diamonds and recovering a higher rate of penetration. Although it is a common procedure, there is no rigorous methodology to sharpen the bit while avoiding excessive wear or bit damage. This paper aims to gain some insight into the mechanisms that accompany bit sharpening by carefully tracking diamond fracturing, matrix wear, and erosion, and how they relate to the drilling parameters recorded while sharpening the tool. The results show that there exist optimal conditions (operating parameters and duration of the procedure) for sharpening that minimize overall bit wear, and that the extent of bit sharpening can be monitored in real time.
Keywords: bit sharpening, diamond exposure, drilling response, impregnated diamond bit, matrix erosion, wear rate
Procedia PDF Downloads 99
18995 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence between two ontologies, it is required to specify a set of attribute correspondences across the two corresponding classes by clustering. Clustering reduces the size of the potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the size of the potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on each element’s value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be judged to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Moreover, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances across which there is an attribute correspondence.
This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that are applied to generate a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered potential attributes for a matching. Using clustering reduces the size of the potential element correspondences considered during the mapping activity, which in turn significantly reduces the computational workload; otherwise, all elements of a class in the source ontology would have to be compared with all elements of the corresponding class in the target ontology. K-medoids can effectively cluster the attributes of each class, so that attribute pairs that are not corresponding are not considered when constructing the matching pattern.
Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
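The K-medoids step at the core of the approach can be sketched generically as below. This is a minimal PAM-style implementation, not the authors' code: each attribute is assumed to be represented by a numeric vector of statistical value features, and the initialization and update rules shown are the textbook ones.

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Minimal K-medoids: alternate (1) assigning each point to its
    nearest medoid and (2) replacing each medoid with the cluster member
    minimizing the within-cluster distance sum, until medoids stabilize."""
    rng = np.random.default_rng(seed)
    # full pairwise Euclidean distance matrix
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    labels = np.argmin(d[:, medoids], axis=1)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)
        new = np.array([
            np.where(labels == c)[0][
                np.argmin(d[np.ix_(labels == c, labels == c)].sum(axis=1))
            ]
            for c in range(k)
        ])
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels
```

Only attributes falling in corresponding clusters would then be paired and screened against the coverage-probability threshold, instead of comparing every attribute of one class with every attribute of the other.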
Procedia PDF Downloads 248
18994 Finite Element and Split Bregman Methods for Solving a Family of Optimal Control Problem with Partial Differential Equation Constraint
Authors: Mahmoud Lotfi
Abstract:
In this article, we discuss the solution of an elliptic optimal control problem. First, by using the finite element method, we obtain the discrete form of the problem. The resulting discrete problem is a large-scale constrained optimization problem; solving it with traditional methods is difficult and requires a lot of CPU time and memory. The split Bregman method, however, converts the constrained problem into an unconstrained one and hence saves time and memory. We then use the split Bregman method to solve this problem, and examples show the speed and accuracy of the split Bregman method for solving these types of problems. We also solve the examples with the SQP method and compare the results with those of the split Bregman method.
Keywords: split Bregman method, optimal control with elliptic partial differential equation constraint, finite element method
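The split Bregman iteration itself can be illustrated on a much smaller problem than the PDE-constrained one treated in the paper. The sketch below applies it to 1D total-variation denoising, min_u ½‖u − f‖² + λ‖Du‖₁, splitting the auxiliary variable d = Du; the problem choice and parameter values are illustrative assumptions, but the u-update / shrinkage / Bregman-variable update pattern is exactly the mechanism that replaces a hard constrained solve with cheap unconstrained steps.

```python
import numpy as np

def split_bregman_tv1d(f, lam=0.2, mu=2.0, n_iter=100):
    """Split Bregman for min_u 0.5*||u - f||^2 + lam*||D u||_1.
    Each sweep: (1) solve a linear system for u, (2) soft-threshold
    for d, (3) update the Bregman variable b."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    A = np.eye(n) + mu * D.T @ D              # fixed matrix of the u-update
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))
        Du = D @ u
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - lam / mu, 0.0)
        b = b + Du - d                        # Bregman update
    return u
```

In the paper's setting the quadratic u-update would instead involve the finite element system of the elliptic PDE, but the outer structure of the iteration is unchanged.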
Procedia PDF Downloads 152
18993 Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance
Authors: Nada Jasim Habeeb, Rana Saad Mohammed, Muntaha Khudair Abbass
Abstract:
For faster and more efficient video summarization, this paper presents a surveillance video summarization method that improves on existing summarization techniques. The method relies on temporal differencing to extract the most important data from a large video stream, and uses histogram differencing together with the sum conditional variance, which is robust to illumination variations, in order to extract moving objects. The experimental results showed that the presented method gives better output compared with temporal-differencing-based summarization techniques.
Keywords: temporal differencing, video summarization, histogram differencing, sum conditional variance
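The histogram-differencing component can be sketched as a simple keyframe selector: keep a frame whenever its intensity histogram diverges from the previous frame's by more than a threshold. The bin count, distance metric, and threshold below are illustrative choices, not the paper's parameters, and the sum-conditional-variance stage is omitted.

```python
import numpy as np

def keyframes_by_histogram_diff(frames, bins=32, threshold=0.25):
    """Return indices of keyframes: frames whose normalized intensity
    histogram differs from the previous frame's by more than `threshold`
    in L1 distance. `frames` is a sequence of 2-D grayscale arrays with
    values in [0, 255]."""
    keep = [0]                       # always keep the first frame
    prev = None
    for i, frame in enumerate(frames):
        h, _ = np.histogram(frame, bins=bins, range=(0, 255))
        h = h / h.sum()              # normalize: L1 distance now lies in [0, 2]
        if prev is not None and np.abs(h - prev).sum() > threshold:
            keep.append(i)
        prev = h
    return keep
```

Because the histogram discards pixel positions, gradual global changes produce small distances while scene or motion changes produce large ones, which is what makes this criterion a cheap first pass for summarization.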
Procedia PDF Downloads 349
18992 A Multistep Broyden’s-Type Method for Solving Systems of Nonlinear Equations
Authors: M. Y. Waziri, M. A. Aliyu
Abstract:
The paper proposes an approach to improve the performance of Broyden’s method for solving systems of nonlinear equations. In this work, we consider the information from two preceding iterates, rather than a single preceding iterate, to update the Broyden matrix, which produces a better approximation of the Jacobian matrix at each iteration. The numerical results verify that the proposed method clearly enhances the numerical performance of Broyden’s method.
Keywords: multi-step Broyden, nonlinear systems of equations, computational efficiency, iterate
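For reference, the classical single-step scheme the paper builds on can be sketched as below: the Jacobian approximation B is corrected by a rank-one update built from one preceding iterate. The paper's two-iterate (multistep) update is not reproduced here.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    """Classical ("good") Broyden method for F(x) = 0. The Jacobian
    approximation B is refreshed each iteration with the rank-one update
    B += (y - B s) s^T / (s^T s), using only the last step s and
    residual change y."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                      # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)          # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)  # rank-one Broyden update
        x, Fx = x_new, F_new
    return x
```

A multistep variant in the spirit of the paper would replace the single-pair update (s, y) with a correction built from the last two step/residual pairs.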
Procedia PDF Downloads 638
18991 MP-SMC-I Method for Slip Suppression of Electric Vehicles under Braking
Authors: Tohru Kawabe
Abstract:
In this paper, a new SMC (Sliding Mode Control) method with MP (Model Predictive Control) integral action for the slip suppression of EVs (Electric Vehicles) under braking is proposed. The proposed method introduces an integral term alongside the standard SMC gain, where the integral gain is optimized for each control period by an MPC algorithm. The aim of this method is to improve the safety and stability of EVs under braking by controlling the wheel slip ratio. Numerical simulation results are also included to demonstrate the effectiveness of the method.
Keywords: sliding mode control, model predictive control, integral action, electric vehicle, slip suppression
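The structure of sliding mode control with an integral term can be illustrated on a toy plant. The sketch below tracks a target slip ratio with a first-order integrator plant and a fixed integral gain; the vehicle braking dynamics and the MPC optimization of the integral gain described in the abstract are not reproduced, and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_smc_integral(r=0.2, eta=1.0, ki=0.5, dt=0.01, t_end=10.0):
    """SMC with integral action on a toy integrator plant x' = u.
    Sliding surface: s = e + ki * integral(e); switching control law:
    u = -eta * sign(s). In the paper, ki would be re-optimized each
    control period by MPC rather than held constant."""
    x, e_int = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = x - r                      # tracking error w.r.t. target slip ratio
        e_int += e * dt
        s = e + ki * e_int             # sliding surface with integral action
        u = -eta * np.sign(s)          # switching (sliding mode) control
        x += u * dt                    # Euler step of the plant
    return x
```

Once the state reaches the surface s = 0, the condition e = -ki·∫e forces the error to decay exponentially, which is the stabilizing role the integral term plays.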
Procedia PDF Downloads 561