Search results for: statistical methods
4163 New Newton's Method with Third-order Convergence for Solving Nonlinear Equations
Authors: Osama Yusuf Ababneh
Abstract:
In recent years, variants of Newton's method with cubic convergence have become popular iterative methods for approximating the roots of nonlinear equations. These methods enjoy cubic convergence at simple roots and do not require the evaluation of second-order derivatives. In this paper, we present a new Newton's method, based on the contraharmonic mean, that is cubically convergent. Numerical examples show that the new method can compete with the classical Newton's method.
Keywords: Third-order convergence, non-linear equations, root finding, iterative method.
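A minimal sketch of how such a contraharmonic-mean variant is typically constructed, with the classical Newton step as predictor; the test equation and function names are illustrative, not taken from the paper:

```python
def newton_contraharmonic(f, fprime, x0, tol=1e-12, max_iter=50):
    """Third-order Newton variant: the derivative in the classical step
    is replaced by the contraharmonic mean of f'(x_n) and f'(y_n),
    where y_n is the ordinary Newton iterate."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        d1 = fprime(x)
        y = x - fx / d1                      # classical Newton predictor
        d2 = fprime(y)
        # contraharmonic mean C(a, b) = (a^2 + b^2) / (a + b)
        mean = (d1 * d1 + d2 * d2) / (d1 + d2)
        x_new = x - fx / mean
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of f(x) = x^3 - 2 near x0 = 1.5
root = newton_contraharmonic(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5)
print(root, 2 ** (1 / 3))
```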
4162 Selecting an Advanced Creep Model or a Sophisticated Time-Integration? A New Approach by Means of Sensitivity Analysis
Authors: Holger Keitel
Abstract:
The prediction of long-term deformations of concrete and reinforced concrete structures has been a field of extensive research, and several different creep models have been developed so far. Most of these models were developed for constant concrete stresses; in the case of varying stresses, a superposition principle or a time-integration method is therefore necessary. Nowadays, when modeling concrete creep, the engineering focus is rather on the application of sophisticated time-integration methods than on choosing the more appropriate creep model. For this reason, this paper presents a method to quantify the uncertainties of creep prediction originating from the selection of creep models or from the time-integration methods. By adapting variance-based global sensitivity analysis, a methodology is developed to quantify the influence of creep model selection or choice of time-integration method. Applying the developed method, general recommendations on how to model creep behavior under varying stresses are given.
Keywords: Concrete creep models, time-integration methods, sensitivity analysis, prediction uncertainty.
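The variance-based global sensitivity analysis that the methodology adapts rests on first-order indices of the form S_i = Var(E[Y | X_i]) / Var(Y). A brute-force Monte Carlo sketch with a toy stand-in for the creep model (the real models and input distributions are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2, x3):
    # toy stand-in for a creep prediction; real models are far more complex
    return x1 + 0.5 * x2 ** 2 + 0.1 * x1 * x3

def first_order_index(i, n_outer=2000, n_inner=500):
    """Double-loop Monte Carlo estimate of S_i = Var(E[Y|X_i]) / Var(Y)
    for independent U(0,1) inputs."""
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        x = rng.random(3)                 # fix X_i at a sampled value
        inner = rng.random((n_inner, 3))  # resample the other inputs
        inner[:, i] = x[i]
        cond_means[k] = model(*inner.T).mean()
    total = model(*rng.random((100000, 3)).T)
    return cond_means.var() / total.var()

for i in range(3):
    print(f"S_{i+1} ~= {first_order_index(i):.3f}")
```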
4161 Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time many challenges, for the identification of these interactions. Many methods have already been proposed in this regard. For in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than the existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the negative pair selection method of the original work.
Keywords: Interaction graph, Negative training data, Protein-Protein interaction, Support vector machine.
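A hedged sketch of the graph-based idea: build the interaction graph, measure shortest paths by BFS, and take pairs whose shortest path is long (or infinite) as candidate negatives. The threshold and protein IDs are invented for illustration:

```python
from collections import deque
from itertools import combinations

def shortest_path_len(graph, s, t):
    """Unweighted shortest-path length via BFS; None if unreachable."""
    if s == t:
        return 0
    seen, frontier = {s}, deque([(s, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nb in graph.get(node, ()):
            if nb == t:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return None

def negative_pairs(graph, min_dist=4):
    """Pairs whose shortest path is long (or infinite) are taken as
    unlikely to interact, hence candidate negative examples."""
    out = []
    for a, b in combinations(sorted(graph), 2):
        d = shortest_path_len(graph, a, b)
        if d is None or d >= min_dist:
            out.append((a, b))
    return out

# toy interaction graph: adjacency sets of protein IDs
ppi = {"P1": {"P2"}, "P2": {"P1", "P3"}, "P3": {"P2"}, "P4": {"P5"}, "P5": {"P4"}}
print(negative_pairs(ppi, min_dist=3))
```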
4160 Using Structural Equation Modeling in Causal Relationship Design for Balanced-Scorecards' Strategic Map
Authors: A. Saghaei, R. Ghasemi
Abstract:
Through the 1980s, management accounting researchers described the increasing irrelevance of traditional control and performance measurement systems. The Balanced Scorecard (BSC) is a critical business tool for many organizations. It is a performance measurement system that translates mission and strategy into objectives. The strategy map approach is a development of the BSC in which certain necessary causal relations must be established. To recognize these relations, experts usually rely on experience; it is also possible to use regression for the same purpose. Structural Equation Modeling (SEM), one of the most powerful methods of multivariate data analysis, obtains more appropriate results than traditional methods such as regression. In the present paper, we propose SEM for the first time to identify the relations between objectives in the strategy map, together with a test to measure the importance of the relations. In SEM, factor analysis and hypothesis testing are done in the same analysis. SEM is known to be better than other techniques at supporting analysis and reporting. Our approach provides a framework that permits experts to design the strategy map by applying a comprehensive and scientific method together with their experience. Therefore this scheme is a more reliable method in comparison with the previously established methods.
Keywords: BSC, SEM, Strategy map.
4159 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y that are not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
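The mechanism that lets a flow carry a density from the latent z to the observed y is the change-of-variables formula, log p(y) = log p_z(T^{-1}(y)) - log |det J_T|. A minimal, non-learned illustration with a fixed affine map standing in for the trained flow (W, b and the dimension are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)

# An invertible affine map y = W z + b is the simplest "flow".
# Real NF layers stack many invertible transforms with learned
# parameters; here W and b are fixed just to show the density formula.
d = 2
W = np.array([[1.5, 0.3], [0.0, 0.8]])
b = np.array([0.5, -1.0])

def log_prob_y(y):
    """log p(y) = log N(z; 0, I) - log|det W|, with z = W^{-1}(y - b)."""
    z = np.linalg.solve(W, y - b)
    log_base = -0.5 * (z @ z + d * np.log(2 * np.pi))
    return log_base - np.log(abs(np.linalg.det(W)))

# sampling direction: push base samples through the transform
z = rng.standard_normal(d)
y = W @ z + b
print(y, log_prob_y(y))
```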
4158 Clinical Parameters Response to Low-Level Laser versus Monochromatic Near-Infrared Photo Energy in Diabetic Patients with Peripheral Neuropathy
Authors: Abeer A. Abdelhamed
Abstract:
Background: Diabetic sensorimotor polyneuropathy (DSP) is one of the most common microvascular complications of type 2 diabetes. Loss of sensation is thought to contribute to a lack of static and dynamic stability and an increased risk of falling. Purpose: The purpose of this study was to compare the effects of low-level laser (LLL) and monochromatic near-infrared photo energy (MIRE) on pain, cutaneous sensation, static stability, and an index of lower limb blood flow in diabetic patients with peripheral neuropathy. Methods: Forty diabetic patients with peripheral neuropathy were recruited for participation in this study. They were divided into two groups: the MIRE group, with 20 patients, and the LLL group, with 20 patients. All patients who participated in the study underwent various physical assessment procedures, including pain, cutaneous sensation, Doppler flow meter, and static stability assessments. The baseline measurements were followed by treatment sessions conducted twice a week for six successive weeks. Results: The statistical analysis of the data revealed significant improvement of pain in both groups, with significant improvement in cutaneous sensation and static balance in the MIRE group compared to the LLL group; on the other hand, the results showed no significant differences in lower limb blood flow between the groups. Conclusion: LLL and MIRE can improve painful symptoms in patients with diabetic neuropathy. On the other hand, MIRE is also useful in improving cutaneous sensation and static stability in patients with diabetic neuropathy.
Keywords: Diabetic neuropathy, Doppler flow meter, Low-level laser, Monochromatic near-infrared photo energy.
4157 Select-Low and Select-High Methods for the Wheeled Robot Dynamic States Control
Authors: Bogusław Schreyer
Abstract:
The paper examines two methods of wheeled robot braking torque control, applied when the adhesion coefficient under the left-side wheels differs from that under the right-side wheels. In the select-low (SL) method, the braking torque on both wheels is controlled by the signals originating from the wheels on the side of the lower adhesion. In the select-high (SH) method, the torque is controlled by the signals originating from the wheels on the side of the higher adhesion. The SL method ensures stable and safe robot behavior during the braking process; however, its efficiency is relatively low. The SH method is more efficient in terms of time and braking distance but in some situations may cause wheel blocking. It is important to monitor the velocity of all wheels and then decide on the braking torque distribution accordingly. In the SH method, the braking torque slope may require a significant decrease in order to avoid wheel blocking.
Keywords: Select-high method, select-low method, torque distribution, wheeled robot.
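A hedged sketch of the selection logic as described, with per-side control signals standing in for the slip-based measurements; the signal semantics and numbers are illustrative:

```python
def braking_torque_commands(wheel_signals, mode="SL"):
    """wheel_signals: per-side control signals (e.g. slip-based torque
    demands). Select-low (SL) drives both sides from the low-adhesion
    side's signal; select-high (SH) from the high-adhesion side's."""
    left, right = wheel_signals["left"], wheel_signals["right"]
    chosen = min(left, right) if mode == "SL" else max(left, right)
    return {"left": chosen, "right": chosen}

# a lower signal on the left models lower adhesion under the left wheels
signals = {"left": 120.0, "right": 310.0}      # N*m, hypothetical values
print(braking_torque_commands(signals, "SL"))  # stable, less efficient
print(braking_torque_commands(signals, "SH"))  # efficient, may block wheels
```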
4156 Various Speech Processing Techniques For Speech Compression And Recognition
Authors: Jalal Karam
Abstract:
Five decades of extensive research in the field of speech processing for compression and recognition have resulted in intense competition among the various methods and paradigms introduced. In this paper we cover the different representations of speech in the time-frequency and time-scale domains for the purposes of compression and recognition, and examine these representations across a variety of related work. In particular, we emphasize methods related to Fourier analysis paradigms and wavelet-based ones, along with the advantages and disadvantages of both approaches.
Keywords: Time-Scale, Wavelets, Time-Frequency, Compression, Recognition.
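To make the two paradigms concrete, the sketch below computes a time-frequency representation (a short-time Fourier spectrogram) and a time-scale one (a discrete wavelet decomposition) of a toy signal. It assumes SciPy and PyWavelets are available and is not tied to any particular method surveyed in the paper:

```python
import numpy as np
from scipy.signal import spectrogram
import pywt

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# toy "speech-like" signal: a chirp plus a tone burst
x = np.sin(2 * np.pi * (200 + 300 * t) * t)
x[:fs // 4] += 0.5 * np.sin(2 * np.pi * 1200 * t[:fs // 4])

# time-frequency view (Fourier analysis paradigm)
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256)

# time-scale view (wavelet paradigm)
coeffs = pywt.wavedec(x, "db4", level=5)

print(Sxx.shape)                 # frequency bins x time frames
print([len(c) for c in coeffs])  # one approximation + 5 detail bands
```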
4155 A Study of Various Numerical Turbulence Modeling Methods in Boundary Layer Excitation of a Square Ribbed Channel
Authors: Hojjat Saberinejad, Adel Hashiehbaf, Ehsan Afrasiabian
Abstract:
Among the various cooling processes in industrial applications (electronic devices, heat exchangers, gas turbines, etc.), gas turbine blade cooling is the most challenging. One of the most common practices is the use of ribbed walls, which excite the boundary layer and thereby improve cooling. Vortex formation between the rib and the channel wall results in a complicated flow regime. At the same time, selecting the most suitable method for reproducing experimental results is an interesting question in itself. In this paper, four common turbulence modeling methods (the standard k-ε model, the realizable k-ε model with enhanced wall boundary layer treatment, the k-ω model, and the Reynolds stress model (RSM)) are applied to a square ribbed channel to investigate the separation and thermal behavior of the flow in the channel. Finally, the results from the different methods are compared with experimental data available in the literature to assess the accuracy of each numerical method.
Keywords: boundary layer, turbulence, numerical method, rib cooling
4154 A Retrospective Cross-Sectional Study on the Prevalence and Factors Associated with Virological Non-Suppression among HIV-Positive Adult Patients on Antiretroviral Therapy in Woliso Town, Oromia, Ethiopia
Authors: Teka Haile, Behailu Hawulte, Solomon Alemayehu
Abstract:
Background: HIV virological failure still remains a problem in HIV/AIDS treatment and care. This study aimed to describe the prevalence and identify the factors associated with viral non-suppression among HIV-positive adult patients on antiretroviral therapy in Woliso Town, Oromia, Ethiopia. Methods: A retrospective cross-sectional study was conducted among 424 HIV-positive patients attending antiretroviral therapy (ART) in Woliso Town during the period from August 25, 2020 to August 30, 2020. Data collected from patient medical records were entered into Epi Info version 2.3.2.1 and exported to SPSS version 21.0 for analysis. Logistic regression analysis was done to identify factors associated with viral load non-suppression, and the statistical significance of odds ratios was declared using a 95% confidence interval and p-value < 0.05. Results: A total of 424 patients were included in this study. The mean age (± SD) of the study participants was 39.88 (± 9.995) years. The prevalence of HIV viral load non-suppression was 55 (13.0%), with 95% CI (9.9-16.5). Second-line ART treatment regimen (Adjusted Odds Ratio (AOR) = 8.98, 95% Confidence Interval (CI): 2.64, 30.58) and routine viral load testing (AOR = 0.01, 95% CI: 0.001, 0.02) were significantly associated with virological non-suppression. Conclusion: Virological non-suppression was high, which hinders the achievement of the third global 95 target. The second-line regimen and routine viral load testing were significantly associated with virological non-suppression. This suggests the need to assess the effectiveness of antiretroviral drugs for epidemic control. It also clearly shows the need to decentralize third-line ART treatment for those patients in need.
Keywords: Virological non-suppression, HIV-positive, ART, Woliso Town, Ethiopia.
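A sketch of the kind of logistic regression analysis described, producing adjusted odds ratios with 95% confidence intervals; the data here are simulated stand-ins, not the study's records, and the variable names are ours:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 424
df = pd.DataFrame({
    "second_line": rng.integers(0, 2, n),  # 1 = second-line regimen
    "routine_vl":  rng.integers(0, 2, n),  # 1 = routine VL testing
    "age":         rng.normal(39.9, 10.0, n),
})
# simulate the outcome so the example has a signal to recover
logit = -2.0 + 1.8 * df["second_line"] - 1.5 * df["routine_vl"]
df["non_suppressed"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["second_line", "routine_vl", "age"]])
fit = sm.Logit(df["non_suppressed"], X).fit(disp=0)

# adjusted odds ratios with 95% confidence intervals
aor = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```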
4153 A Comparative Study of Malware Detection Techniques Using Machine Learning Methods
Authors: Cristina Vatamanu, Doina Cosovan, Dragoş Gavriluţ, Henri Luchian
Abstract:
In the past few years, the amount of malicious software has increased exponentially and, therefore, machine learning algorithms have become instrumental in identifying clean and malware files through (semi-)automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate. Another challenge is to minimize the time needed for the machine learning algorithm to do so. This paper presents a comparative study of different machine learning techniques such as linear classifiers, ensembles, decision trees and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability.
Keywords: Detection Rate, False Positives, Perceptron, One Side Class, Ensembles, Decision Tree, Hybrid methods, Feature Selection.
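A hedged sketch of such a comparison on synthetic data, reporting exactly the two quantities the paper optimizes, detection rate and false positive rate; the classifiers are stock scikit-learn models, not the authors' one-side-class or hybrid variants:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# imbalanced toy stand-in for clean (0) vs malware (1) feature vectors
X, y = make_classification(n_samples=20000, n_features=40,
                           weights=[0.9, 0.1], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      stratify=y, random_state=0)

for name, clf in [("perceptron", Perceptron()),
                  ("decision tree", DecisionTreeClassifier()),
                  ("ensemble (RF)", RandomForestClassifier(n_estimators=100))]:
    yp = clf.fit(Xtr, ytr).predict(Xte)
    tn, fp, fn, tp = confusion_matrix(yte, yp).ravel()
    # detection rate = TP/(TP+FN); false positive rate = FP/(FP+TN)
    print(f"{name:14s} DR={tp/(tp+fn):.3f}  FPR={fp/(fp+tn):.3f}")
```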
4152 Thiopental-Fentanyl versus Midazolam-Fentanyl for Emergency Department Procedural Sedation and Analgesia in Patients with Shoulder Dislocation and Distal Radial Fracture-Dislocation: A Randomized Double-Blind Controlled Trial
Authors: D. Farsi, Gh. Dokhtvasi, S. Abbasi, S. Shafiee Ardestani, E. Payani
Abstract:
Background and aim: It has not been well studied whether fentanyl-thiopental (FT) is effective and safe for procedural sedation and analgesia (PSA) in orthopedic procedures in the Emergency Department (ED). The aim of this trial was to evaluate the effectiveness of intravenous FT versus fentanyl-midazolam (FM) in patients suffering from shoulder dislocation or distal radial fracture-dislocation. Methods: In this randomized double-blinded study, seventy-six eligible patients entered the study and randomly received intravenous FT or FM. The success rate, onset of action, recovery time, pain score, physicians' satisfaction and adverse events were assessed and recorded by the treating emergency physicians. The statistical analysis was by intention to treat. Results: The success rate after administering the loading dose was significantly higher in the FT group than in the FM group (71.7% vs. 48.9%, p=0.04); the ultimate failure rate after 3 doses was higher in the FT group than in the FM group (3 to 1), but this did not reach a significant level (p=0.61). Despite a near-equal onset of action in the two study groups (p=0.464), the recovery period in patients receiving FT was markedly shorter than in the FM group (p<0.001). The occurrence of adverse effects was low in both groups (p=0.31). Conclusion: PSA using FT is effective and appears to be safe for orthopedic procedures in the ED. Therefore, given the prompt onset of action and the short recovery period of thiopental, this combination deserves further consideration for performing PSA in orthopedic procedures in the ED.
Keywords: Procedural Sedation and Analgesia, Thiopental, Fentanyl, Midazolam, Orthopedic Procedure, Emergency Department, Pain.
4151 Dynamic Safety-Stock Calculation
Authors: Julian Becker, Wiebke Hartmann, Sebastian Bertsch, Johannes Nywlt, Matthias Schmidt
Abstract:
In order to ensure a high service level, industrial enterprises have to maintain safety stock, which at the same time directly influences economic efficiency. This paper analyses established mathematical methods for calculating safety stock. The performance, measured in stock and service level, is appraised and the limits of several methods are depicted. Afterwards, a new dynamic approach is presented that yields a comprehensive method for calculating safety stock, one that also takes knowledge of future volatility into account.
Keywords: Inventory dimensioning, material requirement planning, safety-stock calculation.
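For reference, the classical static formula that such analyses start from is SS = z * sigma_d * sqrt(L). A minimal sketch (this is the kind of established textbook method the paper appraises, not its new dynamic approach):

```python
from scipy.stats import norm

def static_safety_stock(service_level, sigma_demand, lead_time):
    """Classical textbook formula SS = z * sigma_d * sqrt(L), assuming
    normally distributed, uncorrelated per-period demand."""
    z = norm.ppf(service_level)   # safety factor for the service level
    return z * sigma_demand * lead_time ** 0.5

# 98% service level, daily demand std. dev. 40 units, 9-day lead time
print(round(static_safety_stock(0.98, 40.0, 9), 1))
```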
4150 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is considerable. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities bring economic benefits or are used for the delivery of free-of-charge services, and are consequently classified as cash-generating and non-cash-generating assets. IPSAS 21 - Impairment of non-cash-generating assets, and IPSAS 26 - Impairment of cash-generating assets, have been designed with this specificity in mind. When measuring impairment of assets, it is important to select the relevant methods. For measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the Depreciated Replacement Cost Approach, the Restoration Cost Approach, and the Service Units Approach. Value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the money flows to be received in the future. The article classifies the assets in the public sector as non-cash-generating and cash-generating assets and also deals with the factors which should be considered when evaluating impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how those methods are selected. The traditional and the expected cash flow approaches for calculation of the discounted value are reviewed. The article also discusses the issues of recognition of impairment loss and its reflection in financial reporting. The article concludes that regardless of the functional purpose of the impaired asset, and whichever method is used for measuring it, presentation of realistic information regarding the value of the assets should be ensured in the financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used. The research was carried out with a systemic approach, drawing on international accounting standards and the theoretical research and publications of Georgian and foreign scientists.
Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.
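A small sketch contrasting the two discounting approaches mentioned: the traditional approach discounts a single best-estimate series at a risk-adjusted rate, while the expected cash flow approach probability-weights scenarios first. All figures are hypothetical:

```python
import numpy as np

def present_value(cash_flows, rate):
    """Traditional approach: one 'best estimate' cash-flow series
    discounted at a single risk-adjusted rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def expected_cash_flow_pv(scenarios, probs, rate):
    """Expected cash flow approach: probability-weight the scenarios
    first, then discount the expected series (typically at a rate
    closer to risk-free, since risk sits in the weights)."""
    expected = np.average(np.array(scenarios), axis=0, weights=probs)
    return present_value(expected, rate)

flows = [1000.0, 1000.0, 1000.0]   # hypothetical annual flows
print(round(present_value(flows, 0.08), 2))
scenarios = [[800, 800, 800], [1000, 1000, 1000], [1300, 1300, 1300]]
print(round(expected_cash_flow_pv(scenarios, [0.2, 0.5, 0.3], 0.04), 2))
```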
4149 Methods of Geodesic Distance in Two-Dimensional Face Recognition
Authors: Rachid Ahdid, Said Safi, Bouzid Manaut
Abstract:
In this paper, we present a comparative study of three methods for 2D face recognition: Iso-Geodesic Curves (IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH). These approaches are based on computing geodesic distances between points of the facial surface and between facial curves. In this study we represent the gray-level image as a 2D surface in a 3D space, with the third coordinate proportional to the pixel intensity values. In the classification step, we use Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The images used in our experiments are from two well-known face image databases, ORL and YaleB. The ORL database was used to evaluate the performance of the methods under conditions where the pose and sample size are varied, and the YaleB database was used to examine the performance of the systems when the facial expressions and lighting are varied.
Keywords: 2D face recognition, Geodesic distance, Iso-Geodesic Curves, Geodesic-Intensity Histogram, facial surface, Neural Networks, K-Nearest Neighbor, Support Vector Machines.
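A hedged sketch of the underlying computation: treat the gray-level image as the surface (x, y, alpha * intensity) and approximate geodesic distance with Dijkstra on an 8-connected pixel graph. This illustrates the geodesic-distance ingredient only, not the full IGC/GD/GIH descriptors:

```python
import heapq
import numpy as np

def geodesic_distance(img, src, dst, alpha=1.0):
    """Geodesic distance between two pixels when the image is viewed
    as the surface (x, y, alpha * intensity): Dijkstra over an
    8-connected grid with Euclidean edge lengths in that 3D space."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < h and 0 <= cc < w:
                    dz = alpha * (float(img[rr, cc]) - float(img[r, c]))
                    nd = d + (dr * dr + dc * dc + dz * dz) ** 0.5
                    if nd < dist[rr, cc]:
                        dist[rr, cc] = nd
                        heapq.heappush(heap, (nd, (rr, cc)))
    return dist[dst]

face = np.random.default_rng(3).random((32, 32))  # stand-in gray image
print(geodesic_distance(face, (0, 0), (31, 31)))
```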
4148 Design for Manufacturability and Concurrent Engineering for Product Development
Authors: Alemu Moges Belay
Abstract:
In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity and larger organizations. Companies were therefore forced to look for new product development methods. This paper focuses on two of these methods, Design for Manufacturability (DFM) and Concurrent Engineering (CE), and analyzes how companies can benefit from them by minimizing the product life cycle and cost while meeting delivery schedules. The paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The methodology followed in this research is the case study: two companies were taken and analyzed with respect to their product development processes. Historical data were gathered and interviews conducted at these companies; in addition, a survey of the literature and previous research on similar topics was carried out. The paper also presents an implementation cost-benefit analysis and estimates the implementation time. The research found that the two companies did not achieve the delivery time to the customer: for some of their most frequent products, 50% to 80% are not delivered on time. The companies follow the traditional, sequential design-then-production method of product development, which strongly affects time to market. The case study found that by implementing the new methods and by forming multidisciplinary teams for design and quality inspection, a company can reduce its workflow steps from 40 to 30.
Keywords: Design for manufacturability, Concurrent Engineering, Time-to-Market, Product development
4147 Anthropometric and Physical Fitness Ability Profile of Elite and Non-Elite Boxers of Manipur
Authors:
Abstract:
Background: Boxing is one of the oldest combat sports, in which different anthropological and fitness parameters determine performance. It is characterized by short-duration, high-intensity bursts of activity. The purpose of this research was to determine the anthropometric and physical fitness profiles of male elite and non-elite boxers of Manipur and to compare the two groups. Materials and Methods: Nineteen subjects were selected as elite boxers and twenty-four as non-elite boxers of Manipur. A cross-sectional study of anthropometric measurements and physical fitness tests was conducted on the subjects. Statistical analysis was done using descriptive statistics, t-tests and logistic regression with the help of SPSS version 15 software. Results: Elite boxers have significantly smaller neck girth and calf girth than non-elite boxers, as well as significantly lower subscapular skinfold (SSF) and suprailiac skinfold (SISF). Higher stature, larger bi-trochanteric breadth (BTB) and lower percent fat are associated with higher performance in boxing. Among the physical fitness components, sit-ups (SU), standing broad jump (SBJ), plate tapping (PT), sit and reach (SAR) and the Harvard step test (HST) are predicted as the factors contributing most to performance level. Elite boxers were found to have more functional strength (sit-ups), higher explosive strength (SBJ), more agility (PT), and greater cardiovascular endurance and flexibility (SAR) than non-elite boxers. Conclusion: Lower fat, higher lean body mass, larger bi-trochanteric breadth, high explosive strength, agility and flexibility are significantly associated with higher performance and the chance of becoming an elite boxer.
Keywords: Anthropometry, elite and non-elite boxers, Manipur, physical fitness.
4146 An Approach to Capture, Evaluate and Handle Complexity of Engineering Change Occurrences in New Product Development
Authors: Mohammad Rostami Mehr, Seyed Arya Mir Rashed, Arndt Lueder, Magdalena Mißler-Behr
Abstract:
This paper presents the conception that complex problems do not necessarily need similarly complex solutions, and that a simple solution based on established methods can provide a sufficient way of dealing with complexity. To verify this conception, the paper focuses on the field of change management as part of the new product development process in the automotive sector. In complexity management, dealing with increasing complexity is essential, yet only non-flexible, rigid processes that are not designed to handle complexity are available. The basic methodology of this paper can be divided into four main sections: 1) analyzing the complexity of change management, 2) reviewing the literature to identify potential solutions and methods, 3) capturing and implementing the expertise of experts from the change management field of an automobile manufacturing company, and 4) systematically comparing the methods identified in the literature and connecting them with the defined requirements of the complexity of change management in order to develop a solution. As a practical outcome, this paper provides a method to capture the complexity of engineering changes (EC) and include it within the EC evaluation process, following case-related process guidance to cope with the complexity. Furthermore, this approach supports the conception that dealing with complexity is possible while utilizing rather simple and established methods by combining them into a powerful tool.
Keywords: complexity management, new product development, engineering change management, flexibility
4145 Design of DC Voltage Control for D-STATCOM
Authors: Kittaya Somsai, Thanatchai Kulworawanichpong, Nitus Voraphonpiput
Abstract:
This paper presents the design of the DC voltage control of a D-STATCOM used for load voltage regulation. Although the DC voltage can be controlled by the active current of the D-STATCOM, the reactive current still affects the DC voltage. To eliminate this effect, a control strategy that cancels the effect of the reactive current is proposed, and the results of the control with and without this cancellation are compared. To obtain the proportional and integral gains of the PI controllers, the symmetrical optimum and genetic algorithm methods are applied. The stability margins of these methods are obtained and discussed in detail, and the performance of the DC voltage control based on the symmetrical optimum and genetic algorithm methods is compared. The effectiveness of the designed controllers was verified through computer simulation performed using the Power System Tool Block (PSB) in SIMULINK/MATLAB. The simulation results demonstrated that the proposed DC voltage control is effective in regulating the DC voltage when the D-STATCOM is used for load voltage regulation.
Keywords: D-STATCOM, DC voltage control, Symmetrical optimum, Genetic algorithms
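For the symmetrical optimum part, the textbook tuning rule for a plant of the form K/(s(1+sT_sigma)) gives Ti = a^2 * T_sigma and Kp = 1/(a * K * T_sigma), commonly with a = 2. A sketch under those assumptions (plant parameters invented; the paper's D-STATCOM loop is not reproduced):

```python
def symmetrical_optimum_pi(K, T_sigma, a=2.0):
    """PI gains for a plant K / (s * (1 + s*T_sigma)) by the symmetrical
    optimum (Kessler). With the usual a = 2: Ti = 4*T_sigma and
    Kp = 1 / (2*K*T_sigma)."""
    Ti = a * a * T_sigma
    Kp = 1.0 / (a * K * T_sigma)
    return Kp, Kp / Ti   # proportional gain, integral gain Ki = Kp/Ti

Kp, Ki = symmetrical_optimum_pi(K=50.0, T_sigma=0.002)
print(f"Kp = {Kp:.3f}, Ki = {Ki:.1f}")
```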
4144 Enhancing Rural Agricultural Value Chains through Electric Mobility Services in Ethiopia
Authors: Clemens Pizzinini, Philipp Rosner, David Ziegler, Markus Lienkamp
Abstract:
Transportation is an integral part of most supply and value chains in modern economies. Smallholder farmers in rural Ethiopia face severe challenges along their supply and value chains; in particular, suitable, affordable, and available transport services are in high demand. To develop context-specific technical solutions, a problem-to-solution methodology based on interaction with the technology is developed. With this approach, we fill the gap between proven transportation assessment frameworks and general user-centered techniques. Central to our approach is an electric test vehicle that is implemented in rural supply and value chains for research, development, and testing. Based on our objective and the derived methodological requirements, a set of existing methods is selected, and local partners are integrated in an organizational framework that executes major parts of this research endeavour in Arsi Zone, Oromia Region, Ethiopia.
Keywords: Agricultural value chain, participatory methods, agile methods, sub-Saharan Africa, Ethiopia, electric vehicle, transport service.
4143 Comparison between XGBoost, LightGBM and CatBoost Using a Home Credit Dataset
Authors: Essam Al Daoud
Abstract:
Gradient boosting methods have proven to be a very important strategy: many successful machine learning solutions have been developed using XGBoost and its derivatives. The aim of this study is to investigate and compare the efficiency of three gradient boosting methods. The Home Credit dataset, which contains 219 features and 356,251 records, is used in this work. New features are generated, and several techniques are used to rank and select the best features. The implementation indicates that LightGBM is faster and more accurate than CatBoost and XGBoost across varying numbers of features and records.
Keywords: Gradient boosting, XGBoost, LightGBM, CatBoost, home credit.
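A hedged sketch of such a three-way comparison using the libraries' scikit-learn-compatible classifiers on synthetic data (the Home Credit data and the paper's engineered features are not reproduced; hyperparameters are illustrative):

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

# synthetic stand-in; the paper uses the Home Credit data (219 features)
X, y = make_classification(n_samples=20000, n_features=100,
                           n_informative=30, random_state=0)

models = {
    "XGBoost":  XGBClassifier(n_estimators=200, max_depth=6),
    "LightGBM": LGBMClassifier(n_estimators=200, max_depth=6),
    "CatBoost": CatBoostClassifier(n_estimators=200, depth=6, verbose=0),
}
for name, model in models.items():
    t0 = time.time()
    auc = cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()
    print(f"{name:8s} AUC={auc:.4f}  time={time.time() - t0:.1f}s")
```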
4142 A Distance Function for Data with Missing Values and Its Application
Authors: Loai AbdAllah, Ilan Shimshoni
Abstract:
Missing values are common in real-world data. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, we define in this paper a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. Under this distance, the distance between two points without missing attribute values is simply the Mahalanobis distance. When, on the other hand, one of the coordinates has a missing value, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes the distance between data points. Because such an algorithm's performance depends strongly on the chosen distance measure, we opted for the k-nearest-neighbor classifier to evaluate the distance's ability to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository, drawn from different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
Keywords: Missing values, Distance metric, Bhattacharyya distance.
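A simplified sketch of the idea: present coordinates contribute an ordinary squared difference, while a missing coordinate contributes the expected squared difference under that coordinate's empirical distribution. The paper's actual construction uses Mahalanobis and Bhattacharyya machinery; this per-coordinate Euclidean version only illustrates the principle:

```python
import numpy as np

def missing_aware_sq_dist(a, b, column_values):
    """Squared distance that handles NaNs: a coordinate present in both
    points contributes (a_i - b_i)^2; a missing coordinate contributes
    the expected squared difference under that column's distribution."""
    total = 0.0
    for i, (x, y) in enumerate(zip(a, b)):
        vals = column_values[i]
        if np.isnan(x) and np.isnan(y):
            total += 2 * vals.var()   # E[(U - V)^2] for independent draws
        elif np.isnan(x) or np.isnan(y):
            known = y if np.isnan(x) else x
            total += np.mean((vals - known) ** 2)
        else:
            total += (x - y) ** 2
    return total

data = np.array([[1.0, 2.0], [1.5, np.nan], [8.0, 9.0], [7.5, 8.0]])
cols = [data[~np.isnan(data[:, i]), i] for i in range(data.shape[1])]
query = np.array([1.2, 2.1])
# 1-NN lookup by the missing-aware distance
dists = [missing_aware_sq_dist(query, row, cols) for row in data]
print(int(np.argmin(dists)))   # index of the nearest stored point
```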
4141 Modeling the Symptom-Disease Relationship by Using Rough Set Theory and Formal Concept Analysis
Authors: Mert Bal, Hayri Sever, Oya Kalıpsız
Abstract:
Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference under lack of information and uncertainty. In such systems, the uncertainty is modeled with various soft computing methods such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming and genetic algorithms, as well as hybrids formed from combinations of these methods. In this study, symptom-disease relationships are presented in a framework modeled with formal concept analysis and rough set theory, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the base of rough sets, can be applied to the attribute data sets in order to reduce attributes and decrease the complexity of computation.
Keywords: Formal Concept Analysis, Rough Set Theory, Granular Computing, Medical Decision Support System.
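How Bayes' theorem relates attributes (symptoms) to objects (diseases) once such a table is in place can be shown with a toy calculation; all probabilities below are invented for illustration:

```python
def posterior(priors, likelihoods, observed_symptom):
    """Bayes' theorem over a small disease/symptom table:
    P(d | s) = P(s | d) P(d) / sum over d' of P(s | d') P(d')."""
    evidence = sum(likelihoods[d][observed_symptom] * priors[d] for d in priors)
    return {d: likelihoods[d][observed_symptom] * priors[d] / evidence
            for d in priors}

priors = {"flu": 0.05, "cold": 0.20, "healthy": 0.75}
likelihoods = {             # P(symptom | disease), invented numbers
    "flu":     {"fever": 0.90, "cough": 0.80},
    "cold":    {"fever": 0.10, "cough": 0.70},
    "healthy": {"fever": 0.01, "cough": 0.05},
}
print(posterior(priors, likelihoods, "fever"))
```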
4140 Building a Service-Centric Business Model in SMEs in the Business-to-Business Context
Authors: Päivi J. Tossavainen, Leena Alakoski, Katri Ojasalo
Abstract:
Building a service-centric business model requires new knowledge and capabilities in companies. This paper highlights the challenges small and medium-sized enterprises (SMEs) face when developing their service-centric business models, and examines the premise for the required knowledge transfer and capability development. The objective of this paper is to increase knowledge about SMEs' transformation to service-centric business models. The paper reports an action research based case study and provides empirical evidence from three case companies; the empirical data was collected through multiple methods. The findings of the paper are: first, a model developed to analyze the current state in companies; second, the process of building service-centric business models; and third, the selection of suitable service development methods. The lack of a holistic understanding of service logic suggests that SMEs need practical and easy-to-use methods to improve their business.
Keywords: service-centric business model, service development, action research, case study
4139 Implementation of ADETRAN Language Using Message Passing Interface
Authors: Akiyoshi Wakatani
Abstract:
This paper describes the Message Passing Interface (MPI) implementation of the ADETRAN language and its evaluation on SX-ACE supercomputers. The ADETRAN language includes the pdo statement, which specifies data distribution and parallel computations, and the pass statement, which specifies the redistribution of arrays. Two methods for the implementation of the pass statement are discussed, and a performance evaluation using the Splitting-Up CG method is presented. The effectiveness of the parallelization is evaluated, and the advantage of one-dimensional distribution is empirically confirmed by the experimental results.
Keywords: Iterative methods, array redistribution, translator, distributed memory.
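A hedged sketch of what a pass-style redistribution amounts to, written with mpi4py rather than ADETRAN: each rank exchanges sub-blocks so that a row-block distribution becomes a column-block one (run under an MPI launcher, e.g. mpiexec -n 4):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 4 * size          # global n x n array, n divisible by the rank count
blk = n // size
# my row block of the global array (values encode the global row index)
rows = np.add.outer(np.arange(rank * blk, (rank + 1) * blk), np.zeros(n))

# split my row block into one column chunk per destination rank ...
send = [np.ascontiguousarray(rows[:, r * blk:(r + 1) * blk]) for r in range(size)]
# ... and exchange: afterwards each rank owns a full column block
recv = comm.alltoall(send)
cols = np.vstack(recv)            # shape (n, blk): a column block
print(rank, cols.shape)
```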
4138 Features of Party Construction in the Course of Political Modernization of Kazakhstan
Authors: Zhankuliyeva S. A.
Abstract:
This article considers the main features of party construction in the course of the political modernization of Kazakhstan. Along with party construction, the author analyzes how the transformation of the party system was carried out in Kazakhstan and explains the basic stages of party construction. Statistical data are cited.
Keywords: elections, multi-party system, party construction, political pluralism, political party, Republic of Kazakhstan (RK)
4137 Effect of Scalping on the Mechanical Behavior of Coarse Soils
Authors: Nadine Ali Hassan, Ngoc Son Nguyen, Didier Marot, Fateh Bendahmane
Abstract:
This paper presents a study of the effect of scalping methods on the mechanical properties of coarse soils, using numerical simulations based on the discrete element method (DEM) and experimental triaxial tests. Two reconstitution methods are used, designated the scalping method and the substitution method. Triaxial compression tests are first simulated with the DEM on a granular material with a gap-graded particle size distribution. We study the effect of these reconstitution methods on the stress-strain behavior of coarse soils with different fine contents and with different ways of controlling the densities of the scalped and substituted materials. Experimental triaxial tests are performed on original mixtures of sands and gravels with different fine contents and on their corresponding scalped and substituted samples. Numerical results are qualitatively compared to experimental ones, and agreements and discrepancies between these results are discussed.
Keywords: Coarse soils, scalping, substitution, discrete element method, triaxial test.
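A small sketch of the two reconstitution operations on a mass-based particle size distribution, following their common definitions (scalping discards the oversize fraction; substitution replaces it with an equal mass of the largest allowed size). The paper's exact procedures may differ, and the PSD is hypothetical:

```python
import numpy as np

# mass fractions per size class, finest to coarsest (hypothetical PSD)
sizes = np.array([0.1, 1.0, 5.0, 20.0, 60.0])    # mm
frac  = np.array([0.15, 0.20, 0.25, 0.25, 0.15])
d_max = 20.0                                     # test-cell size limit

def scalp(sizes, frac, d_max):
    """Scalping: remove the oversize classes and renormalize."""
    out = frac * (sizes <= d_max)
    return out / out.sum()

def substitute(sizes, frac, d_max):
    """Substitution: move the oversize mass into the largest class
    still allowed, preserving total mass and fine content."""
    out = frac.copy()
    over = out[sizes > d_max].sum()
    out[sizes > d_max] = 0.0
    out[np.flatnonzero(sizes <= d_max)[-1]] += over
    return out

print(scalp(sizes, frac, d_max))
print(substitute(sizes, frac, d_max))
```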
4136 Modeling Language for Machine Learning
Authors: Tsuyoshi Okita, Tatsuya Niwa
Abstract:
For a given specific problem, finding an efficient algorithm has been the usual object of study. However, an alternative, orthogonal approach exists, called reduction: for a given specific problem, the reduction approach studies how to convert the original problem into subproblems. This paper proposes a formal modeling language to support this reduction approach. We show three examples from the wide area of learning problems. The benefit is fast prototyping of algorithms for a given new problem.
Keywords: Formal language, statistical inference problem, reduction.
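A classic instance of the reduction approach in learning is one-vs-rest, which converts a multiclass problem into one binary subproblem per class; the sketch below is a generic illustration, not the paper's modeling language:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def one_vs_rest(X, y, base=LogisticRegression):
    """Reduction: train one binary classifier per class, then combine
    the sub-solutions by taking the highest class score."""
    classifiers = {c: base(max_iter=1000).fit(X, (y == c).astype(int))
                   for c in np.unique(y)}
    labels = np.array(sorted(classifiers))
    def predict(Xq):
        scores = np.column_stack(
            [classifiers[c].predict_proba(Xq)[:, 1] for c in labels])
        return labels[scores.argmax(axis=1)]
    return predict

X, y = load_iris(return_X_y=True)
predict = one_vs_rest(X, y)
print((predict(X) == y).mean())   # training accuracy of the reduction
```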
4135 Using Automatic Ontology Learning Methods in Human Plausible Reasoning Based Systems
Authors: A. R. Vazifedoost, M. Rahgozar, F. Oroumchian
Abstract:
Knowledge discovery from text and ontology learning are relatively new fields, yet they are applied in many areas, such as Information Retrieval (IR) and its related domains. Human Plausible Reasoning based (HPR) IR systems, for example, need an underlying knowledge base, which is currently built by hand. In this paper we propose an architecture based on ontology learning methods to automatically generate the needed HPR knowledge base.
Keywords: Ontology Learning, Human Plausible Reasoning, knowledge extraction, knowledge representation.
4134 Attribute Selection Methods Comparison for Classification of Diffuse Large B-Cell Lymphoma
Authors: Helyane Bronoski Borges, Júlio Cesar Nievola
Abstract:
The most important subtype of non-Hodgkin's lymphoma is Diffuse Large B-Cell Lymphoma. Approximately 40% of the patients suffering from it respond well to therapy, whereas the remainder need more aggressive treatment in order to improve their chances of survival. Data mining techniques have helped to identify the class of the lymphoma in an efficient manner; despite that, thousands of genes must be processed to obtain the results. This paper presents a comparison of various attribute selection methods aiming to reduce the number of genes to be searched, looking for a more effective procedure as a whole.
Keywords: Attribute selection, data mining.
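A hedged sketch of such a comparison with two stock attribute selection methods from scikit-learn on synthetic microarray-like data (the DLBCL data and the paper's particular selection methods are not reproduced):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# stand-in for DLBCL microarray data: few samples, thousands of genes
X, y = make_classification(n_samples=80, n_features=4000,
                           n_informative=40, random_state=0)

# Note: for a fair evaluation the selection should sit inside the CV
# loop; it is kept outside here only to keep the sketch short.
for name, score in [("ANOVA F", f_classif), ("mutual info", mutual_info_classif)]:
    Xk = SelectKBest(score, k=50).fit_transform(X, y)
    acc = cross_val_score(LinearSVC(max_iter=5000), Xk, y, cv=5).mean()
    print(f"{name:12s} top-50 genes: CV accuracy = {acc:.3f}")
```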