Search results for: nonlinear time history analysis.
11950 Analysis of Medical Data using Data Mining and Formal Concept Analysis
Authors: Anamika Gupta, Naveen Kumar, Vasudha Bhatnagar
Abstract:
This paper focuses on analyzing medical diagnostic data using classification rules in data mining and context reduction in formal concept analysis. It helps in finding redundancies among the various medical examination tests used in the diagnosis of a disease. Classification rules have been derived from positive and negative association rules using the concept lattice structure of Formal Concept Analysis. The context reduction technique given in Formal Concept Analysis, along with the classification rules, has been used to find redundancies among the various medical examination tests. It also determines whether expensive medical tests can be replaced by cheaper ones.
Keywords: Data Mining, Formal Concept Analysis, Medical Data, Negative Classification Rules.
11949 Processing the Medical Sensors Signals Using Fuzzy Inference System
Authors: S. Bouharati, I. Bouharati, C. Benzidane, F. Alleg, M. Belmahdi
Abstract:
Sensors capture several properties of physical measures. Whether they are devices that convert a sensed signal into an electrical signal, chemical sensors, or biosensors, all of them can be considered an interface between physical phenomena and electrical equipment. The problem is the analysis of the multitude of saved settings as input variables, which do not all have the same level of influence on the outputs. The most sensitive parameters therefore have to be identified, since they guide users in gathering information in the field and in the process of model calibration and sensitivity analysis of the effect of each change made. Mathematical models used for such processing become very complex. In this paper, a fuzzy rule-based system is proposed as a solution to this problem. The system collects the available signal information from the sensors. Moreover, the system allows the study of the influence of the various factors that take part in the decision system. Since its inception, fuzzy set theory has been regarded as a formalism suitable for dealing with the imprecision intrinsic to many problems. At the same time, fuzzy sets allow the use of symbolic models. In this study, an example is applied to a variety of physiological parameters that define the state of human health, as an aid to medical diagnosis. The inputs are signals expressing cardiovascular and respiratory system parameters such as blood pressure. Once the system is built, it will be able to predict the state of the patient for any input values.
Keywords: Sensors, sensitivity, fuzzy logic, analysis, physiological parameters, medical diagnosis.
11948 Numerical Investigations on Group Piles’ Lateral Bearing Capacity Considering Interaction of Soil and Structure
Authors: Mahdi Sadeghian, Mahmoud Hassanlourad, Alireza Ardakani, Reza Dinarvand
Abstract:
In this research, the behavior of single (mono) piles under lateral loads, both vertical and oblique, was investigated by the Finite Element Method. In engineering practice, when soil-pile interaction comes into the picture, some simplifications are applied to reduce the design time. As a simplified replacement for soil-pile interaction analysis, the pile can be replaced by a column whose height is equal to the free length of the pile plus a portion of its embedded length. One of the important findings of this study is that columns with an equivalent length (free length plus a part of the buried depth) can be used instead of full soil and pile modeling. The results of the analysis show that the bearing capacity of the soil increases as its internal friction angle increases. The additional length is 6 to 11 times the pile diameter in dense soil, although in loose sandy soil this range might increase.
Keywords: Lateral bearing capacity, pile group, oblique pile, soil-structure interaction, depth of fixity.
11947 Analyzing Current Transformers Saturation Characteristics for Different Connected Burden Using LabVIEW Data Acquisition Tool
Authors: D. Subedi, S. Pradhan
Abstract:
Current transformers are an integral part of the power system because they provide a proportional, safe amount of current for protection and measurement applications. However, when the power system experiences an abnormal situation leading to a huge current flow, this huge current is proportionally injected into the protection and metering circuit. Since the protection and metering equipment is designed to withstand only a certain amount of current with respect to time, these high currents pose a risk to people and equipment. Therefore, during such instances, the CT saturation characteristics have a huge influence on the safety of both people and equipment and on the reliability of the protection and metering system. This paper shows the effect of burden on the accuracy limiting factor/instrument security factor of current transformers and the change in the saturation characteristics of the CTs. The response of the CT to varying levels of overcurrent at different connected burdens is captured using the data acquisition software LabVIEW. Analysis is done on the real-time data gathered using LabVIEW, and the variation of the current transformer saturation characteristics with changes in burden is discussed.
Keywords: Accuracy limiting factor, burden, current transformer, instrument security factor, saturation characteristics.
11946 The Relation between Social Capital and Trust with Social Network Analysis
Authors: Safak Baykal
Abstract:
The purpose of this study is to analyze the relationship between trust and the social capital of people using Social Network Analysis. Two aspects of social capital are focused on: bonding, homophilous social capital (BoSC) and bridging, heterophilous social capital (BrSC). These two aspects diverge from each other according to social theories. The other concept of the study is trust (Tr), namely interpersonal trust, the willingness to ascribe good intentions to, and to have confidence in, the words and actions of other people. The sample group of 61 people was selected from a private firm in the defense industry. The relation between BoSC/BrSC and Tr is shown by using Social Network Analysis (SNA) and statistical analysis of a Likert-type questionnaire. The results of the analysis show that the Cronbach’s alpha value is 0.756 and that the social capital values (BoSC/BrSC) are not correlated with the Tr values of the people.
Keywords: Social capital, interpersonal trust, social network analysis (SNA).
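As a rough illustration of the reliability and correlation checks reported in this abstract, the sketch below computes Cronbach's alpha for a Likert-type item matrix and the Pearson correlation between a social-capital score and a trust score; the item matrix and the scale groupings are made-up placeholders, not the study's questionnaire data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) Likert matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
likert = rng.integers(1, 6, size=(61, 10))       # placeholder questionnaire answers (1-5)
bosc_score = likert[:, :5].mean(axis=1)          # hypothetical BoSC scale items
trust_score = likert[:, 5:].mean(axis=1)         # hypothetical Tr scale items

alpha = cronbach_alpha(likert)
r = np.corrcoef(bosc_score, trust_score)[0, 1]   # Pearson correlation between scales
print(f"alpha = {alpha:.3f}, r = {r:.3f}")
```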
11945 Closed Form Optimal Solution of a Tuned Liquid Column Damper Responding to Earthquake
Authors: A. Farshidianfar, P. Oliazadeh
Abstract:
In this paper, the vibration behavior of a structure equipped with a tuned liquid column damper (TLCD) under harmonic earthquake loading is studied. However, due to the inherent nonlinear liquid damping, there is no doubt that a great deal of computational effort is required to search for the optimum parameters of the TLCD numerically. Therefore, by linearizing the equation of motion of the single-degree-of-freedom structure equipped with the TLCD, closed form solutions of the TLCD-structure system are derived. To assess the reliability of the analytical method, the results have been compared with those of other researchers and show good agreement. Further, the effects of optimal design parameters such as the length ratio and mass ratio on the performance of the TLCD in controlling the responses of a structure are investigated using harmonic earthquake excitation. Finally, the Citicorp Center, which has a very flexible structure, is used as an example to illustrate the design procedure for the TLCD under earthquake excitation.
Keywords: Closed form solution, Earthquake excitation, TLCD.
11944 Left Ventricular Model Using Second Order Electromechanical Coupling: Effects of Viscoelastic Damping
Authors: Elie H. Karam, Antoine B. Abche
Abstract:
It is known that the heart interacts with and adapts to its venous and arterial loading conditions. Various experimental studies and modeling approaches have been developed to investigate the underlying mechanisms. This paper presents a model of the left ventricle derived from nonlinear stress-length myocardial characteristics integrated over a truncated ellipsoidal geometry, and a second-order dynamic mechanism for the excitation-contraction coupling system. The results of the model presented here describe the effects of the viscoelastic damping element of the electromechanical coupling system on the hemodynamic response. Different heart rates are considered to study the pacing effects on the performance of the left ventricle against constant preload and afterload conditions under various damping conditions. The results indicate that the pacing process of the left ventricle has to take into account, among other things, the viscoelastic damping conditions of the myofilament excitation-contraction process.
Keywords: Myocardial sarcomere, cardiac pump, excitation-contraction coupling, viscoelasticity.
11943 A Novel Design for Hybrid Space-Time Block Codes and Spatial Multiplexing Scheme
Authors: Seung-Jun Yu, Jang-Kyun Ahn, Eui-Young Lee, Hyoung-Kyu Song
Abstract:
Space-time block codes (STBC) and spatial multiplexing (SM) are promising techniques that effectively exploit multiple-input multiple-output (MIMO) transmission to achieve more reliable communication and a higher multiplexing rate, respectively. In this paper, we study a practical design for a hybrid scheme with multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems to flexibly maximize the tradeoff between diversity and multiplexing gains. Unlike the existing STBC and SM designs, which are suitable only for integer multiplexing rates, the proposed design can achieve an arbitrary multiplexing rate.
Keywords: Space-time block codes, spatial multiplexing, MIMO-OFDM.
11942 Extreme Temperature Forecast in Mbonge, Cameroon through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution
Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph
Abstract:
In this paper, temperature extremes are forecast by employing the block maxima method of the Generalized Extreme Value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (C.D.C). By considering two sets of data (raw data and simulated data) and two models (stationary and non-stationary) of the GEV distribution, return level analysis is carried out. It is found that, in the stationary model, the return values are constant over time for the raw data, while for the simulated data the return values show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend with an upper bound. This clearly shows that, even though temperatures in the tropics show signs of increasing in the future, there is a maximum temperature that is not exceeded. The results of this paper are vital for agricultural and environmental research.
Keywords: Return level, Generalized Extreme Value (GEV), meteorology, forecasting.
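To make the block-maxima fit and return-level computation concrete, here is a minimal sketch using scipy; the yearly-maximum series is synthetic and the fitted parameters are illustrative, not the C.D.C. values.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum temperatures (block maxima), degrees Celsius.
rng = np.random.default_rng(1)
annual_max = 33 + rng.gumbel(0, 1.2, size=32)

# Fit the GEV distribution (scipy's shape parameter c corresponds to -xi).
c, loc, scale = genextreme.fit(annual_max)

# Return level for a T-year return period: the (1 - 1/T) quantile of the fit.
for T in (10, 25, 50, 100):
    level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.2f} C")
```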
11941 Data Gathering and Analysis for Arabic Historical Documents
Authors: Ali Dulla
Abstract:
This paper introduces a new dataset (and the methodology used to generate it) based on a wide range of historical Arabic documents containing clean data and simple, homogeneous page layouts. The experiments are implemented on printed and handwritten documents obtained from important libraries such as the Qatar Digital Library, the British Library, and the Library of Congress. We have gathered and commented on 150 archival document images from different locations and time periods; the dataset is based on documents from the 17th to the 19th century. It comprises differing page layouts and degradations that challenge text line segmentation methods. Ground truth is produced using the Aletheia tool by PRImA and stored in an XML representation, in the PAGE (Page Analysis and Ground truth Elements) format. The dataset will be made easily available to researchers worldwide for research into the obstacles facing various historical Arabic documents, such as geometric correction.
Keywords: Dataset production, ground truth production, historical documents, arbitrary warping, geometric correction.
11940 Addressing Scheme for IoT Network Using IPv6
Authors: H. Zormati, J. Chebil, J. Bel Hadj Taher
Abstract:
The goal of this paper is to present an addressing scheme that allows for assigning a unique IPv6 address to each node in the Internet of Things (IoT) network. This scheme guarantees uniqueness by extracting the clock skew of each communication device and converting it into an IPv6 address. Simulation analysis confirms that the presented scheme provides reductions in terms of energy consumption, communication overhead, and response time compared to four other addressing schemes: Strong DAD, LEADS, SIPA, and CLOSA.
Keywords: Addressing, IoT, IPv6, network, nodes.
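A rough sketch of the idea of deriving a unique IPv6 address from a measured clock skew is given below; the prefix, the skew values, and the hashing step are illustrative assumptions, not the scheme's exact skew-to-address mapping.

```python
import hashlib
import ipaddress

def ipv6_from_clock_skew(prefix: str, skew_ppm: float) -> ipaddress.IPv6Address:
    """Map a device's clock skew (ppm) to a 64-bit interface identifier.

    Hashing the skew is an illustrative choice; the paper's exact
    conversion of skew to address bits may differ.
    """
    digest = hashlib.sha256(f"{skew_ppm:.6f}".encode()).digest()
    iid = int.from_bytes(digest[:8], "big")                 # 64 bits for the interface ID
    network = int(ipaddress.IPv6Network(prefix).network_address)
    return ipaddress.IPv6Address(network | iid)

# Example: two nodes with slightly different skews get distinct addresses.
print(ipv6_from_clock_skew("2001:db8::/64", 13.492103))
print(ipv6_from_clock_skew("2001:db8::/64", -7.310442))
```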
11939 Self-Organization-Based Approach for Embedded Real-Time System Design
Authors: S. S. Bendib, L. W. Mouss, S. Kalla
Abstract:
This paper proposes a self-organization-based approach for real-time systems design. The addressed issue is the mapping of an application onto an architecture of heterogeneous processors while optimizing both makespan and reliability. Since this problem is NP-hard, a heuristic algorithm is used to obtain approximate solutions efficiently. The proposed approach takes into consideration the quality as well as the diversity of solutions. Indeed, an alternating treatment of the two objectives allows producing solutions of good quality, while a self-organization approach based on the neighborhood structure is used to reorganize solutions and consequently to enhance their diversity. The produced solutions make different compromises between makespan and reliability, giving the user the possibility to select the solution suited to his or her needs.
Keywords: Embedded real-time systems design, makespan, reliability, self-organization, compromises.
11938 Blast Induced Ground Shock Effects on Pile Foundations
Authors: L. B. Jayasinghe, D. P. Thambiratnam, N. Perera, J. H. A. R. Jayasooriya
Abstract:
Due to the increased number of terrorist attacks in recent years, loads induced by explosions need to be incorporated in building designs. For safer performance of a structure, its foundation should have sufficient strength and stability. Therefore, prior to any reconstruction or rehabilitation of a building subjected to blast, it is important to examine the adverse effects on the foundation caused by blast-induced ground shocks. This paper evaluates the effects of a buried explosion on a pile foundation. It treats the dynamic response of the pile in saturated sand using the explicit dynamic nonlinear finite element software LS-DYNA. The blast-induced wave propagation in the soil and the horizontal deformation of the pile are presented, and the results are discussed. Further, a parametric study is carried out to evaluate the effect of varying the explosive shape on the pile response. This information can be used to evaluate the vulnerability of piled foundations to credible blast events as well as to develop guidance for their design.
Keywords: Underground explosion, numerical simulation, pile foundation, saturated soil.
11937 Performance Evaluation of an Amperometric Biosensor using a Simple Microcontroller based Data Acquisition System
Authors: V. G. Sangam, Balasaheb M. Patre
Abstract:
In this paper, we propose a methodology to develop an amperometric biosensor for the analysis of glucose concentration using a simple microcontroller-based data acquisition system. The work involves the development of a Detachable Membrane Unit (an enzyme-based biomembrane) with glucose oxidase immobilized on the membrane, and interfacing it to the signal conditioning system. The current generated by the biosensor for different glucose concentrations was signal conditioned, then acquired and computed by a simple AT89C51 microcontroller. The optimum operating parameters for best performance were found and reported, and a detailed performance evaluation of the biosensor has been carried out. The proposed microcontroller-based biosensor system has a sensitivity of 0.04 V/g/dl, with a resolution of 50 mg/dl. It has exhibited very good inter-day stability observed over up to 30 days. Compared to a reference method such as HPLC, the accuracy of the proposed biosensor system is well within ±1.5%. The system can be used for real-time analysis of glucose concentration in fields such as food and fermentation, and for clinical (in vitro) applications.
Keywords: Biosensor, DMU, glucose oxidase, microcontroller.
11936 Applications of High Intensity Ultrasound to Modify Millet Protein Concentrate Functionality
Authors: B. Nazari, M. A. Mohammadifar, S. Shojaee-Aliabadi, L. Mirmoghtadaie
Abstract:
Millet, as a new source of plant protein, has not been used in food applications due to its poor functional properties. In this study, the effect of high intensity ultrasound (US) (frequency: 20 kHz, continuous flow) at 100% amplitude for varying times (5, 12.5, and 20 min) on the solubility, emulsifying activity index (EAI), emulsion stability (ES), foaming capacity (FC), and foaming stability (FS) of millet protein concentrate (MPC) was evaluated. In addition, the structural properties of the best treatments, such as molecular weight and surface charge, were compared with the control sample to prove the US effect. The US treatments significantly (P<0.05) increased the solubility of the native MPC (65.8±0.6%) at all sonication times, with the maximum solubility recorded for the 12.5 min treatment (96.9±0.82%). The FC of MPC was also significantly affected by the US treatment. Increasing the sonication time up to 12.5 min significantly increased the FC of the native MPC (271.03±4.51 ml), but a further increase reduced it significantly. Minimal improvements were observed in the FS of all sonicated MPC compared to the native MPC. A sonication time of 12.5 min affected the EAI and ES of the native MPC more markedly than 5 and 20 min, which may be attributed to a greater increase in the proteins' tendency to adsorb at the oil-water interface after the US treatment at this time. SDS-PAGE analysis showed changes in the molecular weight of MPC that are attributed to shearing forces created by the cavitation phenomenon. This phenomenon also increased the exposure of negatively charged amino acids on the surface of the US-treated MPC, as demonstrated by Zetasizer data. High intensity ultrasound, as a green technology, can significantly improve the functional properties of MPC and make it usable for food applications.
Keywords: Millet protein concentrate, functional properties, structural properties, high intensity ultrasound.
11935 Drying of Papaya (Carica papaya L.) Using a Microwave-vacuum Dryer
Authors: Kraipat Cheenkachorn, Piyawat Jintanatham, Sarun Rattanaprapa
Abstract:
In the present work, the drying characteristics of fresh papaya (Carica papaya L.) were studied to understand the dehydration process and its behavior. Drying experiments were carried out in a laboratory-scale microwave-vacuum oven. The parameters affecting the drying characteristics, including operating mode (continuous, pulsed), microwave power (400 and 800 W), and vacuum pressure (20, 30, and 40 cmHg), were investigated. For the pulsed mode, two levels of power-off time (60 and 120 s) were used, while the power-on time was fixed at 60 s and the vacuum pressure was fixed at 40 cmHg. For both operating modes, the effects of drying conditions on drying time, drying rate, and effective diffusivity were investigated. The results showed that high microwave power, high vacuum, and the 60 s-on/60 s-off pulsed mode favored the drying rate, as shown by the shortened drying time and increased effective diffusivity. The drying characteristics were then described by Page's model, which showed good agreement with the experimental data.
Keywords: Papaya, microwave-vacuum drying, effective diffusivity, Page's model.
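To illustrate how Page's model can be fitted to thin-layer drying data, the sketch below uses a least-squares fit of MR = exp(-k t^n); the moisture-ratio points are invented for illustration and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page's thin-layer drying model: MR = exp(-k * t**n)."""
    return np.exp(-k * t ** n)

# Hypothetical drying curve: time in minutes, moisture ratio MR.
t = np.array([0, 10, 20, 40, 60, 90, 120], dtype=float)
mr = np.array([1.00, 0.78, 0.60, 0.37, 0.23, 0.11, 0.05])

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0))
print(f"k = {k:.4f}, n = {n:.3f}")
print("fitted MR:", np.round(page_model(t, k, n), 3))
```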
11934 Methane versus Carbon Dioxide: Mitigation Prospects
Authors: Alexander J. Severinsky, Allen L. Sessoms
Abstract:
Atmospheric carbon dioxide (CO2) has dominated the discussion around the causes of climate change. This reflects the 100-year time horizon for all greenhouse gases that has become the norm. The 100-year time horizon is much too long, and yet almost all mitigation efforts, including those set in the near-term frame of within 30 years, are still geared toward it. In this paper, we show that for a 30-year time horizon, methane (CH4) is the greenhouse gas whose radiative forcing exceeds that of CO2. In our analysis, we use the radiative forcing of greenhouse gases in the atmosphere, because it directly affects the rise in temperature on Earth. We found that in 2019, the radiative forcing (RF) of methane was ~2.5 W/m2 and that of carbon dioxide was ~2.1 W/m2. Under a business-as-usual (BAU) scenario until 2050, such forcing would be ~2.8 W/m2 and ~3.1 W/m2 respectively. There is a substantial spread in the data on anthropogenic and natural methane (CH4) emissions, as well as on leakages of natural gas (which is primarily CH4) from industrial production to consumption. For this reason, we estimate the minimum and maximum effects of a reduction of these leakages, and assume an effective immediate reduction by 80%. Such action may serve to reduce the annual radiative forcing of all CH4 emissions by ~15% to ~30%. This translates into a reduction of RF by 2050 from ~2.8 W/m2 to ~2.5 W/m2 in the case of the minimum effect that can be expected, and to ~2.15 W/m2 in the case of the maximum effort to reduce methane leakages. Under the BAU scenario, we find that the RF of CO2 will increase from ~2.1 W/m2 now to ~3.1 W/m2 by 2050. We assume a linear reduction of 50% in anthropogenic emissions over the course of the next 30 years, which would reduce the radiative forcing of CO2 from ~3.1 W/m2 to ~2.9 W/m2. In the case of "net zero," the remaining 50% reduction, covering only anthropogenic CO2 emissions, would have to come either from emission sources or directly from the atmosphere. In this instance, the total reduction would be from ~3.1 W/m2 to ~2.7 W/m2, or ~0.4 W/m2. To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages, ~2.15 W/m2, an additional reduction of the radiative forcing of CO2 of approximately 2.7 - 2.15 = 0.55 W/m2 would be needed. In total, one would need to remove ~660 GT of CO2 from the atmosphere in order to match the maximum reduction of current methane leakages, and ~270 GT of CO2 from emitting sources, to reach "negative emissions". This amounts to over 900 GT of CO2.
Keywords: Methane Leakages, Methane Radiative Forcing, Methane Mitigation, Methane Net Zero.
11933 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System That Includes Servers with Various Capacities
Authors: Yoshiaki Shikata, Nobutane Hanayama
Abstract:
We present a prioritized, limited multi-server processor sharing (PS) system where each server has a different capacity and N (≥2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests to be processed is limited to less than a certain number. Routing strategies for such prioritized, limited multi-server PS systems that take into account the capacity of each server are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (shortening) of the remaining sojourn time of each request receiving service can be calculated by using the number of requests of each class and the priority ratio. Utilising a simulation program which executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is clarified.
Keywords: Processor sharing, multi-server, various capacity, N priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation.
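As a sketch of the per-request service-rate bookkeeping described in this abstract, the snippet below computes the instantaneous processing rate each request receives in one PS server from the class priority ratios; the capacity and the ratios are illustrative assumptions, not the paper's parameter values.

```python
# Instantaneous share of one PS server's capacity per request, weighted by class.
def ps_rates(capacity, in_service, priority_ratio):
    """in_service: list of class indices of the requests currently in the server."""
    total_weight = sum(priority_ratio[c] for c in in_service)
    return [capacity * priority_ratio[c] / total_weight for c in in_service]

priority_ratio = {0: 4.0, 1: 1.0}          # hypothetical ratio between two classes
in_service = [0, 0, 1, 1, 1]               # two high-priority and three low-priority requests
print(ps_rates(capacity=10.0, in_service=in_service, priority_ratio=priority_ratio))
# Each arrival or departure recomputes these rates, which stretches or shrinks the
# remaining sojourn time of every request still in service.
```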
11932 Research on Reservoir Lithology Prediction Based on Residual Neural Network and Squeeze-and-Excitation Neural Network
Authors: Li Kewen, Su Zhaoxin, Wang Xingmou, Zhu Jian Bing
Abstract:
Conventional reservoir prediction methods are not sufficient to explore the implicit relations between seismic attributes, and thus data utilization is low. In order to improve the predictive classification accuracy of reservoir lithology, this paper proposes a deep learning lithology prediction method based on ResNet (Residual Neural Network) and SENet (Squeeze-and-Excitation Neural Network). The neural network model is built and trained by using seismic attribute data and lithology data of the Shengli oilfield, and the nonlinear mapping relationship between seismic attributes and lithology markers is established. The experimental results show that this method can significantly improve the classification effect of reservoir lithology, with a classification accuracy close to 70%. This study can effectively predict the lithology of undrilled areas and provide support for exploration and development.
Keywords: Convolutional neural network, lithology, prediction of reservoir lithology, seismic attributes.
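As a sketch of the squeeze-and-excitation idea used in the SENet part of such a model, the PyTorch module below recalibrates per-attribute channels; the channel count and reduction ratio are illustrative, and the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, length)
        s = x.mean(dim=-1)                     # squeeze: global average per channel
        w = self.fc(s).unsqueeze(-1)           # excitation: per-channel weights in (0, 1)
        return x * w                           # recalibrate the feature maps

x = torch.randn(8, 16, 64)                     # e.g. 16 seismic-attribute channels
print(SEBlock(16)(x).shape)                    # torch.Size([8, 16, 64])
```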
11931 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network
Authors: Zukisa Nante, Wang Zenghui
Abstract:
Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to bring a solution to this problem. The method proposes an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 face images in 40 classes, with 10 face images per class. Firstly, PCA enables the usage of a smaller network, which reduces the training time of the CNN: we get rid of the redundancy and preserve the variance with a smaller number of coefficients. Secondly, the K-Means clustering model is trained using the PCA-compressed data, which selects K-Means clustering centers with better characteristics. Lastly, the K-Means characteristics or features serve as initial values for the CNN and act as its input data. The accuracy and the performance of the proposed method were tested in comparison to other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). During experimentation, our suggested method after 90 epochs achieved the highest performance: 99% accuracy, a 99% F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, with 84%, during the conducted experiments. Therefore, this method proved to be efficient in identifying faces in the images.
Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.
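A minimal sketch of the PCA-then-K-Means front end described above is given below; the image array is a random placeholder for the ORL data, and the hand-off to the CNN (here, cluster-distance features) is an assumption about one way the stages could be chained, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical ORL-like data: 400 flattened face images of 40 subjects.
X = np.random.rand(400, 112 * 92)          # placeholder for real ORL images
y = np.repeat(np.arange(40), 10)

pca = PCA(n_components=50)                  # compress faces, keep most of the variance
X_pca = pca.fit_transform(X)

kmeans = KMeans(n_clusters=40, n_init=10, random_state=0)
kmeans.fit(X_pca)                           # cluster centres act as refined features

# Distance of each face to every cluster centre, used here as the CNN input vector.
X_cnn_input = kmeans.transform(X_pca)
print(X_cnn_input.shape)                    # (400, 40)
```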
11930 Improving Image Segmentation Performance via Edge Preserving Regularization
Authors: Ying-jie Zhang, Li-ling Ge
Abstract:
This paper presents an improved image segmentation model with edge preserving regularization based on the piecewise-smooth Mumford-Shah functional. A level set formulation is considered for the Mumford-Shah functional minimization in segmentation, and the corresponding partial differential equations are solved by backward Euler discretization. To encourage edge preserving regularization, a new edge indicator function is introduced in the level set framework, in which all the grid points used to locate the level set curve are considered to avoid blurring the edges, and a nonlinear smoothness constraint function is applied as a regularization term to smooth the image in the isophote direction instead of the gradient direction. In the implementation, some strategies, such as a new scheme for the extension of the u+ and u- computation at the grid points and a speedup of the convergence, are studied to improve the efficacy of the algorithm. The resulting algorithm has been implemented and compared with previous methods, and has proved efficient in several cases.
Keywords: Energy minimization, image segmentation, level sets, edge regularization.
11929 Development of an ArcGIS Toolbar for Trend Analysis of Climatic Data
Authors: Arnab Bandyopadhyay, Anubhab Pal, Subhajit Debnath
Abstract:
Climate change is a cumulative change in weather patterns over a period of time. Trend analysis using the non-parametric Mann-Kendall test may help to determine the existence and magnitude of any statistically significant trend in climatic data. Another index, the Sen slope, may be used to quantify the magnitude of such trends. A toolbar extension to ESRI ArcGIS named Arc Trends has been developed in this study for performing the above-mentioned tasks. To study the temporal trend of meteorological parameters, 32 years (1971-2002) of monthly meteorological data were collected for 133 selected stations over different agro-ecological regions of India. Both the maximum and minimum temperatures were found to be rising. A significant increasing trend in relative humidity and a consistent significant decreasing trend in wind speed all over the country were found. However, a general increase in rainfall was not found in recent years.
Keywords: Temporal trend, climate change, ArcGIS, Mann-Kendall test, Sen slope.
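For concreteness, a minimal sketch of the Mann-Kendall S statistic and Sen's slope on a single station series is shown below; the rainfall numbers are made up, and the full test (variance correction, ties, significance level) is omitted.

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of signs of all pairwise differences."""
    x = np.asarray(x, dtype=float)
    s = 0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return s

def sen_slope(x):
    """Sen's slope: median of pairwise slopes (x_j - x_i) / (j - i)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

annual_rain = np.array([812., 790., 845., 830., 905., 880., 921., 899.])  # made-up series
print(mann_kendall_s(annual_rain), sen_slope(annual_rain))
```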
11928 Prediction Model for Leukemia Diseases Based on Data Mining Classification Algorithms with Best Accuracy
Authors: Fahd Sabry Esmail, M. Badr Senousy, Mohamed Ragaie
Abstract:
In recent years, there has been an explosion in the use of technology that helps in discovering diseases. For example, DNA microarrays allow us for the first time to obtain a "global" view of the cell. They have great potential to provide accurate medical diagnoses and to help in finding the right treatment and cure for many diseases. Various classification algorithms can be applied to such microarray datasets to devise methods that can predict the occurrence of leukemia. In this study, we compared the classification accuracy and response time among eleven decision tree methods and six rule classifier methods using five performance criteria. The experimental results show that Random Tree produces better results and takes the lowest time to build the model among the tree classifiers. Among the classification rule algorithms, the nearest-neighbor-like algorithm (NNge) is the best due to its high accuracy and the lowest model-building time.
Keywords: Data mining, classification techniques, decision tree, classification rule, leukemia diseases, microarray data.
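The comparison workflow can be sketched with scikit-learn as below; the paper evaluates WEKA-style classifiers (e.g. Random Tree, NNge) on real microarray data, so the estimators and the random data here are stand-ins, not the study's setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Placeholder "microarray" data: 72 samples x 200 gene-expression features.
rng = np.random.default_rng(42)
X = rng.normal(size=(72, 200))
y = rng.integers(0, 2, size=72)            # synthetic leukemia / non-leukemia labels

models = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomizedTree": ExtraTreeClassifier(random_state=0),   # stands in for WEKA's Random Tree
    "NearestNeighbour": KNeighborsClassifier(n_neighbors=3), # stands in for NNge-style rules
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```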
11927 A Novel Digital Calibration Technique for Gain and Offset Mismatch in TIΣΔ ADCs
Authors: Ali Beydoun, Van-Tam Nguyen, Patrick Loumeau
Abstract:
The time-interleaved sigma-delta (TIΣΔ) architecture is a potential candidate for high-bandwidth analog-to-digital converters (ADCs), which remain a bottleneck for software and cognitive radio receivers. However, the performance of the TIΣΔ architecture is limited by the unavoidable gain and offset mismatches resulting from the manufacturing process. This paper presents a novel digital calibration method to compensate for the gain and offset mismatch effects. The proposed method takes advantage of the reconstruction digital signal processing on each channel and requires only a few logic components for implementation. The run-time calibration is estimated at 10 and 15 clock cycles for offset cancellation and gain mismatch calibration, respectively.
Keywords: Sigma-delta, calibration, gain and offset mismatches, analog-to-digital conversion, time-interleaving.
11926 Technological Analysis Questionnaire for Preliminary Feasibility Study on R&D Program
Authors: Seongmin Yim
Abstract:
The Korean government has applied a preliminary feasibility study to new R&D programs of over about $50 million since 2008 as part of the official budget planning process. The investigations of technology, policy, and economics are carried out separately to arrive at a definite result: whether a program is feasible or unfeasible. This paper describes the concept and check-points related to technological analysis from a preliminary evaluation standpoint. First, the fundamental concepts of technological analysis in evaluation systems, such as the Program Assessment Rating Tool (PART) by the Office of Management and Budget (OMB) and the Evaluation Methods of the Department of Energy (DOE) in the United States, and the Green Book in the United Kingdom, are reviewed. After the review, a customized questionnaire for technological analysis is developed. Finally, limitations and further research directions are provided.
Keywords: Preliminary Feasibility Study, R&D Program, Evaluation System, Technological analysis, R&D Logic Analysis.
11925 Frequency-Energy Characteristics of Local Earthquakes using Discrete Wavelet Transform (DWT)
Authors: O. H. Colak, T. C. Destici, S. Ozen, H. Arman, O. Cerezci
Abstract:
The wavelet transform is one of the most important methods used in signal processing. In this study, we introduce the frequency-energy characteristics of local earthquakes using the discrete wavelet transform. The frequency-energy characteristics were analyzed depending on the difference between the P and S wave arrival times and the noise within the records. We found that local earthquakes have similar characteristics. If the frequency-energy characteristics can be found accurately, this gives us a hint for calculating the P and S wave arrival times, and the wavelet transform provides a successful approximation for this. In this study, approximately 100 earthquakes with 500 records were analyzed.
Keywords: Discrete wavelet transform, frequency-energy characteristics, P and S wave arrival times.
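The per-band energy computation can be sketched with PyWavelets as below; the record is random noise standing in for a real seismogram, and the wavelet and decomposition level are assumptions, not the study's settings.

```python
import numpy as np
import pywt

# Hypothetical seismogram: replace with a real local-earthquake record.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
record = np.random.randn(t.size)

coeffs = pywt.wavedec(record, 'db4', level=5)      # discrete wavelet decomposition
energies = [np.sum(c ** 2) for c in coeffs]        # energy per frequency band
total = sum(energies)
for name, e in zip(['a5'] + [f'd{k}' for k in range(5, 0, -1)], energies):
    print(f'{name}: {100 * e / total:.1f}% of total energy')
```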
11924 Classification and Analysis of Risks in Software Engineering
Authors: Hooman Hoodat, Hassan Rashidi
Abstract:
Despite the various methods that exist in software risk management, software projects have a high rate of failure. As the complexity and size of projects increase, managing software development becomes more difficult. In such projects, the need for more analysis and risk assessment is vital. In this paper, a classification for software risks is specified, and the relations between these risks are presented using a risk tree structure. Analysis and assessment of these risks are done using probabilistic calculations. This analysis helps with the qualitative and quantitative assessment of the risk of failure; moreover, it can support the software risk management process. This classification and risk tree structure can be applied in software tools.
Keywords: Risk analysis, risk assessment, risk classification, risk tree.
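A small sketch of a probabilistic calculation on a risk tree is shown below: the probability of a parent risk is combined from independent child risks via OR logic. The tree and the probabilities are invented for illustration, not the paper's classification.

```python
def or_node(child_probs):
    """Probability that at least one independent child risk occurs."""
    p = 1.0
    for q in child_probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical risk tree: project failure <- (schedule risk, technical risk).
schedule_risk = or_node([0.10, 0.05])        # e.g. staff turnover, unrealistic estimates
technical_risk = or_node([0.08, 0.02])       # e.g. unstable requirements, new technology
project_failure = or_node([schedule_risk, technical_risk])
print(f"P(project failure) = {project_failure:.3f}")
```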
11923 Prediction of the Thermal Parameters of a High-Temperature Metallurgical Reactor Using Inverse Heat Transfer
Authors: Mohamed Hafid, Marcel Lacroix
Abstract:
This study presents an inverse analysis for predicting the thermal conductivities and the heat flux of a high-temperature metallurgical reactor simultaneously. Once these thermal parameters are predicted, the time-varying thickness of the protective phase-change bank that covers the inside surface of the brick walls of a metallurgical reactor can be calculated. The enthalpy method is used to solve the melting/solidification process of the protective bank. The inverse model rests on the Levenberg-Marquardt Method (LMM) combined with the Broyden method (BM). A statistical analysis for the thermal parameter estimation is carried out. The effect of the position of the temperature sensors, total number of measurements and measurement noise on the accuracy of inverse predictions is investigated. Recommendations are made concerning the location of temperature sensors.
Keywords: Inverse heat transfer, phase change, metallurgical reactor, Levenberg–Marquardt method, Broyden method, bank thickness.
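To illustrate the Levenberg-Marquardt step of such an inverse procedure, the sketch below recovers a conductivity and a heat flux from noisy sensor temperatures using a deliberately simple steady 1-D forward model; the real study uses a transient enthalpy formulation with phase change, so the model, values, and sensor depths here are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, x_sensors):
    # Hypothetical steady 1-D slab: T(x) = T_hot - (q / k) * x, with unknown k and q.
    k, q = params
    return 1200.0 - (q / k) * x_sensors

x_sensors = np.array([0.05, 0.10, 0.20, 0.35])            # sensor depths in the wall [m]
rng = np.random.default_rng(3)
true_T = forward_model([2.5, 8000.0], x_sensors)          # synthetic "measurements"
measured_T = true_T + rng.normal(0.0, 2.0, true_T.shape)  # added measurement noise

def residuals(params):
    return forward_model(params, x_sensors) - measured_T

# Levenberg-Marquardt least squares, standing in for the LMM step of the paper.
sol = least_squares(residuals, x0=[1.0, 5000.0], method='lm')
print(sol.x)    # estimated [conductivity W/(m K), heat flux W/m2]
```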
11922 A Force-directed Graph Drawing based on the Hierarchical Individual Timestep Method
Authors: T. Matsubayashi, T. Yamada
Abstract:
In this paper, we propose a fast and efficient method for drawing very large-scale graph data. The conventional force-directed method proposed by Fruchterman and Reingold (FR method) is well known. It defines repulsive forces between every pair of nodes and attractive forces between nodes connected by an edge, and calculates the corresponding potential energy. An optimal layout is obtained by iteratively updating node positions to minimize the potential energy. Here, the positions of all nodes are updated at the same time at every global timestep. In the proposed method, each node has its own individual time and time step, and nodes are updated at different frequencies depending on the local situation. The proposed method is inspired by the hierarchical individual time step method used for high-accuracy calculations of dense particle fields, such as star clusters, in astrophysical dynamics. Experiments show that the proposed method outperforms the original FR method in both speed and accuracy. We implement the proposed method on the MDGRAPE-3 PCI-X special-purpose parallel computer and realize a speed enhancement of several hundred times.
Keywords: Visualization, graph drawing, Internet map.
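A minimal global-timestep Fruchterman-Reingold iteration is sketched below as a baseline for the individual-timestep idea; the graph, constants, and iteration count are illustrative, and the paper's hierarchical scheduling is not reproduced.

```python
import numpy as np

def fr_layout(edges, n, iters=50, area=1.0, seed=0):
    """Plain Fruchterman-Reingold layout with a single global timestep."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    k = np.sqrt(area / n)                       # ideal edge length
    for step in range(iters):
        disp = np.zeros_like(pos)
        for i in range(n):                      # repulsive forces between all pairs
            delta = pos[i] - pos
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        for u, v in edges:                      # attractive forces along edges
            delta = pos[u] - pos[v]
            dist = np.linalg.norm(delta) + 1e-9
            f = (dist * dist / k) * delta / dist
            disp[u] -= f
            disp[v] += f
        t = 0.1 * (1 - step / iters)            # cooling schedule limits the step size
        lengths = np.linalg.norm(disp, axis=1) + 1e-9
        pos += disp / lengths[:, None] * np.minimum(lengths, t)[:, None]
    return pos

print(fr_layout([(0, 1), (1, 2), (2, 0), (2, 3)], n=4))
```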
11921 A New Approach for Network Reconfiguration Problem in Order to Deviation Bus Voltage Minimization with Regard to Probabilistic Load Model and DGs
Authors: Mahmood Reza Shakarami, Reza Sedaghati
Abstract:
Recently, distributed generation technologies have received much attention for the potential energy savings and reliability assurances that might be achieved as a result of their widespread adoption. Distribution feeder reconfiguration (DFR) is one of the most important control schemes in distribution networks, and it can be affected by DGs. This paper presents a new approach to DFR in distribution networks considering wind turbines. The main objective of the DFR is to minimize the deviation of the bus voltage. Since DFR is a nonlinear optimization problem, we apply the Adaptive Modified Firefly Optimization (AMFO) approach to solve it. As a result of the conflicting behavior of the single-objective function, a fuzzy-based clustering technique is employed to reach the set of optimal solutions, called Pareto solutions. The approach is tested on the IEEE 32-bus standard test system.
Keywords: Adaptive Modified Firefly Optimization (AMFO), Pareto solutions, feeder reconfiguration, wind turbines, bus voltage.
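The core firefly move used in such metaheuristics can be sketched as below; the attractiveness constants and the toy objective are assumptions, and none of the adaptive modifications of AMFO or the feeder-switching encoding are included.

```python
import numpy as np

def firefly_step(x, objective, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One iteration of the basic firefly algorithm (minimization)."""
    rng = rng or np.random.default_rng()
    fitness = np.array([objective(xi) for xi in x])
    new_x = x.copy()
    for i in range(len(x)):
        for j in range(len(x)):
            if fitness[j] < fitness[i]:                     # j is brighter, so i moves toward j
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)          # attractiveness decays with distance
                new_x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(x.shape[1]) - 0.5)
    return new_x

# Toy objective standing in for the bus-voltage deviation of a candidate configuration.
objective = lambda v: np.sum((v - 1.0) ** 2)
swarm = np.random.default_rng(0).uniform(0.9, 1.1, size=(10, 5))
for _ in range(30):
    swarm = firefly_step(swarm, objective)
print(objective(min(swarm, key=objective)))
```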