Search results for: feature selection methods
3925 Genetic Characterization of Barley Genotypes via Inter-Simple Sequence Repeat
Authors: Mustafa Yorgancılar, Emine Atalay, Necdet Akgün, Ali Topal
Abstract:
In this study, the polymerase chain reaction-based inter-simple sequence repeat (ISSR) DNA fingerprinting technique was used to investigate the genetic relationships among barley crossbreed genotypes in Turkey. Selection based on the genetic base via ISSR is important in breeding programs in terms of breeding time. 14 ISSR primers generated a total of 97 bands, of which 81 (83.35%) were polymorphic. The highest total resolution power (RP) values were obtained from the F2 (0.53) and M16 (0.51) primers. According to the ISSR results, the genetic similarity index ranged between 0.64 and 0.95; Line 3 and Line 6 were the closest genotypes, while Line 36 was the most distant one. The ISSR markers were found to be promising for assessing genetic diversity in barley crossbreed genotypes.
Keywords: Barley, crossbreed, genetic similarity, ISSR.
3924 Application Reliability Method for Concrete Dams
Authors: Mustapha Kamel Mihoubi, Mohamed Essadik Kerkar
Abstract:
Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of works, including when calculating the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering, in our case level 2 methods based on the study of the limit state. The probability of failure is estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) type. By way of comparison, a level 3 method was used, which provides a full analysis of the problem and involves integrating the probability density function of the random variables over the safety domain using the Monte Carlo simulation method. Taking into account the change in stress under the normal, exceptional and extreme load combinations acting on the dam, the calculations provided acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; the shear forces then induce sliding that threatens the reliability of the structure through intolerable failure probability values, especially when uplift increases under a hypothetical failure of the drainage system.
Keywords: Dam, failure, limit-state, Monte Carlo simulation, reliability, probability, simulation, sliding, Taylor.
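To make the level-3 approach mentioned in this abstract concrete, the sketch below estimates a failure probability by Monte Carlo simulation for a hypothetical linear limit state g = R − S and compares it with the corresponding first-order reliability index. All distributions, parameter values and variable names are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 1_000_000

# Hypothetical limit state: g = resistance - load effect; failure occurs when g < 0.
# Both variables are assumed independent and normally distributed for illustration only.
resistance = rng.normal(loc=12.0, scale=1.5, size=n_samples)   # e.g. sliding resistance
load_effect = rng.normal(loc=8.0, scale=2.0, size=n_samples)   # e.g. driving shear force

g = resistance - load_effect
pf_mc = np.mean(g < 0.0)                       # Monte Carlo estimate of failure probability

# For this linear limit state with normal variables, the first-order (FORM-type)
# reliability index has a closed form and serves as a sanity check on the simulation.
beta = (12.0 - 8.0) / np.hypot(1.5, 2.0)
print(f"Monte Carlo pf = {pf_mc:.4e}, reliability index beta = {beta:.2f}")
```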
3923 Determination of Measurement Uncertainty in Extracting of Forming Limit Diagrams
Authors: M. Mahboubkhah, H. Fayazfar
Abstract:
In this research, forming limit diagrams for supertension sheet metals, which are used in the automobile industry, have been obtained. The strains exerted on the sheet metals have been measured with four different methods, and the errors of each method have also been reported. These methods have been compared with one another, and the most efficient and economical way of measuring the strains exerted on sheet metals has been identified. In this paper, the total error and uncertainty of the FLD extraction procedures have been derived. Determination of the measurement uncertainty in extracting the FLD is of great importance in the design and analysis of the sheet metal forming process.
Keywords: Forming Limit Diagram, Major and Minor Strain, Measurement Uncertainty.
3922 Estimation of Real Power Transfer Allocation Using Intelligent Systems
Authors: H. Shareef, A. Mohamed, S. A. Khalid, Aziah Khamis
Abstract:
This paper presents the application of artificial intelligence (AI) techniques, namely the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), to estimate the real power transfer between generators and loads. Since these AI techniques adopt supervised learning, the modified nodal equation (MNE) method is first used to determine the real power contribution from each generator to the loads. The results of the MNE method and load flow information are then utilized to estimate the power transfer using the AI techniques. The 25-bus equivalent system of south Malaysia is utilized as a test system to illustrate the effectiveness of both AI methods compared to the MNE method. The mean squared errors of the ANN and ANFIS power transfer allocation estimates are 1.19E-05 and 2.97E-05, respectively. Furthermore, the ANN and ANFIS methods compute the generator contributions to loads within 20.99 ms and 39.37 ms, respectively, whereas the MNE method takes 360 ms for the same real power transfer allocation calculation.
Keywords: Artificial intelligence, Power tracing, Artificial neural network, ANFIS, Power system deregulation.
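A minimal sketch of the supervised-learning step described above, using scikit-learn's MLPRegressor as a stand-in for the paper's ANN; the input features and target values are random placeholders, since the load-flow data and MNE-computed allocations are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Placeholder inputs (e.g. bus voltages, angles, line flows) and a placeholder
# target standing in for the MNE-computed generator-to-load power transfer.
X = rng.random((500, 6))
y = X @ rng.random(6) + 0.05 * rng.random(500)

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=1)
model.fit(X, y)
print("training MSE:", mean_squared_error(y, model.predict(X)))
```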
3921 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves
Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada
Abstract:
Sustainable development is the nominal goal of most countries at present. In general, fossil fuels are the development mainstay of most countries. Regrettably, the fossil fuel consumption rate is very high, and the world will soon face the problem of conventional fuel depletion. In addition, there are many environmental pollution problems resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources such as wind, solar and wave energy. Energy can be obtained from the motion of sea waves almost all the time. However, power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy. In addition, renewable energy usage of sea waves guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems such as the oscillating bodies system, the pendulum gate system, the ocean wave dragon system and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. The paper provides a widespread overview of the different design alternatives for sea wave energy converter systems. The considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore which is completely closed except at its base, which has an open area for gathering moving sea waves. The motion of the sea waves pushes the air up and down through a suitable Wells turbine for generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting the appropriate settings of some effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, this paper compares the theoretical and experimental results of the built experimental prototype.
Keywords: Renewable energy, oscillating water column, multi-criteria selection, Wells turbine.
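The AHP step of the selection model can be illustrated with the principal-eigenvector weighting below; the pairwise comparison values are invented for illustration and do not reproduce the paper's judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four wave-energy alternatives
# (oscillating bodies, pendulum gate, wave dragon, OWC); values are illustrative only.
A = np.array([
    [1.0, 2.0, 3.0, 1/2],
    [1/2, 1.0, 2.0, 1/3],
    [1/3, 1/2, 1.0, 1/4],
    [2.0, 3.0, 4.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))               # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # AHP priority (weight) vector

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)           # consistency index
CR = CI / 0.90                                 # Saaty random index RI = 0.90 for n = 4
print("priorities:", np.round(weights, 3), "consistency ratio:", round(CR, 3))
```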
3920 Optimal Controller with Backstepping and BELBIC for Single-Link Flexible Manipulator
Authors: Ali Reza Sahab, Amir Gholami Pastaki
Abstract:
In this paper, the backstepping method (BM) is proposed for a single-link flexible mechanical manipulator. In each step of this method a positive value is obtained. Selection of the gain values is very important because the controller behaves differently for each set of values; improper selection of these gains can lead to instability of the system. In order to choose proper gain values, the BELBIC method has been used in this work. Finally, to prove the efficiency of this method, the results of the proposed model are compared with those of a robust controller. Results show that the combination of backstepping and BELBIC presented here can stabilize the system with higher speed, shorter settling time and lower overshoot than the robust controller.
Keywords: single-link flexible manipulator, backstepping, BELBIC
3919 Wind Speed Data Analysis using Wavelet Transform
Authors: S. Avdakovic, A. Lukac, A. Nuhanovic, M. Music
Abstract:
Renewable energy systems are becoming a topic of great interest and investment in the world. In recent years, wind power generation has experienced very fast development worldwide. For planning and successful implementation of good wind power plant projects, wind potential measurements are required. In these projects, the effective choice of the micro-location for wind potential measurements, the installation of the measurement station with appropriate measuring equipment, its maintenance, and the analysis of the collected data on wind potential characteristics are of great importance. In this paper, a wavelet transform has been applied to analyze wind speed data in order to gain insight into the characteristics of the wind and to select suitable locations that could be the subject of wind farm construction. This approach shows that the wavelet transform can be a useful tool in the investigation of wind potential.
Keywords: Wind potential, wind speed data, wavelet transform.
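A minimal sketch of the wavelet analysis described above, using the PyWavelets package on a synthetic wind speed series; the signal, wavelet choice and decomposition level are assumptions made only for illustration.

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical 10-minute average wind speed series (m/s); real measurement data
# from the monitoring station would be used instead.
rng = np.random.default_rng(2)
t = np.arange(1024)
wind_speed = 7 + 2 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.8, t.size)

# Multilevel discrete wavelet decomposition: the approximation cA4 captures the slow
# trend, while the detail coefficients cD4..cD1 capture progressively faster fluctuations.
coeffs = pywt.wavedec(wind_speed, wavelet='db4', level=4)
cA4, cD4, cD3, cD2, cD1 = coeffs
print([c.shape for c in coeffs])
```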
3918 Artificial Intelligence-Based Detection of Individuals Suffering from Vestibular Disorder
Authors: D. Hişam, S. İkizoğlu
Abstract:
Identifying the problem behind a balance disorder is one of the most interesting topics in the medical literature. This study advances the development of artificial intelligence (AI) algorithms by applying multiple machine learning (ML) models to gait sensory data collected from humans in order to classify normal people and those suffering from vestibular system (VS) problems. Although AI is widely utilized as a diagnostic tool in medicine, AI models have not previously been trained on raw data to perform feature extraction and identify VS disorders. In this study, three ML models, the Random Forest classifier (RF), Extreme Gradient Boosting (XGB), and K-Nearest Neighbor (KNN), have been trained to detect VS disorder, and the performance of the algorithms has been compared using accuracy, recall, precision, and F1-score. With an accuracy of 95.28%, the Random Forest classifier was the most accurate model.
Keywords: Vestibular disorder, machine learning, random forest classifier, k-nearest neighbor, extreme gradient boosting.
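A rough sketch of the model comparison reported above, using scikit-learn's Random Forest and KNN on placeholder gait-like features (XGBoost would be used analogously via the separate xgboost package); the data, labels and feature count are invented and carry no clinical meaning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical stand-in for the gait sensor features; the paper trains on raw
# sensory data, which is not reproduced here.
rng = np.random.default_rng(3)
X = rng.random((300, 12))
y = (X[:, 0] + 0.3 * rng.random(300) > 0.6).astype(int)   # 1 = VS disorder (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
for name, clf in [("RF", RandomForestClassifier(random_state=3)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, y_hat), precision_score(y_te, y_hat),
          recall_score(y_te, y_hat), f1_score(y_te, y_hat))
```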
3917 The Delaying Influence of Degradation on the Divestment of Gas Turbines for Associated Gas Utilisation: Part 1
Authors: Mafel Obhuo, Dodeye I. Igbong, Duabari S. Aziaka, Pericles Pilidis
Abstract:
An important feature of the exploitation of associated gas as fuel for gas turbine engines is its declining supply; when exploiting this resource, the divestment of prime movers becomes very important as the fuel supply diminishes with time. This paper explores the influence of engine degradation on the timing of divestments. Hypothetical but realistic gas turbine engines were modelled with Turbomatch, the Cranfield University gas turbine performance simulation tool. The results were used in three degradation scenarios within the TERA (Techno-economic and Environmental Risk Analysis) framework to develop economic models. An optimisation with genetic algorithms was carried out to maximise the economic benefit. The results show that degradation has a significant impact: it delays the divestment of power plants while they are running less efficiently. Over a 20-year investment, decreases of $0.11bn, $0.26bn and $0.45bn (billion US dollars) were observed for the three degradation scenarios relative to the clean case.
Keywords: Economic return, flared associated gas, net present value, optimisation.
3916 An Estimating Parameter of the Mean in Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods
Authors: Autcha Araveeporn
Abstract:
This paper compares parameter estimation of the mean of a normal distribution by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the average of the data, the Bayes estimator is obtained from the prior distribution, and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After the parameter is estimated by each method, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variances of 4, 9, and 16, with the sample size set to 10, 20, 30, and 50. From the results, it can be seen that the ML and MCMC estimates are perceivably different from the true parameter when the sample size is 10 or 20 with variance 16. Furthermore, the Bayes estimator, obtained from a prior distribution with mean 1 and variance 12, showed a significant difference in the mean for variance 9 at sample sizes 10 and 20.
Keywords: Bayes method, Markov Chain Monte Carlo method, Maximum Likelihood method, normal distribution.
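The ML and Bayes estimators compared in the abstract can be sketched as below for one of the stated settings (true mean 2, variance 9, n = 20, prior mean 1, prior variance 12); the known-variance conjugate form is an assumption made for brevity, and the paper's MCMC estimator would instead be approximated by Gibbs sampling from the posterior.

```python
import numpy as np

rng = np.random.default_rng(4)

# One simulation setting from the abstract: true mean 2, variance 9, sample size 20.
mu_true, sigma2, n = 2.0, 9.0, 20
x = rng.normal(mu_true, np.sqrt(sigma2), size=n)

# Maximum likelihood estimator: the sample mean.
mu_ml = x.mean()

# Conjugate Bayes estimator with the prior used in the paper (mean 1, variance 12),
# assuming the data variance is known; the posterior mean shrinks x-bar toward 1.
mu0, tau2 = 1.0, 12.0
post_prec = 1.0 / tau2 + n / sigma2
mu_bayes = (mu0 / tau2 + n * x.mean() / sigma2) / post_prec

print("ML estimate:", round(mu_ml, 3), "Bayes estimate:", round(mu_bayes, 3))
```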
3915 The Efficacy of Technology in Enhancing the Development and Learning of Children (0 – 5 Years)
Authors: Adesina, Olusola Joseph
Abstract:
The use of technological tools in the classroom setting has drawn the interest of researchers all over the world in recent times. Technology has been identified as a potential tool to aid learning, especially during the early childhood stage. The main objective is to assist the upcoming younger generations in acquiring the necessary skills for cognitive development, which later enhances an effective teaching-learning process. The integration of technology in early childhood requires a careful selection of devices that will assist both the children and the teachers or caregivers. This paper therefore examines selected literature evidence and highlights the efficacy of various technological tools in enhancing the development and learning of children (0–5 years). Conclusions and recommendations are also drawn in this paper.
Keywords: Development, Efficacy, Learning, Technological Device.
3914 Factors Determining Selection of Essential Nutrition Supplements
Authors: Daniel C. S. Lim
Abstract:
There are numerous nutritional supplements, such as multivitamins and nutrition drinks, on the market today. Many of these supplements are expensive and tend to be driven commercially by business decisions and big marketing budgets. Many of the costs are ultimately borne by the end user in the quest to keep to a healthy lifestyle. This paper proposes a system with a list of ten determinants for gauging the value of various supplements. It suggests variables such as composition, safety, efficacy and bioavailability, as well as several other considerations. These guidelines can help to tackle many of the issues that people of all ages face in the way they receive essential nutrients. The system also aims to promote and improve the safety and choice of foods and supplements. In so doing, it aims to promote individuals' or populations' control over their own health and to reduce the growing health care burden on society.
Keywords: Nutritional supplements, vitamins and minerals, bioavailability, supplementation determinants, nutrition guidelines.
3913 Unsteady Water Boundary Layer Flow with Non-Uniform Mass Transfer
Authors: G. Revathi, P. Saikrishnan
Abstract:
In the present analysis, an unsteady laminar forced convection water boundary layer flow is considered. The fluid properties, such as viscosity and Prandtl number, are taken as variables that are inversely proportional to temperature. Using the quasi-linearization technique, the nonlinear coupled partial differential equations are linearized, and the numerical solutions are obtained using an implicit finite difference scheme with an appropriate selection of step sizes. Non-similar solutions have been obtained from the starting point of the stream-wise coordinate to the point where the skin friction value vanishes. The effect of non-uniform mass transfer through a slot along the surface of the cylinder on the skin friction and heat transfer coefficients is studied.
Keywords: Boundary layer, heat transfer, non-similar solution, non-uniform mass, unsteady flow.
3912 Relation between Roots and Tangent Lines of Function in Fractional Dimensions: A Method for Optimization Problems
Authors: Ali Dorostkar
Abstract:
In this paper, a basic schematic of a fractional-dimensional optimization problem is presented. A method is presented based on a relation between the roots and tangent lines of a function in fractional dimensions for an arbitrary initial point. It is shown that for each polynomial function of order N, at least N tangent lines must exist in fractional dimensions 0 < α < N+1 which pass exactly through all the roots of the proposed function. A geometrical analysis of tangent lines in fractional dimensions is also presented to clarify the proposed method more intuitively. Results show that with an appropriate selection of fractional dimensions, the roots can be found directly. The method offers a different direction for optimization problems through the use of fractional dimensions.
Keywords: Tangent line, fractional dimension, root, optimization problem.
3911 Totally Integrated Smart Energy System through Data Acquisition via Remote Location
Authors: Muhammad Tahir Qadri, M. Irfan Anis, M. Nawaz Irshad Khan
Abstract:
This paper discusses real-time control of an energy management system using the data acquisition tools of LabVIEW. The main idea was to interface the station (PC) with the system and publish the data on the internet using LabVIEW. In this work, the controlling and switching of 3-phase AC loads are done effectively and efficiently, and the phases are sensed through devices. In case of any failure, the attached generator starts functioning automatically. The computer sends commands to the system, and the system responds to the requests. A key feature is the ability to access and control the system worldwide using the World Wide Web. This control can be carried out at any time and from anywhere, allowing energy to be used effectively, especially in developing countries where energy management is a big problem. In this system, totally integrated devices are operated via a remote location.
Keywords: VI-server, remote access, telemetry, data acquisition, web server.
3910 Real-Time Control of a Two-Wheeled Inverted Pendulum Mobile Robot
Authors: S. W. Nawawi, M. N. Ahmad, J. H. S. Osman
Abstract:
Research on two-wheeled inverted pendulum (TWIP) mobile robots, commonly known as balancing robots, has gained momentum over the last decade in a number of robotics laboratories around the world. This paper describes the hardware design of such a robot. The objective of the design is to develop a TWIP mobile robot, together with a MATLAB interfacing configuration, to be used as a flexible platform comprising an embedded unstable linear plant intended for research and teaching purposes. Issues such as the selection of actuators and sensors, signal processing units, MATLAB Real-Time Workshop coding, modeling, and the control scheme are addressed and discussed. The system is then tested using a well-known state feedback controller to verify its functionality.
Keywords: Embedded System, Two-wheeled Inverted Pendulum Mobile Robot.
3909 Design of CMOS CFOA Based on Pseudo Operational Transconductance Amplifier
Authors: Hassan Jassim Motlak
Abstract:
A novel design technique employing a CMOS Current Feedback Operational Amplifier (CFOA) is presented. The very low power consumption of the pseudo-OTA is exploited to decrease the total power consumption of the proposed CFOA. The design approach uses a pseudo-OTA as the input stage cascaded with a buffer stage. Moreover, the DC input offset voltage and harmonic distortion (HD) of the proposed CFOA are very low compared with the conventional CMOS CFOA, due to the symmetrical input stage. PSpice simulation results are obtained using 0.18 μm MIETEC CMOS process parameters, a supply voltage of ±1.2 V and a 50 μA biasing current. The PSpice simulations show an excellent improvement of the proposed CFOA over the existing CMOS CFOA. Some of these performance parameters are, for example, a DC gain of 62 dB, an open-loop gain-bandwidth product of 108 MHz, a slew rate (SR+) of +71.2 V/μs, a THD of -63 dB and a DC power consumption (PC) of 2 mW.
Keywords: Pseudo-OTA used CMOS CFOA, low power CFOA, high-performance CFOA, novel CFOA.
3908 Event Template Generation for News Articles
Authors: A. Kowcika, E. Umamaheswari, T.V. Geetha
Abstract:
In this paper, we focus on event extraction from Tamil news articles. The system utilizes a scoring scheme for extracting and grouping event-specific sentences. Using this scoring scheme, event-specific clustering is performed across multiple documents. Events are extracted from each document using a scoring scheme based on a feature score and a condition score, and event-specific sentences are clustered from multiple documents using the same scheme. The proposed system builds the event template based on a user-specified query. The templates are filled with event-specific details such as person, location and timeline extracted from the formed clusters. The proposed system applies these methodologies to Tamil news articles that have been enconverted into UNL graphs using a Tamil-to-UNL enconverter. The main intention of this work is to generate an event-based template.
Keywords: Event Extraction, Score based Clustering, Segmentation, Template Generation.
3907 Increased Capacity of Information Hiding in LSB's Method for Text and Image
Authors: H.B.Kekre, Archana Athawale, Pallavi N.Halarnkar
Abstract:
Steganography, derived from Greek, literally means "covered writing". It includes a vast array of secret communication methods that conceal the message's very existence. These methods include invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications. This paper proposes a new, improved version of the Least Significant Bit (LSB) method. The proposed approach is simple to implement compared to the Pixel Value Differencing (PVD) method, yet achieves high embedding capacity and imperceptibility. The proposed method can also be applied to 24-bit color images and achieves an embedding capacity much higher than that of PVD.
Keywords: Information Hiding, LSB Matching, PVD Steganography.
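A minimal sketch of classical LSB embedding and extraction on a byte array, included only to make the baseline that the paper improves upon concrete; the cover data and message are placeholders, and the paper's capacity-increasing modification is not reproduced here.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed one message bit into the least significant bit of each cover byte."""
    stego = cover.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # clear the LSB, then set it
    return stego

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits message bits from the stego bytes."""
    return stego[:n_bits] & 1

# Hypothetical cover: flattened 8-bit grayscale pixels; a 24-bit color image would
# simply offer three channels (and roughly triple the capacity).
cover = np.arange(256, dtype=np.uint8)
message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

stego = embed_lsb(cover, message)
assert np.array_equal(extract_lsb(stego, message.size), message)
```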
3906 Evaluation of Traditional Methods in Construction and Their Effects on Reinforced-Concrete Buildings Behavior
Authors: E. H. N. Gashti, M. Zarrini, M. Irannezhad, J. R. Langroudi
Abstract:
Using ETABS software, this study analyzed 23 buildings to evaluate the effects of mistakes made during the construction phase on the structural behavior of buildings. For modelling, two different loadings were assumed: 1) the design loading and 2) the loading due to the effects of mistakes in the construction phase. The results determined that traditional construction methods resulted in a significant increase in dead loads and consequently intensified the displacements and base shears of the buildings under seismic loads.
Keywords: Reinforced-concrete buildings, Construction mistakes, Base-shear, displacements, Failure.
3905 Automated Particle Picking based on Correlation Peak Shape Analysis and Iterative Classification
Authors: Hrabe Thomas, Beck Florian, Nickell Stephan
Abstract:
Cryo-electron microscopy (CEM) in combination with single particle analysis (SPA) is a widely used technique for elucidating structural details of macromolecular assemblies at close-to-atomic resolutions. However, the development of automated software for SPA processing is still vital, since thousands to millions of individual particle images need to be processed. Here, we present our workflow for automated particle picking. Our approach integrates peak shape analysis with the classical correlation approach and uses an iterative classification to separate macromolecules from background. This particle selection workflow furthermore provides a robust means for SPA with little user interaction. The performance of the presented tools is assessed by processing simulated and experimental data.
Keywords: Cryo-electron Microscopy, Single Particle Analysis, Image Processing.
3904 Parallel Image Compression and Analysis with Wavelets
Authors: M. Kutila, J. Viitanen
Abstract:
This paper presents image compression with a wavelet-based method. The wavelet transformation divides the image into low-pass and high-pass filtered parts. The traditional JPEG compression technique requires lower computation power with acceptable losses when only compression is needed. However, there is an obvious need for wavelet-based methods in certain circumstances. The presented methods are intended for applications in which image analysis is done in parallel with compression. Furthermore, the high frequency bands can be used to detect changes or edges. Wavelets enable hierarchical analysis of the low-pass filtered sub-images: the first analysis can be done on a small image, and only if anything interesting is found is the whole image processed or reconstructed.
Keywords: Image compression, JPEG, wavelet, VLC.
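A small sketch of the wavelet split described above, using PyWavelets on a placeholder image block: the low-pass sub-image supports hierarchical analysis, while thresholding the high-pass bands gives a crude form of compression. The image, wavelet and threshold are assumptions for illustration.

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical 8-bit grayscale image block; a real image array would be used instead.
rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64)).astype(float)

# Single-level 2-D DWT: cA is the low-pass sub-image used for hierarchical analysis,
# (cH, cV, cD) are the high-pass bands that expose edges and changes.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# Crude compression: discard small high-frequency coefficients, then reconstruct.
threshold = 10.0
cH, cV, cD = (np.where(np.abs(c) < threshold, 0.0, c) for c in (cH, cV, cD))
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
print(img.shape, reconstructed.shape)
```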
3903 Adaptive Bidirectional Flow for Image Interpolation and Enhancement
Authors: Shujun Fu, Qiuqi Ruan, Wenqia Wang
Abstract:
Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer visually from the effects of blurred edges and jagged artifacts in the image to some extent. This paper presents an adaptive, feature-preserving bidirectional flow process, where an inverse diffusion is performed to sharpen edges along the directions normal to the isophote lines (edges), while a normal diffusion is done to remove artifacts ("jaggies") along the tangent directions. In order to preserve image features such as edges, corners and textures, the nonlinear diffusion coefficients are locally adjusted according to the directional derivatives of the image. Experimental results on synthetic and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolations.
Keywords: anisotropic diffusion, bidirectional flow, directional derivatives, edge enhancement, image interpolation, inverse flow, shock filter.
3902 Tonal Pitch Structure as a Tool of Social Consolidation
Authors: Piotr Podlipniak
Abstract:
This paper proposes that in the course of evolution, pitch structure became a human-specific tool of communication, the function of which is to induce emotional states such as uncertainty and cohesion. By eliciting these emotions during collective music performance, people are able to unconsciously give cues concerning social acceptance. This is probably one of the reasons why people in all cultures collectively perform tonal music. It is also suggested that tonal pitch structure had been invented socially before it became an evolutionary innovation of hominines. This means that a predisposition to organize pitches tonally evolved by means of the 'Baldwin effect' – a process in which natural selection transforms the learned response of an organism into an instinctive response. In the proposed hypothetical evolutionary scenario of the emergence of tonal pitch structure, social forces such as the need for closer cooperation play the crucial role.
Keywords: Emotion, evolution, tonality, social consolidation.
3901 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies
Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk
Abstract:
Recently, the application of AI-powered algorithms in healthcare has continued to flourish. In particular, access to healthcare information, including patient health history, diagnostic data, and PII (Personally Identifiable Information), is paramount in the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person's information and privacy has become even more important. Arguably, the increased adoption of healthcare AI has resulted in a significant focus on the security risks to, and protection measures for, the security and privacy of healthcare data, leading to escalated analyses and enforcement. Since these challenges arise from the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems, and projects in this area propose AI-powered safeguards and policies/laws to protect the privacy of healthcare data. This project presents the best-in-class techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods such as federated learning, cryptographic techniques, differential privacy methods, and hybrid methods are discussed together with potential cyber threats, data security concerns, and prospects. The project also discusses some of the relevant data security acts/laws that govern the collection, storage, and processing of healthcare data to guarantee that owners' privacy is preserved. This inquiry discusses various gaps and uncertainties associated with healthcare AI data collection procedures and identifies potential correction/mitigation measures.
Keywords: Data privacy, artificial intelligence, healthcare AI, data sharing, healthcare organizations.
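Of the privacy-protecting methods listed above, differential privacy is the easiest to show in a few lines; the sketch below applies the standard Laplace mechanism to a hypothetical patient count query (the query, count and epsilon values are invented for illustration).

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, rng: np.random.Generator) -> float:
    """Release a counting query with epsilon-differential privacy.

    For a counting query the sensitivity is 1, so Laplace noise with scale
    1/epsilon masks the presence or absence of any single patient record.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(6)
# Hypothetical query: number of patients with a given diagnosis in the dataset.
true_count = 128
for eps in (0.1, 1.0, 10.0):          # smaller epsilon = stronger privacy, more noise
    print(eps, laplace_count(true_count, eps, rng))
```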
3900 Introduction to Electron Spectroscopy for Surfaces Characterization
Authors: Abdelkader Benzian
Abstract:
Spectroscopy is the study of the spectrum produced by the radiation-matter interaction, which requires the study of electromagnetic radiation (or electrons) emitted, absorbed, or scattered by matter. Spectral analysis therefore uses spectrometers, which enable us to obtain curves expressing the distribution of the emitted energy (the spectrum). The analysis of emission spectra can thus be carried out by several methods, depending on the range of radiation energy. The most common methods used are Auger Electron Spectroscopy (AES) and Electron Energy Loss Spectroscopy (EELS), which allow the determination of the atomic structure of the surface. This paper focuses essentially on Electron Energy Loss Spectroscopy.
Keywords: Dielectric, plasmon, mean free path, spectroscopy of electron energy losses.
3899 Initialization Method of Reference Vectors for Improvement of Recognition Accuracy in LVQ
Authors: Yuji Mizuno, Hiroshi Mabuchi
Abstract:
The initial values of the reference vectors have a significant influence on recognition accuracy in LVQ. There are several existing techniques, such as SOM and k-means, for setting the initial values of reference vectors, each of which has provided some positive results. However, those results are not sufficient for the improvement of recognition accuracy. This study proposes an ACO-based method for initializing reference vectors with the aim of achieving recognition accuracy higher than that obtained through conventional methods. Moreover, we demonstrate the effectiveness of the proposed method by applying it to the wine data and English vowel data and comparing its results with those of conventional methods.
Keywords: Clustering, LVQ, ACO, SOM, k-means.
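For context, the sketch below shows the conventional baseline the abstract compares against: k-means initialisation of the reference vectors followed by standard LVQ1 updates on the wine data. The proposed ACO-based initialisation would replace the k-means step; the prototype counts, learning rate and epoch count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine

X, y = load_wine(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardise the features

# Conventional initialisation: two reference vectors per class from k-means centroids.
protos, labels = [], []
for c in np.unique(y):
    km = KMeans(n_clusters=2, n_init=10, random_state=7).fit(X[y == c])
    protos.append(km.cluster_centers_)
    labels.extend([c, c])
protos = np.vstack(protos)
labels = np.array(labels)

# Standard LVQ1: move the winning prototype toward (correct class) or away
# from (wrong class) each training sample.
alpha = 0.05
rng = np.random.default_rng(7)
for _ in range(20):                            # training epochs
    for i in rng.permutation(len(X)):
        j = np.argmin(np.linalg.norm(protos - X[i], axis=1))   # winning prototype
        sign = 1.0 if labels[j] == y[i] else -1.0
        protos[j] += sign * alpha * (X[i] - protos[j])
```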
3898 A Soft Set based Group Decision Making Method with Criteria Weight
Authors: Samsiah Abdul Razak, Daud Mohamad
Abstract:
Molodtsov's soft set theory was originally proposed as a general mathematical tool for dealing with uncertainty problems. The matrix form has been introduced in soft set theory and some of its properties have been discussed. However, the existing formulation of the soft matrix in group decision making problems uses only equal importance weights for the criteria, which does not reflect the true opinion of the decision maker on each criterion. The aim of this paper is to propose a method for solving group decision making problems that incorporates the importance of the criteria by using soft matrices in a more objective manner. The weight of each criterion is calculated using the Analytic Hierarchy Process (AHP) method. An example of a house selection process is given to illustrate the effectiveness of the proposed method.
Keywords: Soft set, soft matrix, soft max-min decision making (SMmDM), Analytic Hierarchy Process (AHP).
3897 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities
Authors: A. Appe, B. Poluparthi, L. Kasivajjula, U. Mv, S. Bagadi, P. Modi, A. Singh, H. Gunupudi, S. Troiano, J. Paul, J. Stovall, J. Yamamoto
Abstract:
The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework that helps identify the factors impacting the market share of a healthcare provider facility or hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework that aids strategists in formulating key decisions to improve the facility's market share, which in turn improves the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key facilities in Washington State over about three years of history are considered. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification and a regression approach to evaluate and predict market share, respectively. A model-agnostic technique, SHAP (SHapley Additive exPlanations), is leveraged to quantify the relative importance of the features impacting market share. Typical techniques in the literature quantify the degree of competitiveness among facilities using an empirical method to calculate a competitive factor that interprets the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are identified. Leveraging the identified competitors, a Random Forest regression model is developed and fine-tuned to predict market share. To identify the key drivers of market share at an overall level, the permutation feature importance of the attributes is calculated. For the relative quantification of features at the facility level, SHAP, a model-agnostic explainer, is incorporated; this helps identify and rank the attributes impacting market share at each facility. This approach proposes an amalgamation of two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias and drive strategic business decisions.
Keywords: Competition, DAGs, hospital, healthcare, machine learning, market share, random forest, SHAP.
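A compact sketch of the overall-level driver-ranking step (random forest plus permutation feature importance); the facility features, their names and the target construction are entirely invented placeholders, and the SHAP-based facility-level attribution described in the abstract is not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical facility-level features standing in for the study's attributes
# (demographics, encounter mix, competitor counts, etc.); column names are invented.
rng = np.random.default_rng(8)
df = pd.DataFrame({
    "bed_count": rng.integers(50, 600, 200),
    "physician_count": rng.integers(20, 400, 200),
    "median_income": rng.normal(60_000, 12_000, 200),
    "competitor_count": rng.integers(1, 15, 200),
})
market_share = (0.4 * df["bed_count"] / 600
                - 0.3 * df["competitor_count"] / 15
                + rng.normal(0, 0.02, 200))

X_tr, X_te, y_tr, y_te = train_test_split(df, market_share, random_state=8)
model = RandomForestRegressor(random_state=8).fit(X_tr, y_tr)

# Overall-level driver ranking, as in the abstract; SHAP would add the
# facility-level (per-row) attributions on top of this.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=8)
for name, imp in sorted(zip(df.columns, result.importances_mean), key=lambda p: -p[1]):
    print(name, round(imp, 4))
```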
3896 Modeling of Pulping of Sugar Maple Using Advanced Neural Network Learning
Authors: W. D. Wan Rosli, Z. Zainuddin, R. Lanouette, S. Sathasivam
Abstract:
This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were applied to the pulping of sugar maple problem. Three-layer feedforward neural networks, trained using Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
Keywords: Convergence, Modeling, Neural Networks, Preconditioned Conjugate Gradient.
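The preconditioned conjugate gradient iteration that underlies the PCGF/PCGP/PCGB training methods can be sketched for a linear symmetric positive-definite system as below, with a simple Jacobi (diagonal) preconditioner; this shows the linear-algebra kernel only, not the network training itself, and the test system is an invented example.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for A x = b with preconditioner M.

    M_inv applies M^{-1}; a good M lowers the condition number and clusters
    the eigenvalues of M^{-1} A, which speeds up convergence.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p              # standard beta update for the new direction
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner as a simple example.
rng = np.random.default_rng(9)
Q = rng.random((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.random(50)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
print("residual norm:", np.linalg.norm(A @ x - b))
```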