Search results for: grasshopper optimization algorithm
1800 Oil Reservoir Asphaltene Precipitation Estimation during CO2 Injection
Authors: I. Alhajri, G. Zahedi, R. Alazmi, A. Akbari
Abstract:
In this paper, an Artificial Neural Network (ANN) was developed to predict Asphaltene Precipitation (AP) during the injection of carbon dioxide into crude oil reservoirs. Experimental data from six different oil fields were collected. Seventy percent of the data was used to develop the ANN model, and different ANN architectures were examined. A network trained with the Trainlm (Levenberg-Marquardt) algorithm was found to be the best network for estimating the AP. To check the validity of the proposed model, it was used to predict the AP for the remaining thirty percent of the data, which had been held out of model development. The Mean Square Error (MSE) of the prediction was 0.0018, which confirms the excellent prediction capability of the proposed model. In the second part of this study, the ANN model predictions were compared with modified Hirschberg model predictions, and the ANN was found to provide more accurate estimates. Finally, the proposed model was employed to examine the effect of different operating parameters on the AP during gas injection. It was found that the AP is most sensitive to the reservoir temperature, and that increasing the carbon dioxide concentration in the liquid phase increases the AP.
Keywords: artificial neural network, asphaltene, CO2 injection, Hirschberg model, oil reservoirs
Procedia PDF Downloads 364
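The workflow described above can be sketched as follows on synthetic data. The features (temperature, pressure, CO2 mole fraction), their ranges, and the toy response are our assumptions, and scikit-learn's 'lbfgs' solver is only a rough stand-in for MATLAB's Trainlm.

```python
# Minimal sketch of a 70/30 split ANN regression with a held-out MSE check.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Assumed features: temperature (K), pressure (MPa), CO2 mole fraction
X = rng.uniform([320, 10, 0.1], [420, 40, 0.8], size=(200, 3))
y = 0.002 * X[:, 0] - 0.01 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 200)  # toy AP

# 70% of the data develops the model, 30% is held out for validation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000).fit(X_tr, y_tr)
print("held-out MSE:", mean_squared_error(y_te, ann.predict(X_te)))
```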
1799 Solving Dimensionality Problem and Finding Statistical Constructs on Latent Regression Models: A Novel Methodology with Real Data Application
Authors: Sergio Paez Moncaleano, Alvaro Mauricio Montenegro
Abstract:
This paper presents a novel statistical methodology for measuring and finding constructs in latent regression analysis. The approach uses the qualities of factor analysis on binary data, with interpretations based on Item Response Theory (IRT). In addition, building on the fundamentals of submodel theory and on a convergence of many ideas from IRT, we propose an algorithm that not only addresses the dimensionality problem (nowadays an open discussion) but also opens a new research field that promises fairer and more realistic qualifications for examinees, and a revolution in IRT and educational research. Finally, the methodology is applied to a real data set, presenting impressive results in coherence, speed, and precision. Acknowledgments: This research was financed by Colciencias through the project 'Multidimensional Item Response Theory Models for Practical Application in Large Test Designed to Measure Multiple Constructs', and both authors belong to the SICS Research Group at Universidad Nacional de Colombia.
Keywords: item response theory, dimensionality, submodel theory, factorial analysis
Procedia PDF Downloads 372
1798 Optimizing Scribe Resourcing to Improve Hospitalist Workloads
Authors: Ahmed Hamzi, Bryan Norman
Abstract:
Having scribes help document patient records in electronic health record systems can improve hospitalists' productivity, but hospitals need to determine the optimum number of scribes to hire to maximize scribe cost-effectiveness. Scribe attendance uncertainty due to planned and unplanned absences is a primary challenge. This paper presents simulation and analytical models to determine the optimum number of scribes for a hospital to hire. Because scribe staffing practices vary from one context to another, different staffing scenarios are considered: scenarios in which extra attending scribes do or do not provide additional value, and scenarios that utilize on-call scribes to fill in for potentially absent scribes. These staffing scenarios are assessed for scribe revenue ratios (the ratio of the value of a scribe relative to scribe costs) ranging from 100% to 300%. The optimum solution depends on the absenteeism rate, the revenue ratio, and the desired service level. The analytical model obtains solutions more easily and faster than the simulation model, but the simulation model is more accurate; therefore, the analytical model's solutions are compared with the simulation model's solutions regarding both the number of scribes hired and cost-effectiveness. Additionally, an Excel tool has been developed to help decision-makers easily obtain solutions using the analytical model.
Keywords: hospitalists, workload, optimization cost, economic analysis
Procedia PDF Downloads 45
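A toy Monte Carlo version of the hiring question is sketched below. The demand level, absenteeism rate, and revenue ratio are invented, and the value rule reflects the scenario in which extra attending scribes add no value.

```python
# Each hired scribe independently shows up with probability (1 - absenteeism);
# value is earned per useful scribe, cost is paid for everyone on payroll.
import numpy as np

def expected_net_value(n_hired, demand=10, absenteeism=0.1,
                       revenue_ratio=2.0, n_days=100_000, seed=0):
    rng = np.random.default_rng(seed)
    present = rng.binomial(n_hired, 1 - absenteeism, size=n_days)
    useful = np.minimum(present, demand)   # extra attending scribes add no value
    value = revenue_ratio * useful.mean()  # daily value, in scribe-cost units
    return value - n_hired                 # minus the cost of everyone hired

best = max(range(0, 16), key=expected_net_value)
print("optimal number of scribes to hire:", best)
```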
1797 The Effect of Damping Treatment for Noise Control on Offshore Platforms Using Statistical Energy Analysis
Authors: Ji Xi, Cheng Song Chin, Ehsan Mesbahi
Abstract:
Structure-borne noise is an important aspect of the offshore platform sound field. It can be generated directly by mechanical forces from vibrating machinery, indirectly by excitation of the structure, or by excitation from incident airborne noise. Therefore, limiting the transmission of vibration energy throughout the offshore platform is the key to controlling structure-borne noise, and this is usually done by introducing damping treatment to the steel structures. Two types of damping treatment used on board are presented. By conducting a statistical energy analysis (SEA) simulation on a jack-up rig, the noise levels in the source room, the neighboring rooms, and remote living quarter cabins are compared before and after the damping treatments are applied. The results demonstrate that in the rooms neighboring the source and in the living quarter area there is a significant noise reduction with the damping treatment applied, whereas in the source room, where airborne sound predominates over structure-borne sound, the impact is not obvious. The subsequent optimization design of damping treatment on the offshore platform can then be carried out, enabling acoustic professionals to implement noise control during the design stage for offshore crews' hearing protection and habitation comfort.
Keywords: statistical energy analysis, damping treatment, noise control, offshore platform
Procedia PDF Downloads 555
1796 Representativity Based Wasserstein Active Regression
Authors: Benjamin Bobbia, Matthias Picard
Abstract:
In recent years, active learning methodologies based on the representativity of the data have appeared more promising for limiting overfitting. We present a query methodology for regression that uses the Wasserstein distance to measure how representative our labelled dataset is of the global distribution. This work makes crucial use of GroupSort neural networks, which bring a double advantage: the Wasserstein distance can be exactly expressed in terms of such networks, and explicit bounds for their size and depth can be provided together with rates of convergence. Heterogeneity of the dataset is also taken into account by weighting the Wasserstein distance with the approximation error from the previous step of active learning. This approach leads to a reduction of overfitting and to high prediction performance after only a few query steps. After detailing the methodology and the algorithm, an empirical study is presented in order to investigate the range of our hyperparameters. The performance of this method is compared, in terms of the number of queries needed, with other classical and recent query methods on several UCI datasets.
Keywords: active learning, Lipschitz regularization, neural networks, optimal transport, regression
Procedia PDF Downloads 80
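In one dimension the Wasserstein distance has a closed form available in scipy, which permits a toy illustration of the representativity idea (our assumption of the flavor of the method, not the authors' code):

```python
# Score a candidate query point by how much it brings the labelled set
# closer to the global distribution under the 1D Wasserstein distance.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
pool = rng.normal(0, 1, 1000)                  # unlabelled pool (global distribution)
labelled = rng.choice(pool, 20, replace=False)

def gain(x):
    return wasserstein_distance(pool, labelled) - \
           wasserstein_distance(pool, np.append(labelled, x))

candidates = rng.choice(pool, 50, replace=False)
best = candidates[np.argmax([gain(x) for x in candidates])]
print("next point to query:", best)
```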
1795 Silymarin Loaded Mesoporous Silica Nanoparticles: Preparation, Optimization, Pharmacodynamic and Oral Multi-Dose Safety Assessment
Authors: Sarah Nasr, Maha M. A. Nasra, Ossama Y. Abdallah
Abstract:
The present work aimed to prepare Silymarin-loaded MCM-41-type mesoporous silica nanoparticles (MSNs) and to assess the effect of the system's solubility enhancement on the pharmacodynamic performance of Silymarin as a hepatoprotective agent. MSNs, prepared by a soft-templating technique, were loaded with Silymarin and characterized for particle size, zeta potential, surface properties, DSC, and XRPD. DSC and specific surface area data confirmed deposition of Silymarin in an amorphous state in the MSNs' pores. In-vitro drug dissolution testing displayed an enhanced dissolution rate of Silymarin upon loading on MSNs. High-dose Acetaminophen was then used to inflict hepatic injury upon albino male Wistar rats simultaneously receiving either free Silymarin, Silymarin-loaded MSNs, or blank MSNs. Plasma AST, ALT, albumin, and total protein, as well as liver homogenate content of TBARs and LDH, were assessed for all animal groups as measures of antioxidant drug action. Results showed a significant superiority of Silymarin-loaded MSNs over the free drug in almost all parameters, while prolonged administration of blank MSNs showed no evident toxicity in rats.
Keywords: mesoporous silica nanoparticles, safety, solubility enhancement, silymarin
Procedia PDF Downloads 332
1794 Double Encrypted Data Communication Using Cryptography and Steganography
Authors: Adine Barett, Jermel Watson, Anteneh Girma, Kacem Thabet
Abstract:
In information security, secure communication of data across networks has always been a problem at the forefront. Transfer of information across networks is susceptible to exploitation by attackers engaging in malicious activity. In this paper, we leverage steganography and cryptography to create a layered security solution that protects the information being transmitted. The first layer of security uses cryptographic techniques to scramble the information so that it cannot be deciphered even if the steganography-based layer is compromised; the second layer relies on steganography to disguise the encrypted information so that it cannot be seen. We consider three cipher methods in the cryptography layer, namely the Playfair cipher, the Blowfish cipher, and the Hill cipher. The encrypted message is then passed to the least significant bit (LSB) steganography algorithm for embedding. Both approaches are combined efficiently to help secure information in transit over a network. This multi-layered scheme is a solution that will benefit cloud platforms, social media platforms, and networks that regularly transfer private information, such as banks and insurance companies.
Keywords: cryptography, steganography, layered security, cipher, encryption
Procedia PDF Downloads 85
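As a concrete illustration of the second layer, the sketch below (our own toy code, not the paper's implementation) embeds an already-encrypted byte string into the least significant bits of an image array and recovers it; the cover image and payload are invented.

```python
import numpy as np

def lsb_embed(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()
    assert bits.size <= flat.size, "cover image too small for payload"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)

def lsb_extract(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
ciphertext = b"output of the cipher layer"   # stands in for Playfair/Blowfish/Hill output
stego = lsb_embed(cover, ciphertext)
assert lsb_extract(stego, len(ciphertext)) == ciphertext
```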
1793 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction
Authors: Sajid Abas, Jon Pyo Hong, Jung-Ryun Le, Seungryong Cho
Abstract:
Computed tomography and laminography are heavily investigated in a compressive sensing based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are now actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, i.e., sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that, in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data is uniformly sampled.
Keywords: computed tomography, computed laminography, compressive sensing, low-dose
Procedia PDF Downloads 464
1792 Challenges and Opportunities for Implementing Integrated Project Delivery Method in Public Sector Construction
Authors: Ahsan Ahmed, Ming Lu, Syed Zaidi, Farhan Khan
Abstract:
The Integrated Project Delivery (IPD) method has been proposed as the solution for tackling complexity and fragmentation in the real world while addressing the construction industry's growing needs for productivity and sustainability. Although the private sector has taken the initiative in implementing IPD and has taken advantage of new technology such as building information modeling (BIM) in delivering projects, IPD remains less known and rarely used in public sector construction. The focus of this paper is the use of IPD in public sector projects, potentially complemented by analytical functionalities for workface planning and construction-oriented design enabled by recent research advances in BIM. Experience and lessons learned from implementing IPD in the private sector and in BIM-based construction automation research play a vital role in reducing barriers and eliminating issues in connection with project delivery in the public sector. The paper elaborates on the issues, challenges, contractual relationships, and interactions throughout the planning, design, and construction phases in the context of implementing IPD on construction projects in the public sector. A slab construction case is used as a 'sandbox' model to elaborate (1) the ideal way of communication, integration, and collaboration among all the parties involved in project delivery in planning and (2) the execution of projects using IPD principles together with optimization and simulation analyses.
Keywords: integrated project delivery, IPD, building information modeling, BIM
Procedia PDF Downloads 202
1791 Component Based Testing Using Clustering and Support Vector Machine
Authors: Iqbaldeep Kaur, Amarjeet Kaur
Abstract:
Software reusability is an important part of software development, so component-based software development in the case of software testing has gained a lot of practical importance in the field of software engineering, both from academic researchers and from a software development industry perspective. Finding test cases for efficient reuse is one of the important problems addressed by researchers. Clustering reduces the search space and enables reuse of test cases by grouping similar entities according to requirements, ensuring reduced time complexity as it reduces the search time for retrieving test cases. In this research paper, we propose an approach for reusability of test cases based on unsupervised learning, using k-means together with a Support Vector Machine. We have designed an algorithm for clustering requirement and test-case documents according to their tf-idf vector space; the output is a set of highly cohesive pattern groups.
Keywords: software testing, reusability, clustering, k-mean, SVM
Procedia PDF Downloads 430
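A minimal version of the clustering step can be sketched as follows; the toy documents and the choice of two clusters are our assumptions.

```python
# Requirement and test-case documents embedded in a tf-idf vector space
# and grouped with k-means, as described in the abstract above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "login requirement: user authenticates with password",
    "test case: verify login fails with wrong password",
    "report requirement: export monthly sales as PDF",
    "test case: verify exported PDF contains sales totals",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # documents about the same requirement should share a label
```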
1790 Constructing White-Box Implementations Based on Threshold Shares and Composite Fields
Authors: Tingting Lin, Manfred von Willich, Dafu Lou, Phil Eisen
Abstract:
A white-box implementation of a cryptographic algorithm is a software implementation intended to resist extraction of the secret key by an adversary. To date, most white-box techniques are used to protect block cipher implementations. However, a large proportion of white-box implementations have been proven vulnerable to affine equivalence attacks and other algebraic attacks, as well as to differential computation analysis (DCA). In this paper, we identify a class of block ciphers for which we propose a method of constructing white-box implementations. Our method is based on threshold implementations and operations in composite fields. The resulting implementations consist of lookup tables and a few exclusive-OR operations. All intermediate values (inputs and outputs of the lookup tables) are masked. The threshold implementation makes the distribution of the masked values uniform and independent of the original inputs, and the operations in composite fields reduce the size of the lookup tables. The white-box implementations can provide resistance against algebraic attacks and DCA-like attacks.
Keywords: white-box, block cipher, composite field, threshold implementation
Procedia PDF Downloads 168
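As a toy illustration of the masking idea behind threshold shares (far simpler than the paper's actual construction, which computes on shares without ever unmasking), a byte can be split into XOR shares such that any proper subset of shares is uniformly random:

```python
import secrets

def share(x: int, n: int = 3):
    shares = [secrets.randbelow(256) for _ in range(n - 1)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]      # XOR of all shares recovers x

def unshare(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

secret = 0x3A
assert unshare(share(secret)) == secret
```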
1789 Internet of Things Edge Device Power Modelling and Optimization Simulator
Authors: Cian O'Shea, Ross O'Halloran, Peter Haigh
Abstract:
Wireless Sensor Networks (WSNs) are Internet of Things (IoT) edge devices. They are becoming widely adopted in many industries, including health care, building energy management, and condition monitoring. As the scale of WSN deployments increases, the cost and complexity of battery replacement and disposal become more significant and in time may become a barrier to adoption. Harvesting ambient energy provides a pathway to reducing dependence on batteries and in the future may lead to autonomously powered sensors. This work describes a simulation tool that enables the user to predict the battery life of a wireless sensor that utilizes energy harvesting to supplement the battery power. To create this simulator, all aspects of a typical WSN edge device were modelled, including the sensors, transceiver, and microcontroller, as well as the energy source components (batteries, solar cells, thermoelectric generators (TEGs), supercapacitors, and DC/DC converters). The tool allows the user to plug and play different pre-characterized devices as well as add user-defined devices. The goal of this simulation tool is to predict the lifetime of a device and the scope for extending it using ambient energy sources.
Keywords: wireless sensor network, IoT, edge device, simulation, solar cells, TEG, supercapacitor, energy harvesting
Procedia PDF Downloads 130
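The core energy balance such a simulator must evaluate can be sketched as follows; the rule (battery life follows from the average load current minus the average harvested current) and all parameter values are our illustrative assumptions, not the tool's actual models.

```python
def battery_life_days(capacity_mah: float, sleep_ua: float, active_ma: float,
                      duty_cycle: float, harvest_ua: float = 0.0) -> float:
    # Average load current, weighted by how often the node is awake.
    avg_load_ua = sleep_ua * (1 - duty_cycle) + active_ma * 1000 * duty_cycle
    net_ua = max(avg_load_ua - harvest_ua, 0.0)
    if net_ua == 0.0:
        return float("inf")                    # energy-autonomous operation
    return capacity_mah * 1000 / net_ua / 24   # uAh / uA -> hours -> days

print(battery_life_days(2400, sleep_ua=5, active_ma=20, duty_cycle=0.001))
print(battery_life_days(2400, sleep_ua=5, active_ma=20, duty_cycle=0.001,
                        harvest_ua=15))        # harvesting extends the lifetime
```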
1788 Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia
Authors: Oscar Javier Herrera Ochoa, Ivan Dario Romero Fonseca
Abstract:
This paper presents the design of a model for planning the distribution logistics operation. The significance of this work relies on its applicability to the analysis of small and medium enterprises (SMEs) of dry freight in Bogotá. The implementation consists of two stages: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which treats the transshipment operation, based on a combined load allocation model, as a classic transshipment model; in the second, the specific routing of that operation is obtained through the Clarke and Wright heuristic. As a result, an integral model is obtained to carry out the step-by-step planning of the distribution of dry freight for SMEs in Bogotá. In this manner, optimum assignments are established by utilizing transshipment centers, with the purpose of determining the specific routing based on the shortest distance traveled.
Keywords: transshipment model, mixed integer programming, savings algorithm, dry freight transportation
Procedia PDF Downloads 230
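For the routing stage, the Clarke and Wright savings heuristic can be sketched in a few lines; the four-node distance matrix is an invented toy instance with node 0 as the depot, merges are checked in one direction only, and capacity constraints are ignored for brevity.

```python
import itertools

d = [[0, 4, 5, 6],
     [4, 0, 2, 7],
     [5, 2, 0, 3],
     [6, 7, 3, 0]]                    # symmetric distances, node 0 = depot

routes = [[i] for i in range(1, 4)]   # start with one out-and-back route per customer
savings = sorted(((d[0][i] + d[0][j] - d[i][j], i, j)
                  for i, j in itertools.combinations(range(1, 4), 2)),
                 reverse=True)

# Merge routes in order of decreasing savings while both endpoints qualify.
for s, i, j in savings:
    ri = next((r for r in routes if r[-1] == i), None)
    rj = next((r for r in routes if r[0] == j), None)
    if s > 0 and ri is not None and rj is not None and ri is not rj:
        ri.extend(rj)
        routes.remove(rj)

print(routes)   # [[1, 2, 3]]: drive depot -> 1 -> 2 -> 3 -> depot
```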
1787 Reduction in Hot Metal Silicon through Statistical Analysis at G-Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, Santanu Mallick, Abhiram Jha, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
The quality of hot metal at any blast furnace is judged by its silicon content. Lower hot metal silicon not only enhances process efficiency at steel melting shops but also reduces hot metal costs. The hot metal produced at G Blast Furnace, Tata Steel Jamshedpur, has a significantly higher Si content than benchmark blast furnaces, mainly due to raw material quality inferior to that used in the benchmark furnaces. With minimum control over raw material quality, the only option left for controlling hot metal Si is optimizing the furnace parameters. Therefore, in order to identify the levers to reduce hot metal Si, data mining was carried out and multiple regression models were developed. The statistical analysis revealed that slag B3 {(CaO+MgO)/SiO2}, slag alumina, and hot metal temperature are the key controllable parameters affecting hot metal silicon. Contour plots were used to determine the optimum range of the levels identified through the statistical analysis. A trial plan was formulated to operate the relevant parameters at G Blast Furnace in the identified ranges to reduce hot metal silicon. This paper details the process followed and the subsequent 15% reduction in hot metal silicon at G Blast Furnace.
Keywords: blast furnace, optimization, silicon, statistical tools
Procedia PDF Downloads 223
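A hedged sketch of the data-mining step, a multiple regression of hot metal silicon on the three levers named above, might look as follows; the data is synthetic and the coefficients and ranges are illustrative, not plant values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
b3 = rng.uniform(0.9, 1.3, n)        # slag basicity B3 = (CaO+MgO)/SiO2
alumina = rng.uniform(18, 24, n)     # slag Al2O3, %
hmt = rng.uniform(1470, 1520, n)     # hot metal temperature, deg C
si = 2.5 - 1.2 * b3 + 0.02 * alumina + 0.004 * (hmt - 1470) + rng.normal(0, 0.05, n)

X = np.column_stack([b3, alumina, hmt])
model = LinearRegression().fit(X, si)
print(dict(zip(["B3", "alumina", "HMT"], model.coef_.round(4))))
```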
1786 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition
Authors: L. Hamsaveni, Navya Prakash, Suresha
Abstract:
Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis is adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded-Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images, so as to obtain an original document with complete information. In case the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. A special image storage format known as YCbCr is used as a tool to convert the grayscale image to RGB format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents, and handwritten image sketches in documents. The purpose of this research is to obtain an original document from a given set of degraded documents of the same source.
Keywords: grayscale image format, image fusing, RGB image format, SURF detection, YCbCr image format
Procedia PDF Downloads 377
1785 Towards Computational Fluid Dynamics Based Methodology to Accelerate Bioprocess Scale Up and Scale Down
Authors: Vishal Kumar Singh
Abstract:
Bioprocess development is a time-constrained activity aimed at harnessing the full potential of culture performance in an environment that is not natural to the cells. Even with the use of chemically defined media and feeds, a significant amount of time is devoted to identifying the apt operating parameters. In addition, the scale-up of these processes is often accompanied by a loss of antibody titer and product quality, which further delays commercialization of the drug product. In such a scenario, the disparity in culture performance is investigated by further experimentation at a smaller scale that is representative of the at-scale production bioreactors; these scale-down model developments are also time-intensive. In this study, a computational fluid dynamics based multi-objective scaling approach is illustrated to speed up process transfer. For the implementation of this approach, a transient multiphase water-air system was studied in Ansys CFX to visualize the air bubble distribution and volumetric mass transfer coefficient (kLa) profiles, followed by a design-of-experiments based parametric optimization approach to define the operational space. The proposed approach is completely in silico and requires minimal experimentation, thereby rendering high throughput to the overall process development.
Keywords: bioprocess development, scale up, scale down, computational fluid dynamics, multi-objective, Ansys CFX, design of experiments
Procedia PDF Downloads 82
1784 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography
Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway
Abstract:
This paper presents a highly secure data hiding technique using image cropping and least significant bit (LSB) steganography. Predefined secret-coordinate crops are extracted from the cover image, and the secret text message is divided into sections whose number equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique, with the embedding done in the cover image's color channels. The stego image is produced by reassembling the image and the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by an unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the embedding algorithm's CPU time. Experimental results confirm that the proposed technique is more secure than the traditional techniques.
Keywords: steganography, stego, LSB, crop
Procedia PDF Downloads 269
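Of the evaluation metrics listed, PSNR is simple enough to show inline; the sketch below computes it between a random cover and a stego image whose least significant bits were flipped, with the 8-bit peak value of 255 assumed.

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray) -> float:
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")             # identical images
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
stego = cover ^ (rng.random(cover.shape) < 0.5).astype(np.uint8)  # flip ~half the LSBs
print(round(psnr(cover, stego), 2))     # LSB-only changes keep PSNR above ~51 dB
```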
1783 2D Numerical Modeling for Induced Current Distribution in Soil under Lightning Impulse Discharge
Authors: Fawwaz Eniola Fajingbesi, Nur Shahida Midia, Elsheikh M. A. Elsheikh, Siti Hajar Yusoff
Abstract:
Empirical analysis of lightning-related phenomena in real time is extremely dangerous due to the relatively high electric discharge involved; hence the design and optimization of efficient grounding systems based on real-time empirical methods are impeded. Using numerical methods, the dynamics of complex systems can be modelled and solved as sets of linear and non-linear systems. In this work, the induced current distribution as a lightning strike traverses the soil has been numerically modelled in 2D axial symmetry and solved using the finite element method (FEM) in the COMSOL Multiphysics 5.2 AC/DC module. Stratified and non-stratified electrode systems were considered in the solved model, and the soil conductivity (σ) was varied between 10 and 58 mS/m. The results discussed are the electric field distribution, the current distribution, and the soil ionization phenomenon. It can be concluded that the electric field and current distributions are influenced by the injected electric potential and by the non-linearity in soil conductivity. The results from the numerical calculation also agree with previous laboratory-scale empirical results.
Keywords: current distribution, grounding systems, lightning discharge, numerical model, soil conductivity, soil ionization
Procedia PDF Downloads 312
1782 DFIG-Based Wind Turbine with Shunt Active Power Filter Controlled by Double Nonlinear Predictive Controller
Authors: Abderrahmane El Kachani, El Mahjoub Chakir, Anass Ait Laachir, Abdelhamid Niaaniaa, Jamal Zerouaoui, Tarik Jarou
Abstract:
This paper presents a wind turbine based on the doubly fed induction generator (DFIG) connected to the utility grid through a shunt active power filter (SAPF). The whole system is controlled by a double nonlinear predictive controller (DNPC). A Taylor series expansion is used to predict the outputs of the system, and the control law is calculated by optimization of the cost function. The first nonlinear predictive controller (NPC) is designed to ensure high-performance tracking of the rotor speed and to regulate the rotor current of the DFIG, while the second is designed to control the SAPF in order to compensate for the harmonics produced by the three-phase diode bridge supplied by a passive circuit (rd, Ld). As a result, we obtain sinusoidal waveforms of the stator voltage and stator current. The proposed nonlinear predictive controllers (NPCs) are validated via simulation on a 1.5 MW DFIG-based wind turbine connected to an SAPF. The results obtained appear satisfactory and promising.
Keywords: wind power, doubly fed induction generator, shunt active power filter, double nonlinear predictive controller
Procedia PDF Downloads 416
1781 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 was the year of the Artificial Intelligence explosion. AI technologies are becoming more and more mature, and most of the world's well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning realizes many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network and analyze the factors that determine the performance differences between the two distributed frameworks. Through the experimental analysis, we identify the overheads that could be further optimized. The main contribution is that the evaluation results provide further optimization directions for both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
Procedia PDF Downloads 211
1780 Oxidative Stress Markers in Sports Related to Training
Authors: V. Antevska, B. Dejanova, L. Todorovska, J. Pluncevic, E. Sivevska, S. Petrovska, S. Mancevska, I. Karagjozova
Abstract:
Introduction: The aim of this study was to optimize the laboratory oxidative stress (OS) markers in soccer players. Material and methods: In 37 soccer players (21±3 years old) and 25 control subjects (sedentary), plasma samples were taken for determination of d-ROMs (reactive oxygen metabolites) and NO (nitric oxide). The d-ROMs test was performed by measurement of hydroperoxide levels (Diacron, Italy). For NO determination, the method of enzymatic nitrate reduction with the Griess reagent was used (OXIS, USA). The parameters were measured after the training of the soccer players and compared with the control group. Training was considered as a maximal exercise treadmill test, with the criterion of maximum loading for each subject established as >95% of maximal heart rate. Results: The level of d-ROMs was increased in the soccer players vs. the control group, but no significant difference was noticed. After the training, d-ROMs in soccer players showed an increased value of 299±44 UCarr (p<0.05). NO showed increased levels in all soccer players vs. controls, but a significant difference was found after the training: 102±29 μmol (p<0.05). Conclusion: Based on these results, we suggest that measuring these OS markers in sports medicine may be useful for better estimation and evaluation of the training program. More oxidative stress markers should be studied to clarify optimization of the training intensity program.
Keywords: oxidative stress markers, soccer players, training, sport
Procedia PDF Downloads 447
1779 Syllogistic Reasoning with 108 Inference Rules While Case Quantities Change
Authors: Mikhail Zarechnev, Bora I. Kumova
Abstract:
A syllogism is a deductive inference scheme used to derive a conclusion from a set of premises. In a categorical syllogism there are only two premises, and every premise and conclusion is given in the form of a quantified relationship between two objects. The different orders of the objects in the premises give a classification known as figures. We have shown that the ordered combinations of 3 generalized quantifiers with the figures provide in total 108 syllogistic moods, which can be considered as different inference rules. The classical syllogistic system makes it possible to model human thought, and reasoning with syllogistic structures has always attracted the attention of cognitive scientists. Since automated reasoning is considered part of the learning subsystem of AI agents, the syllogistic system can be applied in this approach. Another application of the syllogistic system relates to inference mechanisms in Semantic Web applications. In this paper we propose a mathematical model and an algorithm for syllogistic reasoning. A model of iterative syllogistic reasoning for continuous flows of incoming data, based on case-based reasoning, and possible applications of the proposed system are also discussed.
Keywords: categorical syllogism, case-based reasoning, cognitive architecture, inference on the semantic web, syllogistic reasoning
Procedia PDF Downloads 411
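The count of 108 is consistent with one natural reading: each of the two premises and the conclusion carries one of the 3 generalized quantifiers (3^3 = 27 ordered combinations), across the 4 classical figures, giving 27 * 4 = 108. The enumeration below illustrates that reading; the quantifier labels are placeholders, not the authors' notation.

```python
from itertools import product

quantifiers = ["all", "most", "some"]   # 3 generalized quantifiers (assumed labels)
figures = [1, 2, 3, 4]                  # classical syllogistic figures

moods = [(q1, q2, qc, f)
         for (q1, q2, qc), f in product(product(quantifiers, repeat=3), figures)]
print(len(moods))                       # 108
```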
1778 Low Density Parity Check Codes
Authors: Kassoul Ilyes
Abstract:
The field of error-correcting codes has been revolutionized by the introduction of iteratively decoded codes. Among these, LDPC codes are now a preferred solution thanks to their remarkable performance and low complexity. The binary version of LDPC codes showed even better performance, although its decoding introduced greater complexity. This thesis studies the performance of binary LDPC codes using simplified weighted decisions. Information is transported between a transmitter and a receiver by digital transmission systems, either propagating over a radio channel or using a transmission medium such as a transmission line; the purpose of the transmission system is to carry the information from the transmitter to the receiver as reliably as possible. These codes had not generated enough interest within the coding theory community, a neglect that lasted until the introduction of turbo codes and the iterative principle. It was then proposed to adopt Pearl's Belief Propagation (BP) algorithm for decoding these codes. Subsequently, Luby introduced irregular LDPC codes, characterized by an irregular parity-check matrix. Finally, we study simplifications of binary LDPC codes and propose a method that makes the exact calculation of the APP simpler, leading to a simpler implementation of the system.
Keywords: LDPC, parity check matrix, 5G, BER, SNR
Procedia PDF Downloads 154
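At the heart of any LDPC code is a sparse parity-check matrix H, with a word c being a codeword exactly when Hc = 0 (mod 2); the toy H below is illustrative, far smaller and denser than a practical code.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def is_codeword(c: np.ndarray) -> bool:
    return not np.any(H @ c % 2)    # all parity checks satisfied

print(is_codeword(np.array([1, 1, 0, 0, 1, 1], dtype=np.uint8)))  # True
```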
1777 A Scalable Model of Fair Socioeconomic Relations Based on Blockchain and Machine Learning Algorithms-1: On Hyperinteraction and Intuition
Authors: Merey M. Sarsengeldin, Alexandr S. Kolokhmatov, Galiya Seidaliyeva, Alexandr Ozerov, Sanim T. Imatayeva
Abstract:
This series of interdisciplinary studies is an attempt to investigate and develop a scalable model of fair socioeconomic relations on the basis of blockchain, using positive psychology techniques and machine learning algorithms for data analytics. In this particular study, we use a hyperinteraction approach and intuition to investigate their influence on the 'wisdom of crowds' via a mobile application created for the purpose of this research. Along with the public blockchain and a private Decentralized Autonomous Organization (DAO), which we elaborated on the basis of the Ethereum blockchain, a model of fair financial relations among the members of the DAO was developed. We developed a smart contract, the so-called Fair Price Protocol, and used it for the implementation of the model. The data obtained from the mobile application were analyzed by ML algorithms, and the model was tested on football matches.
Keywords: blockchain, Naïve Bayes algorithm, hyperinteraction, intuition, wisdom of crowd, decentralized autonomous organization
Procedia PDF Downloads 169
1776 Clustering Performance Analysis using New Correlation-Based Cluster Validity Indices
Authors: Nathakhun Wiroonsri
Abstract:
There are various cluster validity measures used for evaluating clustering results, and one of the main objectives of using them is to seek the optimal, unknown number of clusters. Some measures work well for clusters with different densities, sizes, and shapes. Yet one weakness these validity measures share is that they sometimes provide only one clear optimal number of clusters; that number is actually unknown, and there might be more than one potential sub-optimal option that a user may wish to choose, depending on the application. We develop two new cluster validity indices based on the correlation between the actual distance between a pair of data points and the distance between the centroids of the clusters in which the two points are located. Our proposed indices consistently yield several peaks at different numbers of clusters, which overcomes the weakness stated above. Furthermore, the introduced correlation can also be used for evaluating the quality of a selected clustering result. Several experiments in different scenarios, including the well-known iris data set and a real-world marketing application, have been conducted to compare the proposed validity indices with several well-known ones.
Keywords: clustering algorithm, cluster validity measure, correlation, data partitions, iris data set, marketing, pattern recognition
Procedia PDF Downloads 103
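The core quantity described above can be sketched as follows; how the authors fold this correlation into their two indices is not shown here, only the correlation itself, computed on k-means labels for the iris data.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
centroids = np.array([X[labels == k].mean(axis=0) for k in range(3)])

# For every pair of points: actual distance vs. distance between the
# centroids of the clusters the two points belong to.
pair_d, cent_d = [], []
for i, j in combinations(range(len(X)), 2):
    pair_d.append(np.linalg.norm(X[i] - X[j]))
    cent_d.append(np.linalg.norm(centroids[labels[i]] - centroids[labels[j]]))

print(np.corrcoef(pair_d, cent_d)[0, 1])  # higher -> partition fits the geometry
```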
1775 Intelligent Decision Support for Wind Park Operation: Machine-Learning Based Detection and Diagnosis of Anomalous Operating States
Authors: Angela Meyer
Abstract:
The operation and maintenance costs of wind parks make up a major fraction of a park's overall lifetime cost. To minimize the cost and risk involved, an optimal operation and maintenance strategy requires continuous monitoring and analysis. To facilitate this, we present a decision support system that automatically scans the stream of telemetry sensor data generated by the turbines. By learning decision boundaries and normal reference operating states using machine learning algorithms, the decision support system can detect anomalous operating behavior in individual wind turbines and diagnose the turbine sub-systems involved. Operating personnel can be alerted if a normal operating state boundary is exceeded. The presented decision support system and method are applicable to any turbine type and manufacturer providing telemetry data of the turbine operating state. We demonstrate the successful detection and diagnosis of anomalous operating states in a case study at a German onshore wind park comprised of Vestas V112 turbines.
Keywords: anomaly detection, decision support, machine learning, monitoring, performance optimization, wind turbines
Procedia PDF Downloads 167
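A hedged sketch of the detection idea on turbine telemetry follows; the paper does not name its algorithm, so an IsolationForest over (wind speed, power, temperature) tuples trained on synthetic data is our stand-in.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
wind = rng.uniform(3, 12, 2000)
power = 150 * wind ** 2 + rng.normal(0, 500, 2000)   # toy power curve
temp = rng.normal(55, 3, 2000)                       # gearbox temperature
X = np.column_stack([wind, power, temp])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
# A turbine producing far too little power for the wind it sees, running hot:
print(detector.predict([[10.0, 2000.0, 80.0]]))      # -1 means anomalous
```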
1774 Despiking of Turbulent Flow Data in Gravel Bed Stream
Authors: Ratul Das
Abstract:
The present experimental study examines the decontamination of instantaneous velocity fluctuations captured by an Acoustic Doppler Velocimeter (ADV) in gravel-bed streams in order to ascertain near-bed turbulence at low Reynolds numbers. Interference between incident and reflected pulses produces spikes in the ADV data, especially in the near-bed flow zone, and therefore filtering the data is essential. Nortek's Vectrino four-receiver ADV probe was used to capture the instantaneous three-dimensional velocity fluctuations over a non-cohesive bed. A spike removal algorithm based on the acceleration threshold method was applied to assess the bed roughness and its influence on velocity fluctuations and velocity power spectra in the carrier fluid. Velocity power spectra of signals despiked with the best combination of velocity threshold (VT) and acceleration threshold (AT) are proposed; they show a satisfactory fit with the Kolmogorov '-5/3 scaling law' in the inertial sub-range. Also, velocity distributions below the roughness crest level fairly follow a third-degree polynomial series.
Keywords: acoustic Doppler velocimeter, gravel-bed, spike removal, Reynolds shear stress, near-bed turbulence, velocity power spectra
Procedia PDF Downloads 299
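A simple version of acceleration-threshold despiking can be sketched as follows; the thresholds and the linear-interpolation refill are illustrative choices, not the study's exact settings.

```python
import numpy as np

def despike(u: np.ndarray, dt: float, lam: float = 1.5, vt: float = 0.5) -> np.ndarray:
    g = 9.81
    acc = np.gradient(u, dt)                              # local acceleration
    bad = (np.abs(acc) > lam * g) | (np.abs(u - u.mean()) > vt)
    good = ~bad
    clean = u.copy()
    clean[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), u[good])
    return clean

rng = np.random.default_rng(0)
u = 0.3 + 0.02 * rng.standard_normal(1000)                # streamwise velocity, m/s
u[[100, 400, 401, 750]] += [0.9, -0.8, 0.7, 1.1]          # synthetic Doppler spikes
print(np.abs(despike(u, dt=0.005) - 0.3).max())           # spikes removed
```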
1773 Realistic Testing Procedure of Power Swing Blocking Function in Distance Relay
Authors: Farzad Razavi, Behrooz Taheri, Mohammad Parpaei, Mehdi Mohammadi Ghalesefidi, Siamak Zarei
Abstract:
As one of the major problems in protecting large power systems, power swing and its effect on distance relays have caused a lot of damage to energy transfer systems in many parts of the world. Power swing has therefore gained the attention of many researchers, which has led to the invention of different methods for power swing detection. The power swing detection algorithm is highly important in a distance relay, but protection relays must also satisfy general requirements such as correct fault detection, response rate, and minimization of disturbances in the power system. To ensure these requirements are met, protection relays need different tests during the development, setup, maintenance, configuration, and troubleshooting steps. This paper covers the power swing scheme of a modern numerical protection relay, the 7SA522, to address the effect of different fault types on the power swing blocking function. In this study, it is shown that different fault types occurring during a power swing lead to different unblocking times for the distance relay.
Keywords: power swing, distance relay, power system protection, relay test, transient in power system
Procedia PDF Downloads 385
1772 Voice and Head Controlled Intelligent Wheelchair
Authors: Dechrit Maneetham
Abstract:
The aim of this paper was to design a voice- and head-controlled electric power wheelchair (EPW): a novel control system for quadriplegics with voice, head, and neck mobility. Head movement has been used as a control interface for people with motor impairments in a range of applications. Acquiring measurements from the module is simplified through a synchronous motor, with the axis measuring two directions, namely x and y. At the same time, patients can control the motorized wheelchair using voice signals (forward, backward, turn left, turn right, and stop) given by themselves. The DC motor is modelled for speed control, with the PID parameters selected using a genetic algorithm. An experimental set-up was constructed, consisting of a microcontroller as the controller, a DC-motor-driven EPW, and feedback elements. The paper presents tuning methods for the parameters of a pulse width modulation (PWM) control system. A speed controller has been designed successfully for the closed loop of the DC motor, so that the motor runs very close to the reference speed and angle. The intelligent wheelchair can be used to ensure that the direction of travel follows the person's voice and head commands, alongside conventional direction and speed control.
Keywords: wheelchair, quadriplegia, rehabilitation, medical devices, speed control
Procedia PDF Downloads 540
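A minimal discrete PID speed loop of the kind described, driving a toy first-order DC motor model through a clamped PWM duty cycle, is sketched below; the gains and motor constants are invented, not the values tuned by the paper's genetic algorithm.

```python
kp, ki, kd = 0.05, 0.5, 0.0005        # PID gains (invented, not the GA-tuned values)
dt, tau, k_motor = 0.01, 0.15, 120.0  # step (s), motor time constant (s), rad/s per unit duty

speed, integral, prev_err = 0.0, 0.0, 0.0
setpoint = 50.0                       # reference speed, rad/s

for _ in range(1000):
    err = setpoint - speed
    integral += err * dt
    duty = kp * err + ki * integral + kd * (err - prev_err) / dt
    duty = min(max(duty, 0.0), 1.0)   # PWM duty cycle is physically clamped to [0, 1]
    prev_err = err
    speed += dt * (k_motor * duty - speed) / tau   # first-order motor response

print(round(speed, 1))                # settles near the 50 rad/s reference
```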
1771 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data
Authors: Fanqiang Kong, Chending Bian
Abstract:
Sparse unmixing is a promising semisupervised approach based on the assumption that the observed pixels of a hyperspectral image can be expressed as linear combinations of only a few pure spectral signatures (endmembers) from an available spectral library. However, finding the optimal subset of endmembers for the observed data in a large standard spectral library, without considering spatial information, remains a great challenge for sparse unmixing. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, the non-local simultaneous sparse representation method for endmember selection in sparse unmixing is used to find the optimal subset of endmembers for the sets of similar image patches in the hyperspectral image. Then, the non-local means method, as a regularizer for the abundance estimation in sparse unmixing, is used to exploit the non-local self-similarity of the abundance image. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy.
Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means
Procedia PDF Downloads 245