Search results for: crow search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5210


1790 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. Unacceptably low accuracy, below 50%, undermines the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer approach and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification against a single SNR threshold, with two labels: less than, or greater than or equal to, that threshold. Together, the multiple CNNs effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal-modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves individual SNR prediction accuracy of 92%, composite accuracy of 70%, and prediction convergence one order of magnitude faster than traditional estimation.
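
A minimal sketch of the divide-and-conquer fusion described above: each binary classifier answers "is SNR ≥ t dB?" for one threshold t, and the composite estimate is the largest threshold still answered yes. The per-threshold "classifiers" here are simple comparisons standing in for the paper's CNNs; the 4 dB threshold spacing is an assumption for illustration.

```python
# Thresholds spanning the paper's range, -20 ... 32 dB (spacing assumed).
THRESHOLDS_DB = list(range(-20, 33, 4))

def binary_votes(true_snr_db):
    """Simulate the per-threshold binary decisions a bank of CNNs would emit."""
    return [true_snr_db >= t for t in THRESHOLDS_DB]

def fuse(votes):
    """Composite SNR class: the largest threshold whose vote is True."""
    best = THRESHOLDS_DB[0]
    for t, vote in zip(THRESHOLDS_DB, votes):
        if vote:
            best = t
    return best

print(fuse(binary_votes(10)))  # -> 8, the largest threshold not above 10 dB
```

In the paper's setting each vote would come from a trained CNN rather than a comparison, but the fusion step is the same.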

Keywords: classification, CNN, deep learning, prediction, SNR

Procedia PDF Downloads 134
1789 On an Approach for Rule Generation in Association Rule Mining

Authors: B. Chandra

Abstract:

In association rule mining, much attention has been paid to developing algorithms for large (frequent/closed/maximal) itemsets, but very little to improving the performance of rule generation algorithms. Rule generation is an important part of association rule mining. In this paper, a novel approach named NARG (Association Rule using Antecedent Support) is proposed for rule generation; it uses a memory-resident data structure named FCET (Frequent Closed Enumeration Tree) to find frequent/closed itemsets. In addition, the computational speed of NARG is enhanced by giving priority to rules that have lower antecedent support. Comparative performance evaluation of NARG against a fast association rule mining algorithm for rule generation has been carried out on synthetic datasets and real-life datasets (taken from the UCI Machine Learning Repository). Performance analysis shows that NARG is computationally faster than the existing rule generation algorithms.
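
An illustrative sketch of the ordering idea above (not the paper's NARG/FCET implementation): rules generated from one frequent itemset are ranked by antecedent support, since confidence = support(itemset) / support(antecedent), so lower antecedent support yields higher-confidence rules. The support counts are invented.

```python
from itertools import combinations

# Hypothetical support counts for items A, B and the itemset {A, B}.
support = {
    frozenset("A"): 6, frozenset("B"): 4, frozenset("AB"): 3,
}

def rules(itemset, min_conf=0.5):
    """Generate (antecedent, consequent, confidence) rules from one itemset."""
    out = []
    items = sorted(itemset)
    for r in range(1, len(items)):
        for ante in combinations(items, r):
            a = frozenset(ante)
            conf = support[frozenset(itemset)] / support[a]
            if conf >= min_conf:
                out.append((set(a), set(itemset) - set(a), conf))
    # Lower antecedent support first: higher-confidence rules surface early.
    out.sort(key=lambda rule: support[frozenset(rule[0])])
    return out

print(rules("AB"))
```

With these counts the rule B -> A (confidence 0.75) is emitted before A -> B (confidence 0.5), matching the antecedent-support ordering.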

Keywords: knowledge discovery, association rule mining, antecedent support, rule generation

Procedia PDF Downloads 325
1788 Epstein-Barr Virus-associated Diseases and TCM Syndromes Types: In Search for Correlation

Authors: Xu Yifei, Le Yining, Yang Qingluan, Tu Yanjie

Abstract:

Objective: This study investigates the distribution of Traditional Chinese Medicine (TCM) syndromes and syndrome elements in Epstein-Barr virus (EBV)-associated diseases and explores the relations between TCM syndromes or syndrome elements and laboratory indicators of these diseases. Methods: A cross-sectional study of 70 patients with EBV infection is described. We assessed the diagnostic information and laboratory indicators of these patients from Huashan Hospital, affiliated to Fudan University, between November 2017 and July 2019. Disease diagnosis and syndrome differentiation were based on the diagnostic criteria of EBV-associated diseases and the theory of TCM, respectively. Confidence correlation analysis, logistic regression analysis, cluster analysis, and Sankey diagrams were used to analyze the correlations in the data. Results: The differentiation of the four primary TCM syndromes in the collected patients correlated with indexes of immune function, liver function, inflammation, and anemia, most notably the relationship between Qifen syndrome and a high lactate dehydrogenase level. The 11 common TCM syndrome elements were associated with an increased CD3+ T cell rate, low hemoglobin level, high procalcitonin level, high lactate dehydrogenase level, and low albumin level. Conclusion: The changes in immune function indexes, procalcitonin, and liver function-related indexes in patients with EBV-associated diseases were consistent with the evolution of TCM syndromes. This study provides a reference for judging the pathological stages of these diseases, predicting their prognosis, and guiding subsequent treatment strategies based on TCM syndrome type.

Keywords: EBV-associated diseases, traditional Chinese medicine syndrome, syndrome element, diagnostics

Procedia PDF Downloads 105
1787 Power Management Strategy for Solar-Wind-Diesel Stand-Alone Hybrid Energy System

Authors: Md. Aminul Islam, Adel Merabet, Rachid Beguenane, Hussein Ibrahim

Abstract:

This paper presents a simulation and mathematical model of a stand-alone solar-wind-diesel hybrid energy system (HES). A power management system is designed for the multiple energy resources in the stand-alone hybrid energy system. Both the solar photovoltaic and the wind energy conversion systems include maximum power point tracking (MPPT), voltage regulation, and basic power electronic interfaces. An additional diesel generator is included to support the stand-alone system and improve its reliability when renewable energy sources are not available. A power management strategy is introduced to distribute the generated power among resistive load banks. Frequency regulation is implemented with a conventional phase-locked loop (PLL) system. The power management algorithm was implemented in Matlab®/Simulink® to simulate the results.
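
A minimal dispatch sketch of the power-management idea above: serve the load from PV first, then wind, and run the diesel generator only for any shortfall. The merit order and the numbers are assumptions for illustration, not the paper's controller.

```python
def dispatch(load_kw, pv_kw, wind_kw, diesel_max_kw):
    """Return (pv_used, wind_used, diesel_used, unmet) in kW."""
    pv = min(pv_kw, load_kw)                       # renewables first
    wind = min(wind_kw, load_kw - pv)
    diesel = min(diesel_max_kw, load_kw - pv - wind)  # diesel covers shortfall
    unmet = load_kw - pv - wind - diesel
    return pv, wind, diesel, unmet

# 10 kW load, 4 kW PV and 3 kW wind available: diesel supplies the 3 kW gap.
print(dispatch(load_kw=10.0, pv_kw=4.0, wind_kw=3.0, diesel_max_kw=5.0))
```

A real HES controller would add MPPT dynamics, storage, and frequency/voltage regulation on top of such a merit-order rule.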

Keywords: solar photovoltaic, wind energy, diesel engine, hybrid energy system, power management, frequency and voltage regulation

Procedia PDF Downloads 454
1786 Expanding the Therapeutic Utility of Curcumin

Authors: Azza H. El-Medany, Hanan H. Hagar, Omnia A. Nayel, Jamila H. El-Medany

Abstract:

In the search for drugs that can target the cancer cell micro-environment as well as halt malignant cellular transformation, the natural dietary phytochemical curcumin was assessed in a DMH-induced colorectal cancer rat model. The study enrolled 50 animals divided into a control group (n=10) and a DMH-induced colorectal cancer control group (n=20) (20 mg/kg body weight for 28 weeks) versus a curcumin-treated group (n=20) (160 mg/kg daily oral suspension for a further 8 weeks). Treatment with curcumin significantly decreased the percentage of aberrant crypt foci (ACF) and tended to normalize the histological changes induced by DMH in adenomatous and stromal cells. The drug also significantly elevated GSH and significantly reduced most of the accompanying biochemical elevations (namely MDA, TNF-α, TGF-β, and COX-2) observed in DMH-induced colonic carcinomatous tissue, reverting MDA, COX-2, and TGF-β back to near normal, as evidenced by their being non-significantly altered compared with normal controls. The only exception was PAF, which was insignificantly altered by the drug. Taken together, curcumin possesses the potential to halt some of the orchestrated cross-talk between cancerous transformation and its micro-environmental niche that contributes to cancer initiation, progression, and metastasis in this experimental colon cancer model. Extending these merits of a drug with an established safety profile awaits the final results of ongoing clinical trials before curcumin can be added to the therapeutic armamentarium of anticancer therapy.

Keywords: curcumin, dimethyl hydralazine, aberrant crypt foci, malondialdehyde, reduced glutathione, cyclooxygenase-2, tumour necrosis factor-alpha, transforming growth factor-beta, platelet activating factor

Procedia PDF Downloads 298
1785 The Nexus between Downstream Supply Chain Losses and Food Security in Nigeria: Empirical Evidence from the Yam Industry

Authors: Alban Igwe, Ijeoma Kalu, Alloy Ezirim

Abstract:

Food insecurity is a global problem, and the search for food security has assumed center stage in the global development agenda, as the United Nations has placed Zero Hunger as Goal 2 of its Sustainable Development Goals. Nigeria currently ranks 107th out of 113 countries in the Global Food Security Index (GFSI), a metric of a country's ability to furnish its citizens with food and nutrients for healthy living. Paradoxically, Nigeria is a global leader in food production, ranking 1st in yam (over 70% of global output), beans (over 41% of global output), cassava (20% of global output), and shea nuts (53% of global output). Furthermore, it ranks 2nd in millet, sweet potatoes, and cashew nuts, and it is Africa's largest producer of rice. So it is apparent that Nigeria's food insecurity woes must relate to a factor other than food production. We investigated the nexus between food security and downstream supply chain losses in the yam industry with secondary data from the Food and Agriculture Organization (FAOSTAT) and the National Bureau of Statistics for the decade 2012-2021. Multiple regression techniques were used in analyzing the data, and findings reveal that downstream losses have a strong positive correlation with food security (r = .763*) and that 58.3% of the variation in food security is explained by downstream supply chain food losses. The study found that yam supply chain losses within the period under review averaged 50.6%, suggesting that downstream supply chain losses are the drainpipe and the major source of food insecurity in Nigeria. The study therefore concluded that there is a significant relationship between downstream supply chain losses and food insecurity and recommended the establishment of food supply chain structures and policies to enhance food security in Nigeria.
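
A sketch of the statistical relationship the study reports: in simple linear regression, the share of variance explained equals r squared, so the quoted r = .763 corresponds to roughly 58.2% of explained variation. The data points below are made up purely to illustrate the computation.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

losses = [42.0, 48.5, 50.0, 53.5, 55.0]      # hypothetical % downstream losses
insecurity = [60.0, 66.0, 69.0, 71.0, 76.0]  # hypothetical insecurity index

r = pearson_r(losses, insecurity)
print(round(r, 3), round(r * r, 3))  # correlation, and variance explained (R^2)
```

The same r-to-R² identity links the study's two headline figures.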

Keywords: food security, downstream supply chain losses, yam, Nigeria, supply chain

Procedia PDF Downloads 91
1784 Intelligent Software Architecture and Automatic Re-Architecting Based on Machine Learning

Authors: Gebremeskel Hagos Gebremedhin, Feng Chong, Heyan Huang

Abstract:

A software system is the combination of architecture and organized components to accomplish a specific function or set of functions. A good software architecture facilitates application system development, promotes achievement of functional requirements, and supports system reconfiguration. We describe three studies demonstrating the utility of our architecture in the subdomain of mobile office robots and identify software engineering principles embodied in the architecture. The main aim of this paper is to analyze software architecture design and automatic re-architecting using machine learning. The intelligent re-architecting process reorganizes the software's organizational structure into a more suitable one, using user-access datasets to create relationships among the components of the system. A three-step data mining approach was used to analyze effective recovery, transformation, and implementation, with the use of a clustering algorithm. Therefore, automatic re-architecting without changing the source code can solve the software complexity problem and enable system software reuse.

Keywords: intelligence, software architecture, re-architecting, software reuse, high-level design

Procedia PDF Downloads 119
1783 Survey Research Assessment for Renewable Energy Integration into the Mining Industry

Authors: Kateryna Zharan, Jan C. Bongaerts

Abstract:

Mining operations are energy intensive, and the share of energy costs in total costs is often quoted in the range of 40%. Saving on energy costs is, therefore, a key concern for any mine operator. With the improving reliability and security of renewable energy (RE) sources, and requirements to reduce carbon dioxide emissions, perspectives for using RE in mining operations emerge. These aspects are stimulating mining companies to search for ways to substitute fossil energy with RE. The main purpose of this study is to present a survey research assessment of the key issues related to the integration of RE into mining activities, based on the opinions of mining and renewable energy experts. The survey was developed as follows: first, the mining and renewable energy experts were chosen based on specific criteria. Second, they were offered a questionnaire to gather their knowledge and opinions on incentives for mining operators to turn to RE, barriers and challenges to be expected, environmental effects, appropriate business models, and the overall impact of RE on mining operations. The outcomes of the survey allow for the identification of factors which favor and disfavor decision-making on the use of RE in mining operations. The paper concludes with a set of recommendations for further study. One relates to a deeper analysis of the benefits for mining operators of using RE; another suggests that appropriate business models considering economic and environmental issues need to be studied and developed. The results will be used to develop a hybrid optimized model which might be adopted at mines according to their operating processes as well as economic and environmental perspectives.

Keywords: carbon dioxide emissions, mining industry, photovoltaic, renewable energy, survey research, wind generation

Procedia PDF Downloads 358
1782 Using the Timepix Detector at CERN Accelerator Facilities

Authors: Andrii Natochii

Abstract:

Over the last two years, the UA9 collaboration has installed two different types of detectors to investigate the channeling effect in bent silicon crystals with high-energy particle beams at the CERN accelerator facilities: the Cherenkov detector CpFM and the silicon pixel detector Timepix. In the current work, we describe the main performance of the Timepix detector in operation at the SPS and the H8 extracted beamline at CERN. We present some detector calibration and tuning results. Our research also covers a cluster analysis algorithm for particle hit reconstruction. We describe the optimal acquisition setup for the Timepix device and the limits of its functionality for high-energy, high-flux beam monitoring. The measurement of crystal parameters is very important for future bent crystal applications and requires a track reconstruction apparatus. Thus, it was decided to construct a short-range (1.2 m long) particle telescope based on the Timepix sensors and test it at the H8 SPS extraction beamline. The obtained results are shown as well.

Keywords: beam monitoring, channeling, particle tracking, Timepix detector

Procedia PDF Downloads 180
1781 Literature Review: Adversarial Machine Learning Defense in Malware Detection

Authors: Leidy M. Aldana, Jorge E. Camargo

Abstract:

Adversarial machine learning has gained importance in recent years as cybersecurity has, especially with regard to malware, which has affected many entities and people. This paper presents a literature review of defense methods created to prevent adversarial machine learning attacks. First, it gives an introduction to the context and a description of key terms; in the results section, some of the attacks are described, focusing on detecting adversarial examples before they reach the machine learning algorithm, and other defense categories are shown. A five-step method is proposed in the method section to define how the literature review was conducted; in addition, the paper summarizes the contributions in this research field over the last seven years to identify research directions in this area. Among the findings, the defense category with the fewest open challenges is the detection of adversarial examples, making it a viable research route with an adaptive approach to attack and defense.

Keywords: malware, adversarial, machine learning, defense, attack

Procedia PDF Downloads 63
1780 Personalized Email Marketing Strategy: A Reinforcement Learning Approach

Authors: Lei Zhang, Tingting Xu, Jun He, Zhenyu Yan

Abstract:

Email marketing is one of the most important segments of online marketing and has been proven to be the most effective way to acquire and retain customers. The email content is vital to customers. Different customers may have different familiarity with a product, so a successful marketing strategy must personalize email content based on each customer's product affinity. In this study, we build our personalized email marketing strategy with three types of emails: nurture, promotion, and conversion. Each type of email has a different influence on customers. We investigate this difference by analyzing customers' open rates, click rates, and opt-out rates. Feature importance from response models is also analyzed. The goal of the marketing strategy is to improve the click rate on conversion-type emails. To build the personalized strategy, we formulate the problem as a reinforcement learning problem and adopt a Q-learning algorithm with variations. Simulation results show that our model-based strategy outperforms the current marketer's strategy.
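
A minimal tabular Q-learning sketch of the formulation above: states are coarse engagement levels, actions are the three email types, and the reward is a made-up click signal. This is the standard Q-learning update the authors build on, not their model; the toy environment simply makes conversion emails click best for engaged users.

```python
import random

random.seed(0)
ACTIONS = ["nurture", "promotion", "conversion"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in ("low", "high") for a in ACTIONS}

def step(state, action):
    """Toy environment: conversion emails click best for engaged users."""
    reward = 1.0 if (state == "high" and action == "conversion") else 0.1
    next_state = "high" if (action == "nurture" or state == "high") else "low"
    return reward, next_state

state = "low"
for _ in range(2000):
    if random.random() < EPS:                       # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

print(max(ACTIONS, key=lambda a: Q[("high", a)]))  # learned action when engaged
```

After training, the greedy policy sends conversion emails to engaged ("high") customers, mirroring the strategy's stated goal of improving clicks on conversion-type emails.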

Keywords: email marketing, email content, reinforcement learning, machine learning, Q-learning

Procedia PDF Downloads 194
1779 Harmonic Data Preparation for Clustering and Classification

Authors: Ali Asheibi

Abstract:

The rapid increase in the size of the databases required to store power quality monitoring data has demanded new techniques for analysing and understanding the data. One suggested technique to assist in analysis is data mining. Preparing raw data for data mining exploration takes up most of the effort and time spent in the whole data mining process. Clustering is an important technique in data mining and machine learning in which underlying, meaningful groups of data are discovered. Large amounts of harmonic data were collected over three years from an actual harmonic monitoring system in a distribution system in Australia. This volume of acquired data makes it difficult to identify operational events that significantly impact the harmonics generated on the system. In this paper, harmonic data preparation processes for a better understanding of the data are presented. Underlying classes in the data have then been identified using a clustering technique based on the Minimum Message Length (MML) method. The underlying operational information contained within the clusters can be rapidly visualised by engineers. The C5.0 algorithm was used for classification and interpretation of the generated clusters.

Keywords: data mining, harmonic data, clustering, classification

Procedia PDF Downloads 248
1778 Using Printouts as Social Media Evidence and Its Authentication in the Courtroom

Authors: Chih-Ping Chang

Abstract:

Unlike traditional objective evidence, social media evidence has its own characteristics: it is easily tampered with, it is recoverable, and it cannot be read without other devices (such as a computer). The original identity of a simple screenshot taken from a social network site must be questioned. When the police search and seize digital information, a common practice is to directly print out the digital data obtained and ask the parties present to sign the printout, without taking the original digital data away. Beyond the issue of original identity, this way of obtaining evidence may have two further consequences. First, it invites the allegation that the evidence was tampered with, that the police framed the suspect and falsified evidence. Second, it is not easy to discover hidden information. The core evidence associated with a crime may not appear in the contents of files. Through discovery of the original file, data related to the file, such as the original producer, creation time, modification date, and even GPS location, can be revealed from hidden information. Therefore, how to present this kind of evidence in the courtroom is arguably the most important task in ruling on social media evidence. This article first introduces forensic software, such as EnCase, TCT, and FTK, and analyzes its function in proving identity with other digital data. Turning back to the court, the second part of this article discusses the legal standard for authentication of social media evidence and the application of such forensic software in the courtroom. In conclusion, this article offers a rethinking: what kind of authenticity is this rule of evidence chasing? Does the legal system automatically operate as a transcription of scientific knowledge? Or does it, furthermore, seek to better render justice, not only on scientific fact but through multivariate debate?

Keywords: federal rule of evidence, internet forensic, printouts as evidence, social media evidence, United States v. Vayner

Procedia PDF Downloads 290
1777 Implementation of Elliptic Curve Cryptography Encryption Engine on a FPGA

Authors: Mohamad Khairi Ishak

Abstract:

Conventional public key cryptosystems such as RSA (Ron Rivest, Adi Shamir and Leonard Adleman), DSA (Digital Signature Algorithm), and ElGamal are no longer efficient to implement in small, memory-constrained devices. Elliptic Curve Cryptography (ECC), which allows smaller key lengths than conventional public key cryptosystems, has thus become a very attractive choice for many applications. This paper describes the implementation of an elliptic curve cryptography (ECC) encryption engine on an FPGA. The system has been implemented in two different key sizes, 131 bits and 163 bits, and area and timing analysis is provided for both for comparison. The cryptosystem, implemented on Altera's EPF10K200SBC600-1, has a hardware size of 5945/9984 logic cells for the 131-bit implementation and 6913/9984 logic cells for the 163-bit implementation. It operates at up to 43 MHz and performs a point multiplication operation in 11.3 ms for the 131-bit implementation and 14.9 ms for the 163-bit implementation. In terms of speed, our cryptosystem is about 8 times faster than a software implementation of the same system.
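
A hedged sketch of the core operation such an engine accelerates: double-and-add scalar (point) multiplication. It is shown over the tiny textbook curve y² = x³ + 2x + 2 (mod 17) with generator (5, 1), nothing like the 131/163-bit curves of the FPGA implementation, but the control flow is the same loop the hardware unrolls.

```python
P_MOD, A = 17, 2  # toy curve y^2 = x^3 + 2x + 2 over GF(17)

def point_add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p == -q
    if p == q:  # tangent slope for doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:       # chord slope for addition
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return x3, (s * (x1 - x3) - y1) % P_MOD

def scalar_mult(k, p):
    """Left-to-right double-and-add: the loop a hardware engine accelerates."""
    result = None
    for bit in bin(k)[2:]:
        result = point_add(result, result)  # double
        if bit == "1":
            result = point_add(result, p)   # add
    return result

print(scalar_mult(2, (5, 1)))  # doubling the generator gives (6, 3)
```

A real FPGA engine would replace the modular inverses with projective-coordinate arithmetic over a binary field, but the double-and-add schedule is unchanged.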

Keywords: elliptic curve cryptography, FPGA, key sizes, memory

Procedia PDF Downloads 323
1776 Parameters Optimization of the Laminated Composite Plate for Sound Transmission Problem

Authors: Yu T. Tsai, Jin H. Huang

Abstract:

In this paper, the specific sound transmission loss (TL) of a laminated composite plate (LCP) with different material properties in each layer is investigated. A numerical method to obtain the TL of the LCP is proposed using elastic plate theory. The transfer matrix approach is newly presented for computational efficiency in solving the dynamic stiffness matrices (D-matrices) of the LCP's numerous layers. Besides the numerical simulations for calculating the TL of the LCP, an inverse method for material properties is presented for the design of a laminated composite plate analogous to a metallic plate with a specified TL. The results demonstrate that the proposed computational algorithm is highly efficient, requiring only a small number of iterations to achieve the goal. This method can be effectively employed to design and develop tailor-made materials for various applications.

Keywords: sound transmission loss, laminated composite plate, transfer matrix approach, inverse problem, elastic plate theory, material properties

Procedia PDF Downloads 388
1775 The Application of Participatory Social Media in Collaborative Planning: A Systematic Review

Authors: Yujie Chen, Zhen Li

Abstract:

In the context of planning transformation, how to promote public participation in the formulation and implementation of collaborative planning has been a focal issue of discussion. However, existing studies have often been case-specific or focused on a specific design field, leaving the role of participatory social media (PSM) in urban collaborative planning generally in question. A systematic database search was conducted in December 2019. Articles and projects were eligible if they reported a quantitative empirical study applying participatory social media in the collaborative planning process (a prospective, retrospective, experimental, or longitudinal study, or collective actions in planning practice). Twenty studies and seven projects were included in the review. Findings show that social media are generally applied in the fields of public spatial behavior, transportation behavior, and community planning, with new technologies and new datasets. PSM has provided a new platform for participatory design, decision analysis, and collaborative negotiation, and is most widely used in participatory design. The findings identify several existing forms of PSM, which mainly plays three roles: a language of decision-making for communication, a study mode for spatial evaluation, and a decision agenda for interactive decision support. Three areas for optimizing PSM were recognized: improving participatory scale, improving grassroots organization, and promoting politics. However, participants can currently only provide information and comments through PSM in the collaborative planning process, so the issues of low data response rates, poor spatial data quality, and participation sustainability deserve more attention and solutions.

Keywords: participatory social media, collaborative planning, planning workshop, application mode

Procedia PDF Downloads 133
1774 Application of Artificial Neural Network for Prediction of High Tensile Steel Strands in Post-Tensioned Slabs

Authors: Gaurav Sancheti

Abstract:

This study presents an approach based on Artificial Neural Networks (ANNs) for determining the quantity of High Tensile Steel (HTS) strands required in post-tensioned (PT) slabs. Various PT slab configurations were generated by varying the span and depth of the slab, and the quantity of HTS strands required for each configuration was recorded. ANNs with the backpropagation algorithm and varying architectures were developed, and their performance was evaluated in terms of Mean Square Error (MSE). The recorded data on the quantity of HTS strands served as the database for training the developed ANNs. The networks were validated using various validation techniques. The results show that the proposed ANNs have great potential, with good prediction and generalization capability.

Keywords: artificial neural networks, backpropagation, conceptual design, high tensile steel strands, post-tensioned slabs, validation techniques

Procedia PDF Downloads 221
1773 Framework for Socio-Technical Issues in Requirements Engineering for Developing Resilient Machine Vision Systems Using Levels of Automation through the Lifecycle

Authors: Ryan Messina, Mehedi Hasan

Abstract:

This research examines the impact of using data to generate performance requirements for automation in visual inspections using machine vision. These situations concern design, and how projects can smooth the transfer of tacit knowledge into an algorithm. We propose a framework for specifying machine vision systems. This framework uses varying levels of automation as contingency planning to reduce data processing complexity. Using data assists in extracting tacit knowledge from those who can perform the manual tasks, to help design the system; this means that real data from the system is always referenced, minimizing errors between participating parties. We propose three indicators of whether a project is at high risk of failing to meet requirements related to accuracy and reliability. All systems tested achieved better integration into operations after applying the framework.

Keywords: automation, contingency planning, continuous engineering, control theory, machine vision, system requirements, system thinking

Procedia PDF Downloads 204
1772 Alternator Fault Detection Using Wigner-Ville Distribution

Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi

Abstract:

This paper describes a two-stage learning-based fault detection procedure for alternators. The procedure considers three machine conditions, namely a shorted brush, a high-impedance relay, and healthy operation. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (Artificial Neural Network) and an SVM (support vector machine) were compared, with performance evaluated by the mean squared error criterion, to determine the more suitable classifier. The modules work together to detect possible faulty conditions of operating machines. To test the method's performance, a signal database was prepared by producing different conditions on a laboratory setup. The experimental results indicate that the method achieves satisfactory performance.
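
A sketch of the feature extractor named above: a discrete Wigner-Ville distribution computed in pure Python for a short test tone. For a single complex exponential, the time-frequency energy should concentrate at the tone's frequency bin; the windowing and cross-term suppression a real alternator-monitoring pipeline needs are omitted.

```python
import cmath

def wvd(x):
    """W[n][k]: discrete Wigner-Ville row per time index n, over N/2 bins."""
    n_len = len(x)
    out = []
    for n in range(n_len):
        half = min(n, n_len - 1 - n)  # largest symmetric lag window around n
        row = []
        for k in range(n_len // 2):   # WVD frequency axis spans N/2 bins
            acc = 0j
            for m in range(-half, half + 1):
                acc += (x[n + m] * x[n - m].conjugate()
                        * cmath.exp(-4j * cmath.pi * k * m / n_len))
            row.append(acc.real)
        out.append(row)
    return out

f0 = 4  # tone frequency in bins; the WVD should peak here
sig = [cmath.exp(2j * cmath.pi * f0 * n / 32) for n in range(32)]
mid_row = wvd(sig)[16]
print(mid_row.index(max(mid_row)))  # -> 4
```

The resulting time-frequency rows (or statistics of them) would then feed the ANN/SVM classifiers compared in the paper.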

Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution

Procedia PDF Downloads 374
1771 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success lies in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time from surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical methods (stacking and flattening) to filter noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. The matrix is then correlated, in real time, with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects relevant features for the class and identifies redundant features. The correlation output is interpreted as a probability curve for real-time stuck pipe prediction. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully with an accuracy of 76%. This detection accuracy could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck pipe problem requires a method to capture geological, geophysical, and drilling data and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
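
An illustrative sketch of the real-time correlation-and-threshold step described above: a sliding window of live surface data is correlated against a pre-determined signature, and an alert fires when the score passes the user-set threshold. The signature shape, feed values, and single-channel simplification are all assumptions; the paper's CFS feature selection across many channels is not reproduced here.

```python
def ncc(a, b):
    """Normalized cross-correlation (Pearson r) of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

SIGNATURE = [1.0, 1.4, 2.1, 3.3, 5.0]  # hypothetical torque build-up shape
THRESHOLD = 0.95                        # user-set alert threshold
feed = [2.0, 2.1, 2.0, 2.2, 2.9, 4.1, 6.5, 10.0, 2.1, 2.0]  # mock live data

win = len(SIGNATURE)
alerts = [i for i in range(len(feed) - win + 1)
          if ncc(feed[i:i + win], SIGNATURE) > THRESHOLD]
print(alerts)  # window indices where the live data matches the signature
```

Consecutive window indices fire as the ramp-up develops, which is the behavior a real-time probability curve would show before cause analysis takes over.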

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 230
1770 Design and Motion Control of a Two-Wheel Inverted Pendulum Robot

Authors: Shiuh-Jer Huang, Su-Shean Chen, Sheam-Chyun Lin

Abstract:

A two-wheel inverted pendulum robot (TWIPR) is designed with two hub DC motors for human riding and motion control evaluation. Accelerometer and gyroscope sensors are chosen to measure the tilt angle and angular velocity of the inverted pendulum robot. The mobile robot's position and velocity are estimated from the DC motors' built-in Hall sensors. The control kernel of this electric mobile robot is an embedded Arduino Nano microprocessor. A handlebar was designed to serve as the steering mechanism. An intelligent, model-free fuzzy sliding mode control (FSMC) scheme was employed as the main control algorithm, with adjustments for different control purposes. Controllers were designed for balance control and moving-speed control under different operating conditions, and the control performance was evaluated experimentally.
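The balance loop described above can be illustrated with a plain (non-fuzzy) sliding-mode controller acting on a linearized pendulum model. This is a sketch of the control idea only, with made-up plant parameters, not the paper's FSMC design or the TWIPR's actual dynamics.

```python
def sliding_mode_control(theta, theta_dot, c=5.0, k=30.0, phi=0.1):
    """Sliding-mode balance law: drive the surface s = c*theta + theta_dot
    to zero; the boundary layer phi softens switching (chattering)."""
    s = c * theta + theta_dot
    return -k * max(-1.0, min(1.0, s / phi))   # saturated switching term

# Simulate a linearized inverted pendulum: theta'' = (g/l)*theta + u
g, l, dt = 9.81, 0.5, 0.001                    # hypothetical plant values
theta, theta_dot = 0.2, 0.0                    # start with a 0.2 rad tilt
for _ in range(5000):                          # 5 s of simulated time
    u = sliding_mode_control(theta, theta_dot)
    theta_ddot = (g / l) * theta + u
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt                    # the tilt decays toward zero
```

A fuzzy sliding-mode controller replaces the fixed switching gain `k` with a gain scheduled by fuzzy rules on `s`, which is what makes the approach model-free in practice.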

Keywords: balance control, speed control, intelligent controller, two-wheel inverted pendulum

Procedia PDF Downloads 224
1769 Research on Development and Accuracy Improvement of an Explosion Proof Combustible Gas Leak Detector Using an IR Sensor

Authors: Gyoutae Park, Seungho Han, Byungduk Kim, Youngdo Jo, Yongsop Shim, Yeonjae Lee, Sangguk Ahn, Hiesik Kim, Jungil Park

Abstract:

In this paper, we present not only the development of an explosion-proof, portable combustible gas leak detector but also an algorithm to improve the accuracy of gas concentration measurement. The presented techniques apply a flame-proof enclosure and intrinsically safe explosion protection to an infrared gas leak detector, for the first time in Korea, and improve accuracy using a linearized recursion equation and a Lagrange interpolation polynomial. We tested the sensor characteristics and calibrated suitable input gases against output voltages, and then advanced the performance of the combustible gas detectors by reflecting the demands of the gas safety management field. To compare performance, we carried out measurement tests on two companies' detectors with eight standard gases produced by the Korea Gas Safety Corporation. The experimental results demonstrate that our instrument achieves better detection accuracy than the other detectors.
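The Lagrange interpolation step mentioned above, used to map sensor output to gas concentration between calibration points, can be sketched as follows. The calibration pairs are invented for illustration and are not the paper's measured data.

```python
def lagrange_interpolate(points, x):
    """Evaluate the Lagrange interpolation polynomial through the given
    (voltage, concentration) calibration points at voltage x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis factor
        total += term
    return total

# Hypothetical calibration pairs: (sensor output voltage, %LEL concentration)
calibration = [(0.40, 0.0), (1.10, 25.0), (1.85, 50.0), (2.70, 100.0)]
# Convert a measured voltage to a concentration estimate
concentration = lagrange_interpolate(calibration, 1.10)   # a calibration node
```

Because high-degree Lagrange polynomials can oscillate between nodes, calibration schemes like this typically use a small number of well-spaced points, as with the eight standard gases here.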

Keywords: accuracy improvement, IR gas sensor, gas leak, detector

Procedia PDF Downloads 391
1768 A Review on Water Models of Surface Water Environment

Authors: Shahbaz G. Hassan

Abstract:

Water quality models are very important for predicting changes in surface water quality for environmental management. The aim of this paper is to give an overview of water quality models and to provide direction for selecting a model in a specific situation. Water quality models fall into two kinds: models based on a mechanistic approach, and models that simulate water quality without considering the underlying mechanisms. Mechanistic models can be widely applied and are capable of long-term simulation, though at high complexity; therefore, more space is devoted to explaining the principles of, and application experience with, mechanistic models. Mechanistic models make certain assumptions about rivers, lakes, and estuaries, which limit their range of application; this paper introduces the principles and applications of water quality models for these three scenarios. Empirical models, on the other hand, are easier to compute and are not limited by geographical conditions, but they cannot be used with confidence to simulate long-term changes. This paper divides empirical models into two broad categories according to their mathematical algorithms: models based on artificial intelligence and models based on statistical methods.

Keywords: empirical models, mathematical, statistical, water quality

Procedia PDF Downloads 265
1767 Frequency of Alloimmunization in Sickle Cell Disease Patients in Africa: A Systematic Review with Meta-analysis

Authors: Theresa Ukamaka Nwagha, Angela Ogechukwu Ugwu, Martins Nweke

Abstract:

Background and Objectives: Blood transfusion is an effective and proven treatment for some severe complications of sickle cell disease. Recurrent transfusions put patients with sickle cell disease at risk of developing antibodies against the various antigens to which they are exposed. This study aims to investigate the frequency of red blood cell alloimmunization in patients with sickle cell disease in Africa. Materials and Methods: This is a systematic review of peer-reviewed literature published in English, conducted consistent with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist. Data sources for the review include MEDLINE, PubMed, CINAHL, and Academic Search Complete. Included in this review are articles that reported the frequency/prevalence of red blood cell alloimmunization in sickle cell disease patients in Africa. Eligible studies were subjected to independent full-text screening and data extraction. Risk of bias assessment was conducted with the aid of the mixed methods appraisal tool. We employed a random-effects model of meta-analysis to estimate the pooled prevalence, and computed Cochrane's Q statistic, I², and the prediction interval to quantify heterogeneity in effect size. Results: The prevalence estimates range from 2.6% to 29%. The pooled prevalence was estimated to be 10.4% (CI 7.7–13.8; PI = 3.0–34.0%), with significant heterogeneity (I² = 84.62%; PI = 2.0–32.0%) and publication bias (Egger's t-test = 1.744, p = 0.0965). Conclusion: The frequency of red cell alloantibodies varies considerably across Africa. The alloantibodies appeared frequent in this order: Rhesus, Kell, Lewis, Duffy, MNS, and Lutheran.
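The random-effects pooling used in such a review can be sketched with the DerSimonian-Laird estimator. The study prevalences and sample sizes below are illustrative placeholders spanning roughly the reported 2.6%-29% range, not the actual reviewed data.

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate (DerSimonian-Laird): inverse-variance
    weights inflated by the between-study variance tau^2, plus Cochrane's Q
    and the I^2 heterogeneity statistic."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical study prevalences (proportions) and sample sizes
props = [0.026, 0.104, 0.17, 0.29, 0.08]
ns = [150, 200, 120, 90, 250]
variances = [p * (1 - p) / n for p, n in zip(props, ns)]
pooled, ci, i2 = dersimonian_laird(props, variances)
```

Published meta-analyses of proportions often pool on a transformed scale (logit or double-arcsine) for better variance behavior; the raw-proportion version above keeps the sketch short.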

Keywords: frequency, red blood cell, alloimmunization, sickle cell disease, Africa

Procedia PDF Downloads 100
1766 Identifying Risk Factors for Readmission Using Decision Tree Analysis

Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka

Abstract:

This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404, and participation in this conference was supported by the Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost and is becoming the target of national policies. In Turkey, the topic of hospital readmission is relatively new on the agenda, and very few studies have been conducted on it. The aim of this study was to determine 30-day readmission rates and risk factors for readmission, and to assess whether each readmission was planned, related to the prior admission, and avoidable. The study was designed as a prospective cohort study. 472 patients hospitalized in the internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% were female. Among these patients, 95 were readmitted, giving an overall readmission rate of 20% (95/472). However, only 31 readmissions were unplanned, an unplanned readmission rate of 6.5% (31/472). Of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 of the related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square Automatic Interaction Detector (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on 'if-then' logic, partitioning each independent variable into mutually exclusive subsets based on homogeneity of the data.
Independent variables included in the analysis were: clinic of the department; occupied beds/total number of beds in the clinic at the time of discharge; age; gender; marital status; educational level; distance to residence (km); number of people living with the patient; availability of anyone to help with care at home after discharge (yes/no); regular source (physician) of care (yes/no); day of discharge; length of stay; ICU utilization (yes/no); total comorbidity score; means for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient's personal status, patient's knowledge, and patient's coping ability); and number of daycare admissions within 30 days of discharge. To balance the data, the analysis included all 95 readmitted patients (46.12%) but only 111 (53.88%) of the 377 non-readmitted patients. The risk factors identified for readmission were total comorbidity score, gender, patient's coping ability, and patient's knowledge. The strongest identifying factor was the comorbidity score: if a patient's comorbidity score was higher than 1, the risk of readmission increased. The results of this study need to be validated on other datasets with more patients. However, we believe that this study will guide further studies of readmission and that CHAID is a useful tool for identifying risk factors for readmission.
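The core CHAID step, choosing at each node the categorical predictor whose contingency table with the outcome yields the smallest chi-square p-value, can be sketched as follows. The data are synthetic, sized like the study's balanced sample (95 readmitted, 111 not), with a made-up comorbidity indicator correlated to the outcome and gender as noise.

```python
import numpy as np
from scipy.stats import chi2_contingency

def best_chaid_split(X, y, feature_names):
    """Pick the feature whose categories best separate the binary outcome
    by chi-square test -- the step CHAID repeats at every tree node."""
    best = None
    for j, name in enumerate(feature_names):
        # Contingency table: feature categories vs. readmitted no/yes
        cats = np.unique(X[:, j])
        table = np.array([[np.sum((X[:, j] == c) & (y == k))
                           for k in (0, 1)] for c in cats])
        chi2, p, _, _ = chi2_contingency(table)
        if best is None or p < best[2]:
            best = (name, chi2, p)
    return best

rng = np.random.default_rng(1)
n = 206                                  # 95 readmitted + 111 not, as in the study
y = np.array([1] * 95 + [0] * 111)
comorbidity = (y + rng.integers(0, 2, n)) > 1   # correlated with outcome
gender = rng.integers(0, 2, n)                   # pure noise here
X = np.column_stack([comorbidity.astype(int), gender])
name, chi2, p = best_chaid_split(X, y, ["comorbidity>1", "gender"])
```

Full CHAID additionally merges non-significant categories before splitting and applies Bonferroni-adjusted p-values, which the sketch omits.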

Keywords: decision tree, hospital, internal medicine, readmission

Procedia PDF Downloads 256
1765 Numerical Model for Investigation of Recombination Mechanisms in Graphene-Bonded Perovskite Solar Cells

Authors: Amir Sharifi Miavaghi

Abstract:

This work investigates recombination mechanisms in graphene-bonded perovskite solar cells based on a numerical model in which doped-graphene structures are employed as the anode/cathode bonding semiconductor. Moreover, the dark- and light-current density-voltage curves are investigated by regression analysis. Loss mechanisms such as the back contact barrier and deep surface defects in the absorber layer are determined by fitting the simulated cell performance to the measurements using the differential evolution global optimization algorithm. The performance of the cell in the bonding process is characterized by J-V curves examined at different temperatures and by the open-circuit voltage (V) under different light intensities as a function of temperature. Based on the proposed numerical model and the extracted loss mechanisms, our approach can be used to further improve the efficiency of the solar cell. Given the high demand for alternative energy sources, solar cells are a good option for energy conversion via the photovoltaic phenomenon.
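The parameter-extraction step, fitting a simulated J-V curve to measurements with differential evolution, can be sketched with SciPy on an ideal single-diode model. The model and all numbers are simplified stand-ins for the paper's full device simulation and measured data.

```python
import numpy as np
from scipy.optimize import differential_evolution

VT = 0.02585                                   # thermal voltage near 300 K (V)

def diode_current(v, j_ph, log_j0, n):
    """Ideal single-diode light J-V model (simplified stand-in):
    J = J_ph - J_0 * (exp(V / (n*VT)) - 1), with J_0 = 10**log_j0."""
    return j_ph - 10.0 ** log_j0 * (np.exp(v / (n * VT)) - 1.0)

# Synthetic 'measured' light J-V curve with noise (units: mA/cm^2)
rng = np.random.default_rng(2)
v = np.linspace(0.0, 1.0, 60)
j_meas = diode_current(v, 22.0, -9.0, 1.6) + rng.normal(0, 0.05, v.size)

def sse(params):                               # sum of squared errors
    return np.sum((diode_current(v, *params) - j_meas) ** 2)

# Global search over photocurrent, log10 saturation current, and ideality
result = differential_evolution(
    sse, bounds=[(10.0, 30.0), (-12.0, -6.0), (1.0, 2.5)], seed=2)
j_ph_fit, log_j0_fit, n_fit = result.x
```

Searching the saturation current on a log scale is a common choice because it spans many orders of magnitude; the paper's fit would use its full loss model (back-contact barrier, defect recombination) in place of `diode_current`.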

Keywords: numerical model, recombination mechanism, graphene, perovskite solar cell

Procedia PDF Downloads 70
1764 Using of Particle Swarm Optimization for Loss Minimization of Vector-Controlled Induction Motors

Authors: V. Rashtchi, H. Bizhani, F. R. Tatari

Abstract:

This paper presents a new online loss minimization method for an induction motor drive. Among the many loss minimization algorithms (LMAs) for induction motors, particle swarm optimization (PSO) has the advantages of fast response and high accuracy. However, the performance of PSO and other optimization algorithms depends on the accuracy of the modeling of the motor drive and its losses, and in developing a loss model there is always a trade-off between accuracy and complexity. This paper presents a new online optimization to determine the optimum flux level for efficiency optimization of the vector-controlled induction motor drive. An induction motor (IM) model in d-q coordinates is referenced to the rotor magnetizing current. This transformation results in no leakage inductance on the rotor side, so the decomposition into d-q components in the steady-state motor model can be utilized in deriving the motor loss model. The suggested algorithm is simple to implement.
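The flux-search idea can be illustrated with a minimal PSO run on a toy loss model in which copper loss falls and iron loss rises with flux. The coefficients and the loss shape are hypothetical, not the paper's motor loss model.

```python
import random

def pso_minimize(loss, lo, hi, n_particles=20, iters=100):
    """Minimal 1-D particle swarm optimizer (illustrative, not the
    paper's implementation) searching a flux level in [lo, hi]."""
    rng = random.Random(3)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pval = xs[:], [loss(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))     # inertia + pulls
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            v = loss(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval

# Simplified IM loss vs. flux: copper loss ~ a/phi^2, iron loss ~ b*phi^2
a, b = 40.0, 160.0                 # hypothetical coefficients
loss = lambda phi: a / phi ** 2 + b * phi ** 2
phi_opt, p_min = pso_minimize(loss, 0.1, 2.0)
# Analytic optimum for this toy model is (a/b)**0.25 ≈ 0.707
```

In the drive itself, `loss` would be the d-q steady-state loss model evaluated online, and the optimum flux command fed to the vector controller.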

Keywords: induction machine, loss minimization, magnetizing current, particle swarm optimization

Procedia PDF Downloads 633
1763 Uncertainty Estimation in Neural Networks through Transfer Learning

Authors: Ashish James, Anusha James

Abstract:

The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable, and while ensembles of NNs typically produce good predictions, their uncertainty estimates tend to be inconsistent. Inspired by these observations, this paper presents a framework that quantitatively estimates uncertainties by leveraging advances in transfer learning, through slight modifications to existing training pipelines. The algorithm is developed with the intention of deployment in real-world problems that already boast good predictive performance, by reusing pretrained models. The idea is to capture the behavior of the NNs trained for the base task by augmenting them with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
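A heavily simplified sketch of the idea, keeping a pretrained base model frozen and training a small supplementary model to predict its squared error, is shown below with linear models standing in for NNs. The architecture, features, and data are illustrative only, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(4)

# 'Pretrained' base regressor (frozen): fit once on the base task.
x = rng.uniform(-3, 3, (500, 1))
noise_scale = 0.1 + 0.3 * np.abs(x[:, 0])      # heteroscedastic noise
y = 2.0 * x[:, 0] + rng.normal(0, noise_scale)
w = np.linalg.lstsq(np.c_[x, np.ones(len(x))], y, rcond=None)[0]
base_pred = np.c_[x, np.ones(len(x))] @ w

# Supplementary model (a tiny linear model on |x|) trained to predict
# the frozen base model's squared error -- the transfer-learning idea
# in miniature: the base weights are reused, never retrained.
sq_err = (y - base_pred) ** 2
feats = np.c_[np.abs(x), np.ones(len(x))]
w_unc = np.linalg.lstsq(feats, sq_err, rcond=None)[0]

def predict_with_uncertainty(x_new):
    """Return the base prediction and an estimated predictive std-dev
    (variance floored at a small constant)."""
    mean = float(np.array([x_new, 1.0]) @ w)
    var = max(1e-6, float(np.array([abs(x_new), 1.0]) @ w_unc))
    return mean, var ** 0.5
```

With NNs, the supplementary network would typically share the base network's frozen feature extractor, which is what makes the modification to existing pipelines slight.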

Keywords: uncertainty estimation, neural networks, transfer learning, regression

Procedia PDF Downloads 136
1762 Policy and System Research for Health of Ageing Population

Authors: Sehrish Ather

Abstract:

Introduction: Health policy and systems research is the basic requirement for improving organizational achievements through the production of new knowledge. An ageing population is always a source of an increased burden of chronic diseases, disabilities, mental illnesses, and other comorbidities; therefore, the provision of quality health care services to every group of the population should be achieved through strong policy and systems research for the betterment of the health care system. Unfortunately, the whole world lacks policies and systems research for providing health care to the elderly population. Materials and Methods: A literature review of published studies on ageing diseases was done, covering the years 2011-2018. Geriatric, population, health policy, system, and research were the key search terms. Databases searched were Google Scholar, PubMed, Science Direct, Ovid, and Research Gate. Grey literature was searched from various sources, including IHME, the Library of the University of Lahore, the World Health Organization (Ageing and Life Course), and personal communication with neurophysicians. After carefully reviewing the published and unpublished information, it was decided to proceed with a commentary. Results and discussion: Most of the published studies highlighted the need to advocate to the funders of health policy and the stakeholders of health care systems research; research on policy and health care systems for the geriatric population was detected as a highly neglected area. Conclusion: It is concluded that physicians are more involved with policy and systems research regarding diseases of any type, but scientists and researchers in the basic and social sciences are less likely to be involved in health policy and systems research, due to lack of funding and resources. Therefore, ageing diseases should be considered a priority, and comprehensive policy and systems research should be initiated for diseases of the geriatric population.

Keywords: geriatric population, health care system, health policy, system research

Procedia PDF Downloads 108
1761 The Acceptable Roles of Artificial Intelligence in the Judicial Reasoning Process

Authors: Sonia Anand Knowlton

Abstract:

There are some cases where we as a society feel deeply uncomfortable with the use of Artificial Intelligence (AI) tools in the judicial decision-making process, and justifiably so. A perfect example is COMPAS, an algorithmic model that predicts recidivism rates of offenders to assist in the determination of their bail conditions. COMPAS turned out to be extremely racist: it massively overpredicted recidivism rates of Black offenders and underpredicted recidivism rates of white offenders. At the same time, there are certain uses of AI in the judicial decision-making process that many would feel more comfortable with and even support. Take, for example, a “super-breathalyzer,” an (albeit imaginary) tool that uses AI to deliver highly detailed information about the subject of the breathalyzer test to the legal decision-makers analyzing their drunk-driving case. This article evaluates the point at which a judge’s use of AI tools begins to undermine the public’s trust in the administration of justice. It argues that the answer to this question depends on whether the AI tool is in a role in which it must perform a moral evaluation of a human being.

Keywords: artificial intelligence, judicial reasoning, morality, technology, algorithm

Procedia PDF Downloads 82