Search results for: miRNA:mRNA target prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4952

4112 Use of Real Time Ultrasound for the Prediction of Carcass Composition in Serrana Goats

Authors: Antonio Monteiro, Jorge Azevedo, Severiano Silva, Alfredo Teixeira

Abstract:

The objective of this study was to compare carcass and in vivo real-time ultrasound (RTU) measurements and their capacity to predict the composition of Serrana goats up to 40% of maturity. Twenty-one females (11.1 ± 3.97 kg) and twenty-one males (15.6 ± 5.38 kg) were used to make in vivo measurements with a 5 MHz probe (ALOKA 500V scanner) at the 9th-10th and 10th-11th thoracic vertebrae (uT910 and uT1011, respectively), at the 1st-2nd, 3rd-4th, and 4th-5th lumbar vertebrae (uL12, uL34 and uL45, respectively), and also at the 3rd-4th sternebrae (EEST). RTU images were recorded of the Longissimus thoracis et lumborum muscle (LTL) depth (EM), width (LM), perimeter (PM) and area (AM), and of the subcutaneous fat thickness (SFD) above the LTL, as well as the depth of tissues of the sternum (EEST) between the 3rd-4th sternebrae. All RTU images were analyzed using the ImageJ software. After slaughter, the carcasses were stored at 4 ºC for 24 h. After this period the carcasses were split, and the left half was entirely dissected into muscle, dissected fat (subcutaneous plus intermuscular fat) and bone. Prior to the dissection, measurements equivalent to those obtained in vivo with RTU were recorded. Correlation and regression analyses were performed using Statistica 5. The prediction of carcass composition was achieved by a stepwise regression procedure, with live weight and RTU measurements, with and without transformation of the variables to the same dimension. The RTU and carcass measurements, except for the SFD measurements, showed high correlation (r > 0.60, P < 0.001). The RTU measurements together with live weight were able to predict carcass composition for muscle (R2 = 0.99, P < 0.001), subcutaneous fat (R2 = 0.41, P < 0.001), intermuscular fat (R2 = 0.84, P < 0.001), dissected fat (R2 = 0.71, P < 0.001) and bone (R2 = 0.94, P < 0.001). The transformation of variables allowed a slight increase in precision, but at the cost of more variables, except for the prediction of subcutaneous fat. In vivo RTU measurements can thus be applied to predict kid goat carcass composition from five RTU measurements and live weight.
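
To make the stepwise procedure concrete, here is a minimal forward-selection sketch in Python, assuming hypothetical column names for the RTU measurements and live weight (the study itself used Statistica 5):

```python
# Illustrative forward stepwise regression for carcass muscle weight.
# Column names (uT910, uL12, ...) are hypothetical stand-ins for the RTU
# measurements described above.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, target, candidates, alpha=0.05):
    selected = []
    while True:
        best_p, best_var = alpha, None
        for var in candidates:
            if var in selected:
                continue
            X = sm.add_constant(df[selected + [var]])
            model = sm.OLS(df[target], X).fit()
            if model.pvalues[var] < best_p:
                best_p, best_var = model.pvalues[var], var
        if best_var is None:
            return selected
        selected.append(best_var)

# df = pd.read_csv("goat_rtu.csv")  # live weight + RTU measurements
# predictors = ["live_weight", "uT910", "uT1011", "uL12", "uL34", "uL45", "EEST"]
# print(forward_stepwise(df, "muscle_weight", predictors))
```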

Keywords: carcass, goats, real time, ultrasound

Procedia PDF Downloads 249
4111 Oil Reservoir Asphaltene Precipitation Estimation during CO2 Injection

Authors: I. Alhajri, G. Zahedi, R. Alazmi, A. Akbari

Abstract:

In this paper, an Artificial Neural Network (ANN) was developed to predict Asphaltene Precipitation (AP) during the injection of carbon dioxide into crude oil reservoirs. In this study, experimental data from six different oil fields were collected. Seventy percent of the data was used to develop the ANN model, and different ANN architectures were examined. A network with the trainlm training algorithm was found to be the best network to estimate the AP. To check the validity of the proposed model, the model was used to predict the AP for the thirty percent of the data that had been held out. The Mean Square Error (MSE) of the prediction was 0.0018, which confirms the excellent prediction capability of the proposed model. In the second part of this study, the ANN model predictions were compared with modified Hirschberg model predictions. The ANN was found to provide more accurate estimates than the modified Hirschberg model. Finally, the proposed model was employed to examine the effect of different operating parameters during gas injection on the AP. It was found that the AP is most sensitive to the reservoir temperature. Furthermore, increasing the carbon dioxide concentration in the liquid phase increases the AP.
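
As an illustration of the 70/30 workflow described above, here is a minimal sketch on synthetic data; scikit-learn's MLPRegressor stands in for the MATLAB trainlm network, since Levenberg-Marquardt training is not available there, and the input variables are assumptions:

```python
# Illustrative sketch of the 70/30 ANN workflow on synthetic data.
# scikit-learn's MLPRegressor does not implement Levenberg-Marquardt
# (trainlm); Adam is used here as a stand-in.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical inputs: temperature, pressure, CO2 fraction, oil composition index
X = rng.uniform(size=(200, 4))
y = 0.5 * X[:, 0] + 0.3 * X[:, 2] + 0.05 * rng.normal(size=200)  # synthetic AP

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print("MSE on held-out 30%:", mean_squared_error(y_te, net.predict(X_te)))
```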

Keywords: artificial neural network, asphaltene, CO2 injection, Hirschberg model, oil reservoirs

Procedia PDF Downloads 359
4110 Unsupervised Domain Adaptive Text Retrieval with Query Generation

Authors: Rui Yin, Haojie Wang, Xun Li

Abstract:

Recently, mainstream dense retrieval methods have obtained state-of-the-art results on some datasets and tasks. However, they require large amounts of training data, which are not available in most domains. The severe performance degradation of dense retrievers on new data domains has limited the use of dense retrieval methods to only a few domains with large training datasets. In this paper, we propose an unsupervised domain-adaptive approach based on query generation. First, a generative model is used to generate relevant queries for each passage in the target corpus, and the generated queries are then used for mining negative passages. Finally, the query-passage pairs are labeled with a cross-encoder and used to train a domain-adapted dense retriever. Experiments show that our approach is more robust than previous methods in target domains while requiring less unlabeled data.
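
A sketch of the three-stage pipeline (query generation, negative mining, cross-encoder labeling) is given below; the checkpoints named are publicly available examples, not necessarily those used by the authors:

```python
# Sketch: query generation -> negative mining -> cross-encoder labeling.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from sentence_transformers import SentenceTransformer, CrossEncoder, util

passages = ["...target-domain passage 1...", "...target-domain passage 2..."]

# 1) Generate a relevant query for each passage.
tok = AutoTokenizer.from_pretrained("BeIR/query-gen-msmarco-t5-base-v1")
gen = AutoModelForSeq2SeqLM.from_pretrained("BeIR/query-gen-msmarco-t5-base-v1")
queries = []
for p in passages:
    ids = tok(p, return_tensors="pt", truncation=True).input_ids
    out = gen.generate(ids, max_length=32, do_sample=True, top_p=0.95)
    queries.append(tok.decode(out[0], skip_special_tokens=True))

# 2) Mine hard negatives with an off-the-shelf retriever.
retriever = SentenceTransformer("msmarco-distilbert-base-v4")
p_emb = retriever.encode(passages, convert_to_tensor=True)
q_emb = retriever.encode(queries, convert_to_tensor=True)
hits = util.semantic_search(q_emb, p_emb, top_k=5)  # top hits other than the source passage

# 3) Label query-passage pairs with a cross-encoder before retriever training.
ce = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = ce.predict([(q, p) for q, p in zip(queries, passages)])
```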

Keywords: dense retrieval, query generation, unsupervised training, text retrieval

Procedia PDF Downloads 57
4109 Numerical Prediction of Effects of Location of Across-the-Width Laminations on Tensile Properties of Rectangular Wires

Authors: Kazeem K. Adewole

Abstract:

This paper presents a finite element (FE) numerical investigation of the effects of the location of across-the-width laminations on the tensile properties of rectangular wires for civil engineering applications. FE analysis revealed that the presence of a mid-thickness across-the-width lamination changes the cup and cone fracture shape exhibited by the lamination-free wire to a V-shaped fracture with an opening at the bottom (pointed) end of the V at the location of the lamination. FE analysis also revealed that the presence of a mid-width across-the-thickness lamination changes the cup and cone fracture shape of the lamination-free wire, which has no opening, to a cup and cone fracture with an opening at the location of the lamination. The FE fracture behaviour prediction approach presented in this work serves as a tool for the failure analysis of wires with laminations at different orientations, which cannot be conducted experimentally.

Keywords: across-the-width lamination, tensile properties, lamination location, wire

Procedia PDF Downloads 465
4108 Additive Weibull Model Using Warranty Claims and Finite Element Fatigue Analysis

Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate

Abstract:

This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) data. Warranty data for any product gives insight into its underlying issues, and reliability engineers often use it to build prediction models that forecast the failure rate of parts. There is, however, one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of a product's total lifetime, and most of the time they cover only the infant-mortality and useful-life zones of the bathtub curve. Predicting from warranty data alone in these cases does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of its lifetime. For better predictability of the failure rate, one needs to explore the failure-rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples until failure, but FEA fatigue analysis can provide the failure-rate behavior of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model that makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part, one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The two Weibull models' parameters are estimated separately and combined to form the proposed Additive Weibull Model for prediction.
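
The additive structure can be sketched in a few lines: the total hazard is the sum of a warranty-fitted Weibull hazard and an FEA-fatigue-fitted Weibull hazard. The parameter values below are illustrative, not the study's estimates:

```python
# Illustrative additive Weibull model: total hazard = warranty-data Weibull
# hazard (early life) + FEA-fatigue Weibull hazard (wear-out).
import numpy as np

def weibull_hazard(t, beta, eta):
    # h(t) = (beta/eta) * (t/eta)**(beta - 1)
    return (beta / eta) * (t / eta) ** (beta - 1)

t = np.linspace(1, 20000, 500)                       # operating hours
h_warranty = weibull_hazard(t, beta=0.8, eta=9000)   # beta < 1: infant mortality
h_fatigue = weibull_hazard(t, beta=3.5, eta=15000)   # beta > 1: wear-out
h_total = h_warranty + h_fatigue                     # additive Weibull hazard

# Reliability from the cumulative hazard:
# R(t) = exp(-(t/eta1)**beta1 - (t/eta2)**beta2)
R = np.exp(-(t / 9000) ** 0.8 - (t / 15000) ** 3.5)
```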

Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull

Procedia PDF Downloads 61
4107 Detection of Nanotoxic Material Using DNA Based QCM

Authors: Juneseok You, Chanho Park, Kuehwan Jang, Sungsoo Na

Abstract:

Sensing of nanotoxic materials is important, as their engineering applications have grown rapidly in recent years and such materials can harmfully influence human health and the environment. In the current study we report quartz crystal microbalance (QCM)-based, in situ and real-time sensing of a nanotoxic material by frequency shift. We propose the in situ detection of the nanotoxic material zinc oxide using a QCM functionalized with a target-specific DNA. Since the mass of a target material is comparable to that of an atom, the mass change caused by target binding to the DNA on the quartz electrode is so small that it is practically difficult to detect the ions at low concentrations. In our study, we have demonstrated the in situ and fast detection of zinc oxide using the QCM. The detection signal derived from DNA hybridization at the quartz electrode. The results suggest that QCM-based detection opens a new avenue for the development of a practical water-testing sensor.
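
QCM frequency shifts are commonly converted to adsorbed mass via the Sauerbrey relation; the sketch below assumes a 5 MHz AT-cut crystal, which is an assumption on our part since the abstract does not state the crystal used:

```python
# Sauerbrey relation: delta_f = -C_f * delta_m / A.
# Standard AT-cut quartz constants; the 5 MHz crystal is an assumption.
rho_q = 2.648           # quartz density, g/cm^3
mu_q = 2.947e11         # quartz shear modulus, g/(cm*s^2)
f0 = 5.0e6              # fundamental resonance frequency, Hz

C_f = 2 * f0**2 / (rho_q * mu_q) ** 0.5    # ~5.66e7 Hz*cm^2/g (56.6 Hz per ug/cm^2)
delta_f = -12.0                            # measured frequency shift, Hz (illustrative)
delta_m_per_area = -delta_f / C_f          # adsorbed mass per unit area, g/cm^2
print(f"{delta_m_per_area * 1e6:.4f} ug/cm^2 bound to the DNA-functionalized electrode")
```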

Keywords: nanotoxic material, QCM, frequency, in situ sensing

Procedia PDF Downloads 413
4106 Promotion of Lipid Synthesis in Microalgae by Microfluidic-Assisted Membrane Distortion

Authors: Seul Ki Min, Gwang Heum Yoon, Jung Hyun Joo, Hwa Sung Shin

Abstract:

Cellular membrane distortion is known to alter intracellular signaling. However, progress in relevant studies has been difficult because there are no facilities that can control membrane distortion finely. In this study, we developed a microfluidic device that can apply mechanical stress to the cell membrane of Chlamydomonas reinhardtii through the regulated height of its channels, and the physiological changes of cells cultured in the device were analyzed. Excessive calcium ion influx into the cytoplasm was induced by the mechanical stress. The results revealed that compressed cells had up-regulated Mat3 mRNA, which regulates cell size and the cell cycle, resulting in a prolonged G1 phase. Additionally, TAG, which is used for the production of biodiesel, increased rapidly from 4 h after compression. Taken together, membrane distortion can be considered an attractive inducer for biofuel production.

Keywords: mechanical stress, membrane distortion, Chlamydomonas reinhardtii, deflagellation, cell cycle, lipid metabolism

Procedia PDF Downloads 364
4105 COVID-19 Analysis with Deep Learning Model Using Chest X-Rays Images

Authors: Uma Maheshwari V., Rajanikanth Aluvalu, Kumar Gautam

Abstract:

COVID-19 is a highly contagious viral infection with major worldwide health implications, and the global economy has suffered as a result. The spread of this pandemic disease can be slowed if positive patients are found early. COVID-19 prediction is beneficial for identifying patients at risk. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful in addressing the scarcity of doctors and clinicians in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6000 X-ray scan images from various sources and split them into two categories: normal and COVID-impacted. Our model examines chest X-ray images to recognize such patients. Because X-rays are commonly available and affordable, our findings show that X-ray analysis is effective in COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images.
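
A minimal sketch of a deep CNN for this two-class task is shown below; the layer sizes and 224x224 grayscale input are assumptions, as the abstract does not specify the architecture:

```python
# Minimal deep CNN sketch for normal vs. COVID-impacted chest X-rays,
# assuming images resized to 224x224 grayscale.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # P(COVID-impacted)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```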

Keywords: deep CNN, COVID-19 analysis, feature extraction, feature map, accuracy

Procedia PDF Downloads 68
4104 The Effort of Nutrition Status Improvement through Partnership with Early Age Education Institutions in an Urban Region, City of Semarang, Indonesia

Authors: Oktia Woro Kasmini Handayani, Sri Ratna Rahayu, Efa Nugroho, Bertakalswa Hermawati

Abstract:

In Indonesia, from 2007 to 2013, the prevalence of overnutrition in children under five years of age and of school age tended to increase. The Clean and Healthy Life Behavior of school children supporting nutrition status is still below the determined target. On the other hand, the school is an ideal place to educate and form health behavior, which should be initiated as early as possible (Early Age Education/PAUD level). The objective of this research was to find out the effectiveness of an education model delivered through partnership with school institutions in an urban region of the city of Semarang, Central Java Province, Indonesia. The research used a quantitative approach supported by qualitative data. The population consisted of all mothers having school children aged 3-5 years within the research region; the sampling technique was purposive sampling, with 237 mothers in total. The research instruments were a Clean and Healthy Life Behavior evaluation questionnaire and a video used as the education medium. The research used an experimental design. Data analysis used the effectiveness criteria from Sugiyono and a paired-samples t-test. Optimization of the education model in the effort to improve nutrition status showed a t-test significance of < 0.05 (there was a significant effect before and after the model intervention), with an effectiveness test result of 79% (effective), but still below the expected target of 80%. The education model needs to be utilized and its implementation optimized so that the expected target is reached.

Keywords: nutrition status, early age education, clean and healthy life behavior, education model

Procedia PDF Downloads 373
4103 Grammar as a Logic of Labeling: A Computer Model

Authors: Jacques Lamarche, Juhani Dickinson

Abstract:

This paper introduces a computational model of a Grammar as Logic of Labeling (GLL), where the lexical primitives of morphosyntax are phonological matrixes, the forms of words, understood as labels that apply to realities (or targets) assumed to be outside of grammar altogether. The hypothesis is that even though a lexical label relates to its target arbitrarily, this label, within a complex (constituent) label, is part of a labeling pattern which, depending on its value (i.e., N, V, Adj, etc.), imposes language-specific restrictions on what it targets outside of grammar (in the world/semantics or in cognitive knowledge). Lexical forms categorized as nouns, verbs, adjectives, etc., are effectively targets of labeling patterns in use. The paper illustrates GLL through a computer model of basic patterns in English NPs. A constituent label is a binary object that encodes: i) alignment of input forms, so that labels occurring at different points in time are understood as applying at once; ii) endocentric structuring - every grammatical constituent has a head label that determines the target of the constituent, and a limiter label (the non-head) that restricts this target. The N and A values are restricted to limiter labels, the two differing in terms of alignment with a head. Consider the head-initial DP 'the dog': the label 'dog' gets an N value because it is a limiter that is evenly aligned with the head 'the', restricting the application of the DP. Adapting a traditional analysis of 'the' to GLL (apply the label to something familiar), the DP targets and identifies one reality familiar to the participants by applying to it the label 'dog' (singular). Consider next the DP 'the large dog': 'large dog' is nominal by even alignment with 'the', as before, and since 'dog' is the head of the (head-final) 'large dog', it is also nominal. The label 'large', however, is adjectival by narrow alignment with the head 'dog': it does not target the head but targets a property of what 'dog' applies to (a property or attribute value). In other words, the internal composition of constituents determines whether a form targets a property or a reality: 'large' and 'dog' happen to be valid targets to realize this constituent. In the presentation, the computer model of the analysis derives the eight possible sequences of grammatical values with three labels after the determiner (the x y z): 1- D [ N [ N N ]]; 2- D [ A [ N N ] ]; 3- D [ N [ A N ] ]; 4- D [ A [ A N ] ]; 5- D [ [ N N ] N ]; 6- D [ [ A N ] N ]; 7- D [ [ N A ] N ]; 8- D [ [ Adv A ] N ]. This approach suggests that a computer model of these grammatical patterns could be used to construct ontologies/knowledge using speakers' judgments about the validity of lexical meaning in grammatical patterns.

Keywords: syntactic theory, computational linguistics, logic and grammar, semantics, knowledge and grammar

Procedia PDF Downloads 20
4102 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process

Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek

Abstract:

As big data analysis becomes increasingly important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze patterns that affect the final test results using die-map-based clustering. Many studies have been conducted using die data from the semiconductor test process. However, such analysis has limitations, as the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than the existing die data. The study consists of three phases. In the first phase, a die map is created from the fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. The third phase finds the patterns that affect the final test results. Finally, the proposed three steps were applied to actual industrial data, and the experimental results showed the potential for field application.
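
A compact sketch of the three phases on synthetic data is given below; the grid size, cluster count, and use of k-means are illustrative assumptions, not the authors' exact choices:

```python
# Sketch of the three-phase framework: build per-die fail-bit maps,
# cluster them, then compare clusters against final-test outcomes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_dies, grid = 500, (8, 8)                 # fail-bit counts per sub-area of each die
die_maps = rng.poisson(lam=2.0, size=(n_dies, *grid))

# Phases 1-2: flatten each die map into a feature vector and cluster.
features = die_maps.reshape(n_dies, -1).astype(float)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Phase 3: look for clusters enriched in final-test failures.
final_test_fail = rng.random(n_dies) < 0.1   # placeholder outcomes
for k in range(4):
    rate = final_test_fail[labels == k].mean()
    print(f"cluster {k}: final-test fail rate {rate:.2%}")
```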

Keywords: die-map clustering, feature extraction, pattern recognition, semiconductor manufacturing process

Procedia PDF Downloads 391
4101 Application of Artificial Neural Network for Prediction of Load-Haul-Dump Machine Performance Characteristics

Authors: J. Balaraju, M. Govinda Raj, C. S. N. Murthy

Abstract:

Every industry constantly looks for ways to enhance its day-to-day production and productivity. This is possible only by maintaining personnel and machinery at an adequate level. Prediction of performance characteristics plays an important role in the performance evaluation of equipment. Analytical and statistical approaches take considerably more time to solve complex problems such as performance estimation compared with software-based approaches. Keeping this in view, the present study deals with Artificial Neural Network (ANN) modelling of a Load-Haul-Dump (LHD) machine to predict performance characteristics such as reliability, availability and preventive maintenance (PM). A feed-forward back-propagation ANN trained with the Levenberg-Marquardt (LM) algorithm was used. The performance characteristics were computed using the Isograph Reliability Workbench 13.0 software. These computed values were validated against the predicted output responses of the ANN models. Further, recommendations are given to the industry, based on the analysis performed, for improving equipment performance.

Keywords: load-haul-dump, LHD, artificial neural network, ANN, performance, reliability, availability, preventive maintenance

Procedia PDF Downloads 135
4100 Translation of Culture-Specific References in the Turkish Translation of Shakespeare's Macbeth

Authors: Feride Sumbul

Abstract:

Drama is a literary genre that mirrors people and society and transfers human nature and life to the reader or the audience within its own socio-cultural structure. Each play takes on a new reality in the time and culture of its staging, and each performance actually brings a new interpretation to the play. Similarly, each translation adds a new meaning to the source text. In other words, the translated theatrical text transcends the boundaries of its language and culture and finds a new interpretation. Thus, the translation of drama takes place as a transfer from one culture to another, as a form of cross-cultural communication. In this context, translating culture-specific references plays a key role in reflecting the cultural aspects of a target society. This study aims to explore the use of Venuti's translation principles of domestication and foreignization in the transfer of culture-specific references in the Turkish translation of Shakespeare's Macbeth. Macbeth is compared with its Turkish version in terms of the transfer of culture-specific references such as religious, witchcraft-related, and mythological references, which have no equivalent in the target language and culture. To evaluate these principles of Venuti, Davies's translation strategies are also applied. For the most part, the translator uses Davies's strategy of 'addition', adding extra information in the notes. For instance, rather than finding Turkish renderings for them, the translator mostly chooses to transfer witchcraft references by retaining them in the target text, while adding extra information about the references in the notes. Therefore, the translator Nutku mostly uses Venuti's translation principle of foreignization, preserving the foreignness of the theatrical text.

Keywords: drama translation, theatrical texts, culture specific references, Macbeth

Procedia PDF Downloads 148
4099 Clinical Prediction Rules for Using Open Kinetic Chain Exercise in Treatment of Knee Osteoarthritis

Authors: Mohamed Aly, Aliaa Rehan Youssef, Emad Sawerees, Mounir Guirgis

Abstract:

Relevance: Osteoarthritis (OA) is the most common degenerative disease seen in all populations. It causes disability and a substantial socioeconomic burden. Evidence supports exercise as the most effective conservative treatment for patients with OA. Therapists' experience and clinical judgment play a major role in exercise prescription, and scientific evidence in this regard is lacking. The development of clinical prediction rules to identify patients who are most likely to benefit from exercise may help solve this dilemma. Purpose: This study investigated whether body mass index and functional ability at baseline can predict patients' response to a selected exercise program. Approach: Fifty-six patients, aged 35 to 65 years, completed an exercise program consisting of open kinetic chain strengthening and passive stretching exercises. The program was given for 3 sessions per week, 45 minutes per session, for 6 weeks. Evaluation: At baseline and post treatment, pain severity was assessed using the numerical pain rating scale, whereas functional ability was assessed by the step test (ST), timed up and go test (TUG) and 50 feet timed walk test (50 FTW). After completing the program, a global rate of change (GROC) score greater than 4 was used to categorize patients as successful or non-successful. Thirty-eight patients (68%) had a successful response to the intervention. Logistic regression showed that BMI and the 50 FTW test were the only significant predictors. Based on the results, patients with a BMI less than 34.71 kg/m2 and a 50 FTW time less than 25.64 sec are 68% to 89% more likely to benefit from the exercise program. Conclusions: Clinicians should consider the described strengthening and flexibility exercise program for patients with a BMI less than 34.7 kg/m2 and a 50 FTW faster than 25.6 seconds. The validity of these predictors should be investigated for other exercise programs.
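
A sketch of how such a rule can be derived with logistic regression is shown below, using synthetic placeholder data; the BMI and 50 FTW predictors and the GROC > 4 success label follow the abstract:

```python
# Sketch of deriving a clinical prediction rule via logistic regression.
# The data here are synthetic placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
bmi = rng.normal(32, 4, 56)
ftw50 = rng.normal(26, 3, 56)                        # 50 FTW time, seconds
# Synthetic "GROC > 4" outcome loosely mirroring the reported thresholds:
success = ((bmi < 34.71) & (ftw50 < 25.64)) | (rng.random(56) < 0.2)

X = np.column_stack([bmi, ftw50])
clf = LogisticRegression().fit(X, success)
print("coefficients (BMI, 50 FTW):", clf.coef_[0])
print("P(success | BMI=30, 50FTW=24s):", clf.predict_proba([[30, 24]])[0, 1])
```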

Keywords: clinical prediction rule, knee osteoarthritis, physical therapy exercises, validity

Procedia PDF Downloads 404
4098 Effects of Performance Appraisal on Employee Productivity in Yobe State University, Damaturu (A Case Study of the Department of Islamic Studies)

Authors: Adam Abdullahi Mohammed

Abstract:

Performance appraisal is an assessment carried out to ascertain a worker's level of productivity in a given period of time. Appraisal systems fall into two categories, traditional methods and modern methods, with the latter placing emphasis on the evaluation of work results. In the traditional approach to staff appraisal, which puts more emphasis on individual traits, supervisors are required to assess employees through interactions based on what they achieved with reference to job descriptions, as well as to rate them using questionnaires without staff interaction. These methods are not effective because staff may give biased information. The study will attempt to assess the effect of performance appraisal on employee productivity at Yobe State University, Damaturu. It is aimed at assessing the process, methods, and objectives of performance appraisal and its feedback, to know how they affect the success of the appraisal, its results, and employee productivity. In this study, a quantitative research method is adopted for collecting and analyzing data, with a questionnaire as the data collection instrument. As this is a case study, the target population is the staff of the Department of Islamic Studies. The research employs a census sampling technique, in which all subjects in the target population are given a chance to participate in the study. This sampling method was chosen because the entire target population is considered researchable. The expected finding is that staff performance appraisal in the Department of Islamic Studies affects employee productivity; that is to say, if it is given due consideration and acted upon, it will improve employee productivity.

Keywords: performance appraisal, employee productivity, Yobe State University, appraisal feedback

Procedia PDF Downloads 58
4097 The Application of Artificial Neural Networks for the Performance Prediction of Evacuated Tube Solar Air Collector with Phase Change Material

Authors: Sukhbir Singh

Abstract:

This paper describes the modeling of a novel solar air collector (NSAC) system using an artificial neural network (ANN) model. The objective of the study is to demonstrate the application of the ANN model to predict the performance of the NSAC with acetamide as a phase change material (PCM) storage. The input data set consists of time, solar intensity and ambient temperature, whereas the outlet air temperature of the NSAC was considered as the output. Experiments were conducted between 9.00 and 24.00 h in June and July 2014 under the prevailing atmospheric conditions of Kurukshetra (a city in India). The experimental results were then utilized to train a back-propagation neural network (BPNN) to predict the outlet air temperature of the NSAC. The results of the proposed algorithm show that the BPNN is an effective tool for the prediction of responses. The BPNN predicted results are in 99% agreement with the experimental results.

Keywords: evacuated tube solar air collector, artificial neural network, phase change material, solar air collector

Procedia PDF Downloads 112
4096 Effect of Accelerated Ions Interacting with Al Targets Using a Plasma Focus Device

Authors: Morteza Habibi, Reza Amrollahi

Abstract:

Aluminum targets were placed at the central part of a Filippov-type (90 kJ) plasma focus cathode. These targets were exposed to perpendicular incidence of a dense plasma stream. Melt layer erosion by melt motion, surface smoothing, and bubble formation were among the effects produced under different working conditions. The microhardness of the surface layer tends to decrease, particularly in the central region of the sample, where the damage is most intense. The most pronounced melt motion is registered in the region of the maximum pressure gradient, and the etching of the aluminum surface is noticeable in the central part of the target. A crater with a maximum depth of 200 µm and a diameter of about 8.5 mm is observed close to the mountains. Adding a krypton admixture to the deuterium gas leads to collapsing bubbles and greater surface damage.

Keywords: Filippov-type plasma focus, Al target interaction, bubbling effect, melt layer motion, surface smoothing

Procedia PDF Downloads 525
4095 The Theory behind Logistic Regression

Authors: Jan Henrik Wosnitza

Abstract:

Logistic regression has developed into a standard approach for estimating conditional probabilities in a wide range of applications, including credit risk prediction. The article at hand contributes to the current literature on logistic regression in four ways: First, it is demonstrated that binary logistic regression automatically meets its model assumptions under very general conditions. This result explains, at least in part, logistic regression's popularity. Second, the requirement of homoscedasticity in the context of binary logistic regression is theoretically substantiated. The variances among the groups of defaulted and non-defaulted obligors have to be the same across the levels of the aggregated default indicators in order to achieve linear logits. Third, this article sheds some light on the question of why nonlinear logits might be superior to linear logits in the case of a small amount of data. Fourth, an innovative methodology for estimating correlations between obligor-specific log-odds is proposed. In order to crystallize the key ideas, this paper focuses on the example of credit risk prediction. However, the results presented in this paper can easily be transferred to any other field of application.
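
The linear-logit requirement can be checked empirically by binning an aggregated indicator and inspecting the log-odds per bin; a small simulation sketch, under an assumed true logistic model:

```python
# Empirical check of logit linearity: bin obligors by an aggregated score,
# compute log-odds of default per bin, and inspect whether they fall on a line.
import numpy as np

rng = np.random.default_rng(0)
score = rng.normal(size=100_000)                 # aggregated default indicator
p = 1 / (1 + np.exp(-(-2.0 + 1.5 * score)))      # true model with a linear logit
default = rng.random(score.size) < p

bins = np.quantile(score, np.linspace(0, 1, 21))
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (score >= lo) & (score < hi)
    rate = default[mask].mean()
    logit = np.log(rate / (1 - rate))            # should increase linearly with score
    print(f"score bin [{lo:+.2f}, {hi:+.2f}): log-odds {logit:+.2f}")
```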

Keywords: correlation, credit risk estimation, default correlation, homoscedasticity, logistic regression, nonlinear logistic regression

Procedia PDF Downloads 412
4094 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
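
A generic sketch of the adaptive-retraining idea follows; it is not FreqAI's actual API, and the rolling window, model choice, and range-based outlier rule are simplified placeholders:

```python
# Generic adaptive-retraining sketch (not FreqAI's API): slide a training
# window over time, refuse to predict on parameter-space outliers, retrain,
# and predict the next step.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def rolling_predictions(features, target, train_window=500):
    preds = []
    for t in range(train_window, len(target) - 1):
        X_tr = features[t - train_window:t]
        y_tr = target[t - train_window:t]
        # Parameter-space outlier check: skip points outside the training range.
        x_next = features[t]
        lo, hi = X_tr.min(axis=0), X_tr.max(axis=0)
        if np.any(x_next < lo) or np.any(x_next > hi):
            preds.append(np.nan)             # refuse to predict on outliers
            continue
        model = GradientBoostingRegressor().fit(X_tr, y_tr)
        preds.append(model.predict(x_next.reshape(1, -1))[0])
    return np.array(preds)
```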

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 78
4093 Runoff Simulation by Using WetSpa Model in Garmabrood Watershed of Mazandaran Province, Iran

Authors: Mohammad Reza Dahmardeh Ghaleno, Mohammad Nohtani, Saeedeh Khaledi

Abstract:

Hydrological models are applied to simulate and predict floods in watersheds. WetSpa is a distributed, continuous and physically based model with a daily or hourly time step that describes the precipitation, runoff and evapotranspiration processes for both simple and complex contexts. This model uses a modified rational method for runoff calculation. In this model, runoff is routed along the flow path using the diffusion-wave equation, which depends on the slope, velocity and flow route characteristics. The Garmabrood watershed is located in Mazandaran province in Iran, between coordinates 53° 10´ 55" to 53° 38´ 20" E and 36° 06´ 45" to 36° 25´ 30" N. The area of the catchment is about 1133 km2, elevations range from 213 m at the outlet to 3136 m, and the average slope is 25.77%. Results of the simulations show a good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient for the calibration period, the model estimated the daily hydrographs and the maximum flow rate with accuracies of up to 61% and 83.17%, respectively.
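
The Nash-Sutcliffe efficiency used for calibration can be computed in a few lines; the discharge values in the example are made up:

```python
# Nash-Sutcliffe efficiency:
# NSE = 1 - sum((Q_obs - Q_sim)^2) / sum((Q_obs - mean(Q_obs))^2)
import numpy as np

def nash_sutcliffe(q_obs, q_sim):
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

# Example with made-up daily discharges (m^3/s):
print(nash_sutcliffe([12.0, 30.5, 22.1, 9.8], [10.5, 28.0, 25.0, 11.2]))
```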

Keywords: watershed simulation, WetSpa, runoff, flood prediction

Procedia PDF Downloads 325
4092 Improved Intracellular Protein Degradation System for Rapid Screening and Quantitative Study of Essential Fungal Proteins in Biopharmaceutical Development

Authors: Patarasuda Chaisupa, R. Clay Wright

Abstract:

The selection of appropriate biomolecular targets is a crucial aspect of biopharmaceutical development. Auxin-inducible degron (AID) technology has demonstrated remarkable potential for efficiently and rapidly degrading target proteins, thereby enabling the identification and acquisition of drug targets. The AID system also offers a viable method to deplete specific proteins, particularly in cases where the degradation pathway has not been exploited or where cells adapt to compensate for a mutation or gene knockout. In this study, we have engineered an improved AID system tailored to deplete proteins of interest. The AID construct combines the auxin-responsive E3 ubiquitin ligase binding domain, AFB2, and the substrate degron, IAA17, fused to the target genes. Essential genes of fungi with the lowest percent amino acid similarity to human and plant orthologs, according to the Basic Local Alignment Search Tool (BLAST), were cloned into the AID construct in S. cerevisiae (AID-tagged strains) using a modular yeast cloning toolkit for multipart assembly and direct genetic modification. Each E3 ubiquitin ligase and IAA17 degron was fused to a fluorescent protein, allowing real-time monitoring of protein levels in response to different auxin doses via cytometry. Our AID system exhibited high sensitivity, with an EC50 value of 0.040 µM (SE = 0.016) for AFB2, enabling the specific promotion of IAA17::target protein degradation. Furthermore, we demonstrate how this improved AID system enhances quantitative functional studies of various proteins in fungi. The advancements made in auxin-inducible protein degradation in this study offer a powerful approach for investigating the viability of critical target proteins in fungi, screening protein targets for drugs, and regulating intracellular protein abundance, thus revolutionizing the study of protein function underlying a diverse range of biological processes.
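
EC50 values of this kind are typically obtained by fitting a Hill-type dose-response curve; below is a sketch with synthetic cytometry data (the fitting approach is our assumption, as the abstract does not state the method used):

```python
# Sketch of estimating EC50 from auxin dose-response cytometry data by
# fitting a Hill equation; doses and fluorescence values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, r0, rinf, ec50, n):
    # r0: response at zero dose; rinf: response at saturating dose
    return rinf + (r0 - rinf) / (1 + (dose / ec50) ** n)

dose = np.array([0.001, 0.01, 0.04, 0.1, 1.0, 10.0])     # auxin, uM
fluor = np.array([1.00, 0.85, 0.52, 0.25, 0.08, 0.05])   # normalized reporter level

# Degradation lowers fluorescence, so the curve decreases with dose.
popt, pcov = curve_fit(hill, dose, fluor, p0=[1.0, 0.05, 0.04, 1.0])
print("EC50 estimate (uM):", popt[2], "+/-", np.sqrt(np.diag(pcov))[2])
```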

Keywords: synthetic biology, bioengineering, molecular biology, biotechnology

Procedia PDF Downloads 77
4091 Virtual Metrology for Copper Clad Laminate Manufacturing

Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho

Abstract:

In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict properties of a wafer based on machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting the quality of the process, which allows improving the process quality in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of a printed circuit board (PCB), which is used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important process among the three, puts resin on glass cloth, heats it in a drying oven, and produces prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy cost in terms of time and money, which makes the process a good candidate for VM application. We developed prediction models for the three quality factors T/W, M/V, and G/T, respectively, from process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with high enough accuracy. They also provided us with information on 'important' predictor variables, some of which the process engineers had already been aware of and the rest of which they had not. The engineers were quite excited to find the new insights that the models revealed and set out to do further analysis on them to derive process control implications. T/W, however, could not be predicted with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; thus, an effort has to be made to find other factors, not currently monitored, in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models allow one to reduce the cost associated with actual metrology as well as reveal some insights into the factors affecting the important quality factors and into the level of our less-than-perfect understanding of the treating process.

Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology

Procedia PDF Downloads 345
4090 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved post-sticking. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical operations (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a frequency similar to that of the pre-determined signature. The matrix is then correlated with the pre-determined stuck-pipe signature for this field, in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects relevant features from the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data as a live stream, mimicking actual drilling conditions, for an onshore oil field. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck pipe problem requires a method to capture geological, geophysical and drilling data, and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
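
A sketch of the real-time component follows, with Pearson correlation standing in for the CFS-based correlation step; the signature file and threshold value are hypothetical:

```python
# Sketch of the real-time component: correlate a live surface-data window
# against the field's pre-determined stuck-pipe signature and alert when
# the correlation passes a user threshold. Signature construction
# (flattening/stacking) is assumed done offline.
import numpy as np

def stuck_pipe_probability(live_window, signature):
    # Pearson correlation of the live window (same length as the signature),
    # mapped to [0, 1] as a crude probability proxy.
    r = np.corrcoef(live_window, signature)[0, 1]
    return max(0.0, r)

signature = np.load("formation_A_signature.npy")   # hypothetical offline artifact
threshold = 0.7                                    # user-defined alert level

# For each new WITSML sample: append it to a buffer, then
# if stuck_pipe_probability(buffer[-len(signature):], signature) > threshold:
#     issue an alert and run the cause-analysis component.
```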

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 211
4089 Cooling Profile Analysis of Hot Strip Coil Using Finite Volume Method

Authors: Subhamita Chakraborty, Shubhabrata Datta, Sujay Kumar Mukherjea, Partha Protim Chattopadhyay

Abstract:

The manufacture of multiphase high-strength steel in a hot strip mill has drawn significant attention due to the possibility of forming low-temperature transformation products of austenite under continuous cooling conditions. In such an endeavor, reliable prediction of the temperature profile of the hot strip coil is essential in order to assess the evolution of microstructure at different locations of the coil, on the basis of the corresponding Continuous Cooling Transformation (CCT) diagram. The temperature distribution profile of the hot strip coil has been determined using the finite volume method (FVM) vis-à-vis the finite difference method (FDM). It has been demonstrated that FVM offers greater computational reliability in the estimation of contact pressure distribution, and hence of temperature distribution, for curved and irregular profiles, owing to the flexibility in the selection of grid geometry and discrete point positions. Moreover, use of the finite volume concept allows enforcing the conservation of mass, momentum and energy, leading to enhanced prediction accuracy.
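
The conservation property that favors FVM can be seen in a minimal 1-D finite-volume sketch of transient cooling, where the flux leaving one control volume exactly enters its neighbor; material properties and boundary handling here are generic illustrations, not the study's setup:

```python
# 1-D finite-volume sketch of transient cooling: interface fluxes are
# applied with opposite signs to neighboring cells, so energy is conserved
# by construction. Properties are generic steel-like values.
import numpy as np

n, L = 50, 0.01                  # cells, strip half-thickness (m)
dx = L / n
alpha = 1.2e-5                   # thermal diffusivity (m^2/s)
dt = 0.4 * dx**2 / alpha         # stable explicit time step (Fourier number 0.4)
T = np.full(n, 900.0)            # initial coil temperature (deg C)
T_env = 30.0
h_edge = 2 * alpha / dx**2       # simplistic convective coupling at the surface

for _ in range(2000):
    flux = alpha * (T[1:] - T[:-1]) / dx       # interface fluxes (precomputed)
    T[:-1] += dt * flux / dx                   # flux into the left cell...
    T[1:] -= dt * flux / dx                    # ...exactly leaves the right cell
    T[-1] += dt * h_edge * (T_env - T[-1])     # surface cooling (illustrative)
```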

Keywords: simulation, modeling, thermal analysis, coil cooling, contact pressure, finite volume method

Procedia PDF Downloads 464
4088 Optimal Beam for Accelerator Driven Systems

Authors: M. Paraipan, V. M. Javadova, S. I. Tyutyunnikov

Abstract:

The concept of an energy amplifier, or accelerator driven system (ADS), involves the use of a particle accelerator coupled with a nuclear reactor. The accelerated particle beam generates a supplementary source of neutrons, which allows subcritical operation of the reactor and consequently safe exploitation. The harder neutron spectrum realized ensures better incineration of the actinides. The almost generalized opinion is that the optimal beam for ADS is protons with an energy around 1 GeV (gigaelectronvolt). In the present work, a systematic analysis of the energy gain is performed for proton beams with energies from 0.5 to 3 GeV and for ion beams from deuterons to neon with energies between 0.25 and 2 AGeV (gigaelectronvolt per nucleon). The target is an assembly of metallic U-Pu-Zr fuel rods in a bath of lead-bismuth eutectic coolant. The rod length is 150 cm. A beryllium converter of length 110 cm is used in order to maximize the energy released in the target. The case of a linear accelerator is considered, with a beam intensity of 1.25‧10¹⁶ p/s and a total accelerator efficiency of 0.18 for the proton beam. These values are planned to be achieved in the European Spallation Source project. The energy gain G is calculated as the ratio of the energy released in the target to the energy spent to accelerate the beam. The energy released is obtained through simulation with the code Geant4. The energy spent is calculated by scaling from the data on the accelerator efficiency for the reference particle (proton). The analysis concerns the G values, the net power produced, the accelerator length, and the period between refuelings. The optimal energy for protons is 1.5 GeV. At this energy, G reaches a plateau around a value of 8, with a net power production of 120 MW (megawatt). Starting with alpha particles, ion beams have a higher G than 1.5 GeV protons. A beam of 0.25 AGeV ⁷Li achieves the same net power production as 1.5 GeV protons, has a G of 15, and needs an accelerator 2.6 times shorter than that for protons, representing the best solution for ADS. Beams of ¹⁶O or ²⁰Ne with energy 0.75 AGeV, accelerated in an accelerator of the same length as for 1.5 GeV protons, produce approximately 900 MW net power, with a gain of 23-25. The study of the evolution of the isotopic composition during irradiation shows that an increase in power production diminishes the period between refuelings. For a net power production of 120 MW, the target can be irradiated for approximately 5000 days without refueling, but only 600 days when the net power reaches 1 GW (gigawatt).
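
The headline numbers for the 1.5 GeV proton case can be reproduced from the quantities given above, assuming G is referenced to the wall-plug energy (beam energy divided by the 0.18 efficiency):

```python
# Reproducing the 1.5 GeV proton case from the abstract's figures.
e = 1.602e-19            # J per eV
intensity = 1.25e16      # protons per second
E_beam = 1.5e9           # eV per proton
efficiency = 0.18        # total accelerator efficiency

beam_power = intensity * E_beam * e      # ~3.0 MW delivered on target
wall_plug = beam_power / efficiency      # ~16.7 MW drawn to run the accelerator
G = 8                                    # energy gain at the plateau
released = G * wall_plug                 # ~134 MW produced in the target
net = released - wall_plug               # ~117 MW, consistent with the ~120 MW quoted
print(f"beam {beam_power/1e6:.1f} MW, net {net/1e6:.0f} MW")
```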

Keywords: accelerator driven system, ion beam, electrical power, energy gain

Procedia PDF Downloads 129
4087 Artificial Neural Network Based Approach in Prediction of Potential Water Pollution Across Different Land-Use Patterns

Authors: M.Rüştü Karaman, İsmail İşeri, Kadir Saltalı, A.Reşit Brohi, Ayhan Horuz, Mümin Dizman

Abstract:

Considerable attention has recently been given to the environmental hazards caused by agricultural chemicals such as excess fertilizers. In this study, a neural network approach was investigated for the prediction of potential nitrate pollution across different land-use patterns, using a feed-forward multilayered artificial neural network (ANN) computer model with proper training. Periodic concentrations of some anions, especially nitrate (NO3-), and cations were detected in drainage waters collected from drain pipes placed in an irrigated tomato field, an unirrigated wheat field, fallow land and pasture land. Soil samples were collected from the irrigated tomato field and the unirrigated wheat field on a grid system with 20 m x 20 m intervals. Site-specific nitrate concentrations in the soil samples were measured for ANN-based simulation of the nitrate leaching potential of the land profiles. In the application of the ANN model, a multilayered feed-forward network was evaluated, and data sets for training, validation and testing containing the measured soil nitrate values were estimated based on spatial variability. Based on the testing values, the optimal structure was 2-15-1 (R2 = 0.96, P < 0.01) for the unirrigated field and 2-10-1 (R2 = 0.96, P < 0.01) for the irrigated field. The results showed that the ANN model can be successfully used in the prediction of potential nitrate leaching levels under different land-use patterns. However, for the most suitable results, the model should be calibrated by training with different network structures depending on site-specific soil parameters and varied agricultural management practices.

Keywords: artificial intelligence, ANN, drainage water, nitrate pollution

Procedia PDF Downloads 296
4086 The Use of Semantic Mapping Technique When Teaching English Vocabulary at Saudi Schools

Authors: Mohammed Hassan Alshaikhi

Abstract:

Vocabulary is an essential factor in learning and mastering any language, helping learners communicate with others and be understood. The aim of this study was to examine whether the semantic mapping technique is helpful in improving students' English vocabulary learning compared with the traditional technique. The students' ages were between 11 and 13 years. In total, 60 students participated in this study: 30 students in the treatment group (target vocabulary items taught with semantic mapping) and 30 students in the control group (target vocabulary items taught with a traditional technique). A t-test was applied to the results of a pre-test and a post-test in order to examine the outcomes of using semantic mapping when teaching vocabulary. The results showed that vocabulary mastery increased more in the treatment group than in the control group.

Keywords: English language, learning vocabulary, Saudi teachers, semantic mapping, teaching vocabulary strategies

Procedia PDF Downloads 237
4085 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue of origin of these DNA fragments from plasma can result in more accurate and faster disease diagnosis and more precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect the cell type of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to substantial cost and time. To overcome these limitations, the idea of predicting OCRs from WGS data is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, as an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates signal processing with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The overlap between the predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As expected, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and human gene TSS regions obtained from refTSS, finding concordances of around 52.04% with all genes and ~78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to several confounding factors, such as technical and biological variation. Although this approach is in its infancy, there has already been an attempt to apply it, leading to a tool named OCRDetector, which has some restrictions, such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the use of multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, resulting in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
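
A simplified sketch of the depth-based pipeline is given below; spectral clustering stands in for the paper's linear-programming graph cut, and the windowing and feature choices are assumptions:

```python
# Simplified sketch: normalize binned coverage, take DFT magnitudes as
# features, build a correlation graph, and cluster windows into putative
# OCR+/OCR- groups. Spectral clustering replaces the paper's LP graph cut.
import numpy as np
from sklearn.cluster import SpectralClustering

def classify_windows(depth_windows):
    # depth_windows: (n_windows, window_len) local sequencing depth
    norm = depth_windows / depth_windows.mean(axis=1, keepdims=True)
    feats = np.abs(np.fft.rfft(norm, axis=1))      # DFT magnitude features
    corr = np.corrcoef(feats)                      # window-window similarity
    affinity = np.clip(corr, 0, None)              # keep positive correlations
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    return labels                                  # 0/1 = putative OCR-/OCR+

# windows = np.load("cfdna_depth_windows.npy")     # hypothetical input
# print(classify_windows(windows))
```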

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 134
4084 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal rise in water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have drawbacks. For instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. To do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights in combining the different forecast models. Third, we use these ensembles and compare them with several existing models from the literature to forecast the storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; thus we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecasts during the testing period and on the magnitude and timing of the forecasted peak surge compared with the actual peak and its timing. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
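
The correlation- and standard-deviation-weighted ensembles, alongside the simple-average benchmark, can be sketched as follows (the weighting formulas are plausible readings of the description, not the authors' exact definitions):

```python
# Sketch of correlation- and error-spread-weighted ensembles plus the
# simple-average benchmark. Rows of `forecasts` are individual model
# forecasts over a training period aligned with `observed`.
import numpy as np

def ensemble_forecasts(forecasts, observed, new_forecasts):
    # Weight each model by its correlation with observations (floored at 0)...
    corr = np.array([max(np.corrcoef(f, observed)[0, 1], 0) for f in forecasts])
    w_corr = corr / corr.sum()
    # ...or by the inverse standard deviation of its errors.
    inv_sd = 1.0 / np.array([np.std(f - observed) for f in forecasts])
    w_sd = inv_sd / inv_sd.sum()
    simple = new_forecasts.mean(axis=0)          # simple-average benchmark
    return w_corr @ new_forecasts, w_sd @ new_forecasts, simple
```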

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 302
4083 Comparison Between Two Techniques (Extended Source to Surface Distance and Field Alignment) of Craniospinal Irradiation (CSI) in the Eclipse Treatment Planning System

Authors: Naima Jannat, Ariful Islam, Sharafat Hossain

Abstract:

Because it involves a large target volume, craniospinal irradiation makes it challenging to achieve a uniform dose, and it requires different isocenters. The isocentric junction needs to be shifted after every five fractions to reduce the possibility of hot and cold spots. This study aims to evaluate Planning Target Volume coverage and Organ at Risk sparing between the two techniques, and shows that the Field Alignment technique does not need replanning and resetting. A planning method for craniospinal irradiation in the Eclipse treatment planning system was developed for both the Field Alignment and Extended Source to Surface Distance techniques, with 36 Gy in 20 fractions at 1.8 Gy per fraction prescribed. The patient was immobilized in the prone position. In the Field Alignment technique, the plan consists of half-beam-blocked parallel-opposed cranium fields and a single posterior cervicospine field sharing the same isocenter, which obviates divergence matching. A further single field was created to treat the remaining lumbosacral spine. To match the inferior diverging edge of the cervicospine field with the superior diverging edge of the lumbosacral field, the field alignment option was used, which automatically matches the field edge divergence per the field alignment rule in the Eclipse Treatment Planning System, with the couch set to 270°. In the Extended Source to Surface Distance technique, two parallel-opposed fields were created for the cranium, and a single posterior cervicospine field was created with a Source to Surface Distance of 120-140 cm. Dose Volume Histograms were obtained for each contoured organ and for each technique used. In all patients, the maximum dose to the Planning Target Volume was higher for the Extended Source to Surface Distance technique than for the Field Alignment technique. The dose to all surrounding structures increased with the use of a single Extended Source to Surface Distance field when compared with the Field Alignment technique. The average mean doses to the eye, brain stem, kidney, oesophagus, heart, liver, lung, and ovaries were respectively (58% & 60%), (103% & 98%), (13% & 15%), (10% & 63%), (12% & 16%), (33% & 30%), (14% & 18%), and (69% & 61%) for the Field Alignment and Extended Source to Surface Distance techniques. However, the clinical target volume at the spine junction received a less homogeneous dose with the Field Alignment technique than with the Extended Source to Surface Distance technique. We conclude that, although the single-field Extended Source to Surface Distance technique delivered a more homogeneous dose, its maximum dose is higher than with the Field Alignment technique. A further major advantage of the Field Alignment technique for craniospinal irradiation is that it does not need replanning and resetting of patients after every five fractions, and 95% of the prescribed dose was received by more than 95% of the Planning Target Volume in all plans, with acceptable hot spots.

Keywords: craniospinal irradiation, cranium, cervicospine, immobilization, lumbosacral spine

Procedia PDF Downloads 98