Search results for: Grey prediction model
16815 Diagnosis of Diabetes Using Computer Methods: Soft Computing Methods for Diabetes Detection Using Iris
Authors: Piyush Samant, Ravinder Agarwal
Abstract:
Complementary and Alternative Medicine (CAM) techniques are quite popular and effective for chronic diseases. Iridology is a more than 150-year-old CAM technique that analyzes patterns, tissue weakness, color, shape, structure, etc. for disease diagnosis. The objective of this paper is to validate the use of iridology for the diagnosis of diabetes. The suggested model was applied to a systemic disease with ocular effects. Data from 200 subjects, 100 diabetic and 100 non-diabetic, were evaluated. The complete procedure was kept very simple and free from the involvement of any iridologist. From the normalized iris, the region of interest was cropped. In total, 63 features were extracted using statistical measures, texture analysis, and the two-dimensional discrete wavelet transform. A comparison of the accuracies of six different classifiers is presented. The best result, 89.66% accuracy, was obtained with the random forest classifier.
Keywords: complementary and alternative medicine, classification, iridology, iris, feature extraction, disease prediction
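The abstract does not give the exact 63 features or classifier settings, so the sketch below is illustrative only: a few statistical, GLCM-texture, and 2D-DWT features computed from a cropped iris region and classified with scikit-learn's random forest. The iris_features helper, the toy patches, and every parameter choice are assumptions, not the authors' pipeline.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def iris_features(roi):
    """roi: 2D uint8 array cropped from the normalized iris."""
    feats = [roi.mean(), roi.std(), roi.min(), roi.max()]      # statistical
    glcm = graycomatrix(roi, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats.append(graycoprops(glcm, prop)[0, 0])            # GLCM texture
    cA, (cH, cV, cD) = pywt.dwt2(roi.astype(float), "db1")     # 2D DWT
    feats += [np.abs(b).mean() for b in (cA, cH, cV, cD)]      # sub-band energies
    return np.array(feats)

# toy data: 200 random "iris" patches, 100 per class (placeholders only)
rng = np.random.default_rng(0)
X = np.array([iris_features(rng.integers(0, 256, (32, 32), dtype=np.uint8))
              for _ in range(200)])
y = np.repeat([0, 1], 100)                                     # non-diabetic / diabetic
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```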
Procedia PDF Downloads 408

16814 Numerical Simulations of the Transition Flow of Model Propellers for Predicting Open Water Performance
Authors: Huilan Yao, Huaixin Zhang
Abstract:
Simulations of the transition flow of model propellers are important for predicting hydrodynamic performance and studying scale effects. In this paper, the transition flow of a model propeller under different loadings is simulated using a transition model provided by STAR-CCM+, and the influence of turbulence intensity (TI) on the transition, especially on the friction and pressure components of propeller performance, is studied. Beforehand, the transition model was applied to simulate the transition flow over a flat plate and an airfoil; the predicted transitions agree well with experimental results. The transition model was then applied to propeller simulations in open water, and the influence of TI was studied. Under heavy and moderate loadings, the thrust and torque predicted by the transition model (at different TI) and by two turbulence models are very close and agree well with measurements. Under light loading, however, only the transition model with low TI predicts accurate results. Notably, the friction components of propeller performance predicted by the transition model differ appreciably across TI values.
Keywords: transition flow, model propellers, hydrodynamic performance, numerical simulation
Procedia PDF Downloads 263

16813 Strategic Model of Implementing E-Learning Using Funnel Model
Authors: Mohamed Jama Madar, Oso Wilis
Abstract:
E-learning is the application of information technology in the teaching and learning process. This paper presents the Funnel model as a solution to the problems of implementing e-learning in tertiary education institutions. While existing models such as TAM, theory-based e-learning, and pedagogical models have been used over time, they have generally been found inadequate because of their tendency to treat materials development, instructional design, technology, delivery, and governance as separate, isolated entities; yet it is the matching of these components that yields a framework for strategic e-learning implementation. The Funnel model integrates all of them into one and applies to both synchronous and asynchronous e-learning implementation, where the only difference is the modality. Such a model for e-learning implementation has been lacking. The proposed Funnel model avoids the ad-hoc approach that has left other systems unused or inefficient and has compromised educational quality. It should therefore help tertiary education institutions adopt and develop an effective and efficient e-learning system that meets users' requirements.
Keywords: e-learning, pedagogical, technology, strategy
Procedia PDF Downloads 453

16812 A Comparative Study of Force Prediction Models during Static Bending Stage for 3-Roller Cone Frustum Bending
Authors: Mahesh Chudasama, Harit Raval
Abstract:
Conical sections and shells of metal plates manufactured by the 3-roller conical bending process are widely used in industry. The process is completed by first bending the metal plate statically and then roller-bending it dynamically in sequence. An analytical model of the maximum bending force during the static bending stage is required for optimum machine design. Analytical models assuming various stress conditions are considered, compared across various parameters, and reported in this paper. The study concludes that for higher bottom-roller inclinations, shear stress strongly affects the static bending force, whereas for lower bottom-roller inclinations it can be neglected.
Keywords: roller bending, static bending, stress conditions, analytical modeling
Procedia PDF Downloads 251

16811 DeepNic, A Method to Transform Each Variable into an Image for Deep Learning
Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.
Abstract:
Deep learning based on convolutional neural networks (CNN) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image where each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods which use all the variables to construct a single image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip.
Keywords: tabular data, deep learning, perfect trees, NICs
Procedia PDF Downloads 91

16810 Remote Sensing-Based Prediction of Asymptomatic Rice Blast Disease Using Hyperspectral Spectroradiometry and Spectral Sensitivity Analysis
Authors: Selvaprakash Ramalingam, Rabi N. Sahoo, Dharmendra Saraswat, A. Kumar, Rajeev Ranjan, Joydeep Mukerjee, Viswanathan Chinnasamy, K. K. Chaturvedi, Sanjeev Kumar
Abstract:
Rice is one of the most important staple food crops in the world. Among the various diseases that affect rice, rice blast is particularly significant, causing yield and economic losses. While the plant has defense mechanisms, such as chemical indicators (proteins, salicylic acid, jasmonic acid, ethylene, and azelaic acid) and resistance genes in certain varieties, susceptible varieties remain vulnerable to these fungal diseases. Early prediction of rice blast (RB) disease is crucial, but conventional techniques for early prediction are time-consuming and labor-intensive. Hyperspectral remote sensing techniques hold the potential to predict RB disease at its asymptomatic stage. In this study, we aimed to demonstrate the prediction of RB disease at the asymptomatic stage using a non-imaging ASD hyperspectral spectroradiometer under controlled laboratory conditions. We applied statistical spectral discrimination theory to identify unknown spectra of M. oryzae, the fungus responsible for rice blast disease. The infrared (IR) region was found to be significantly affected by RB disease; the changes may result in alterations in the absorption, reflection, or emission of infrared radiation by the affected plant tissues. Our research revealed that the protein spectrum in the IR region is impacted by RB disease. We identified strong correlations in the amide-I region, around 1064 nm and 1300 nm, using lambda-by-lambda plots of the derived spectra for protein detection. During the stages when the disease is developing, typically from day 3 to day 5, the plant's defense mechanisms are not as effective. This is especially true for the PB-1 rice variety, which is highly susceptible to rice blast disease; consequently, the plant's proteins are adversely affected during this critical period. The spectral contour plot reveals the highly correlated spectral regions at 1064 nm and 1300 nm associated with RB infection. Based on these spectral sensitivities, we developed new spectral disease indices for predicting different stages of disease emergence. The goal of this research is to lay the foundation for future UAV- and satellite-based studies aimed at long-term monitoring of RB disease.
Keywords: rice blast, asymptomatic stage, spectral sensitivity, IR
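As a toy illustration of turning the reported sensitive bands (around 1064 nm and 1300 nm) into a disease index, the sketch below computes a normalized-difference index and a first-derivative spectrum with NumPy. The normalized-difference form and the placeholder spectrum are assumptions; the paper's actual index definitions are not given in the abstract.

```python
import numpy as np

wavelengths = np.arange(350, 2501)              # ASD full range, 1 nm step
reflectance = np.random.rand(len(wavelengths))  # placeholder spectrum

def band(wl):
    """Reflectance at the wavelength closest to wl (nm)."""
    return reflectance[np.argmin(np.abs(wavelengths - wl))]

r1064, r1300 = band(1064), band(1300)
ndsi = (r1064 - r1300) / (r1064 + r1300)        # normalized-difference index (assumed form)
first_derivative = np.gradient(reflectance, wavelengths)  # derivative spectrum
print(f"NDSI(1064, 1300) = {ndsi:.3f}")
```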
Procedia PDF Downloads 87

16809 Bank Internal Controls and Credit Risk in Europe: A Quantitative Measurement Approach
Authors: Ellis Kofi Akwaa-Sekyi, Jordi Moreno Gené
Abstract:
Managerial actions which negatively profile banks and impair corporate reputation are addressed through effective internal control systems. Disregard for acceptable standards and procedures for granting credit has affected bank loan portfolios and could be cited in the crises of some European countries. The study intends to determine the effectiveness of internal control systems, investigate whether perceived agency problems exist on the part of board members, and establish the relationship between internal controls and credit risk among listed banks in the European Union. Drawing theoretical support from the behavioural compliance and agency theories, about seventeen internal control variables (drawn from the revised COSO framework), together with bank-specific, country, stock market, and macroeconomic variables, will be involved in the study. A purely quantitative approach will be employed to model internal control variables covering the control environment, risk management, control activities, information and communication, and monitoring. Panel data from 2005-2014 on listed banks from 28 European Union countries will be used. Hypotheses will be tested, and Generalized Least Squares (GLS) regression will be run to establish the relationship between dependent and independent variables. The Hausman test will be used to choose between random- and fixed-effects models. It is expected that listed banks will have sound internal control systems, but their effectiveness cannot be confirmed. A perceived agency problem on the part of the board of directors is expected to be confirmed. The study expects a significant effect of internal controls on credit risk, uncovering another perspective of internal controls as not only an operational risk issue but a credit risk issue too. Banks will be cautious that observing effective internal control systems is an ethical and socially responsible act, since the collapse of financial institutions as a result of excessive default is a major source of contagion. This study deviates from the usual primary-data approach to measuring internal control variables and instead models internal control variables quantitatively for the panel data. Thus, a grey area in approaching the revised COSO framework for internal controls is opened for further research. Most bank failures and crises could be averted if effective internal control systems were religiously adhered to.
Keywords: agency theory, credit risk, internal controls, revised COSO framework
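A hedged sketch of the planned estimation strategy follows: fixed- and random-effects panel regressions on simulated bank-year data using the linearmodels package, with a manually computed Hausman statistic to choose between them. The variable names (credit_risk, control_env, monitoring) are placeholders, not the study's actual COSO items, and the data are random.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects
from scipy import stats

rng = np.random.default_rng(1)
banks, years = range(40), range(2005, 2015)
idx = pd.MultiIndex.from_product([banks, years], names=["bank", "year"])
df = pd.DataFrame({"credit_risk": rng.normal(size=len(idx)),
                   "control_env": rng.normal(size=len(idx)),
                   "monitoring": rng.normal(size=len(idx))}, index=idx)

fe = PanelOLS.from_formula(
    "credit_risk ~ control_env + monitoring + EntityEffects", df).fit()
re = RandomEffects.from_formula(
    "credit_risk ~ 1 + control_env + monitoring", df).fit()

# Hausman statistic: (b_FE - b_RE)' [V_FE - V_RE]^(-1) (b_FE - b_RE)
common = ["control_env", "monitoring"]
b = fe.params[common] - re.params[common]
v = fe.cov.loc[common, common] - re.cov.loc[common, common]
h = float(b @ np.linalg.inv(v) @ b)
print("Hausman chi2 =", h, "p =", 1 - stats.chi2.cdf(h, len(common)))
```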
Procedia PDF Downloads 321

16808 The Grand Unified Theory of Everything as a Generalization to the Standard Model Called as the General Standard Model
Authors: Amir Deljoo
Abstract:
The endeavor to comprehend existence has been central to human thought across different disciplines, and now chiefly in physics in the form of a theory of everything. Here, after a brief review of the basic frameworks of thought and a history of thought from ancient times to the present, a logical methodology is presented based on a core axiom, after which a function, a proto-field, and then coordinates are explained. Afterwards, a generalization of the Standard Model is proposed as the General Standard Model, which is believed to be the basis of the unified theory of everything.
Keywords: general relativity, grand unified theory, quantum mechanics, standard model, theory of everything
Procedia PDF Downloads 101

16807 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the k-mer size increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay amongst accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
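A minimal sketch of the k-mer classification idea: character 10-grams (the paper's best-performing k-mer size) are counted with scikit-learn's CountVectorizer and fed to a random forest. The toy sequences and labels are placeholders standing in for the 104 MTB genomes; the study's actual resampling and model choices are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

genomes = ["ATGCGTACGTTAGCATCGATCGATCGGCTA",       # toy "isolate" sequences
           "ATGCGTACGTTAGCATGGATCGATCGGCTA",
           "TTGCAATCGGATCGATTTAGCCGGATACGA",
           "TTGCAATCGGATCGATTTAGGCGGATACGA"] * 26  # 104 sequences, as in the study
labels = [0, 0, 1, 1] * 26                          # toy phenotype labels

# k-mer counting as character 10-grams over the raw sequence strings
vec = CountVectorizer(analyzer="char", ngram_range=(10, 10), lowercase=False)
X = vec.fit_transform(genomes)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=4).mean())
```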
Procedia PDF Downloads 169

16806 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the k-mer size increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay amongst accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 160

16805 Definition of a Computing Independent Model and Rules for Transformation Focused on the Model-View-Controller Architecture
Authors: Vanessa Matias Leite, Jandira Guenka Palma, Flávio Henrique de Oliveira
Abstract:
This paper presents a model-driven development approach to software development in the Model-View-Controller (MVC) architectural standard. The approach exposes a process for extracting information from models which, through the rules and syntax defined in this work, assists in the design of the initial model and its future conversions. The paper presents a syntax based on natural language, following the rules of classic Portuguese grammar, together with conversion rules that generate models conforming to the norms of the Object Management Group (OMG) and the Meta-Object Facility (MOF).
Keywords: BNF syntax, model-driven architecture, model-view-controller, transformation, UML
Procedia PDF Downloads 395

16804 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova
Abstract:
The contamination of food by microbial agents is a common problem in industry, especially in the processing of animal-source products. Incorrect handling of machinery or raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or at least slow down the growth of pathogens, especially spoilage, infectious, or toxigenic bacteria. These methods usually combine low temperatures and short processing times (abiotic agents) with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that achieves bacterial control without damaging the final product. The objective of the present study is therefore to design a secondary mathematical model that allows the prediction of the impact of both biotic and abiotic factors associated with animal-source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are bacterial growth; the release, production, and artificial incorporation of bacteriocins; and changes in the pH of the medium, with all three constantly influenced by the temperature of the medium. The model is then adapted to an idealized situation of cross-contamination in animal-source food processing, the study agents being the animal product and the contact surface. Finally, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main results of the analysis and simulations show that, although bacterial growth can be stopped at low temperatures, even lower temperatures are needed to eradicate it; this can be not only expensive but also counterproductive for the quality of the raw materials, while higher temperatures accelerate bacterial growth. The use of bacteriocins, in contrast, is an effective alternative in the short and medium term. Moreover, a low pH is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of an industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed logistic formulation is broadly applicable and can be replicated for non-meat food products, other pathogens, or even cross-contact contamination with food allergens.
Keywords: bacteriocins, cross-contamination, mathematical model, temperature
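The abstract's three-dimensional system (bacterial growth; bacteriocin release, production, and dosing; pH), driven by temperature, can be sketched as an ODE model. Every functional form and parameter value below is an illustrative assumption, not the authors' equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 10.0                                  # processing temperature, deg C
mu = 0.05 * max(T - 2.0, 0.0)             # temperature-dependent growth rate, 1/h (assumed)

def rhs(t, y):
    N, B, pH = y                          # bacteria, bacteriocin, medium pH
    dN = mu * N * (1.0 - N / 1e6) - 2e-3 * B * N    # logistic growth - bacteriocin kill
    dB = 1e-6 * N + 0.05 - 0.01 * B                 # release + artificial dosing - decay
    dpH = -1e-7 * N * (pH - 4.0)                    # acidification by lactic acid bacteria
    return [dN, dB, dpH]

sol = solve_ivp(rhs, (0.0, 48.0), [1e2, 0.0, 6.8], method="LSODA")
N, B, pH = sol.y[:, -1]
print(f"after 48 h: N = {N:.3g}, B = {B:.3g}, pH = {pH:.2f}")
```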
Procedia PDF Downloads 145

16803 Mathematical Modeling and Optimization of Burnishing Parameters for 15NiCr6 Steel
Authors: Tarek Litim, Ouahiba Taamallah
Abstract:
The present paper investigates the effect of burnishing on the surface integrity of a component made of 15NiCr6 steel. It presents a statistical study based on regression; Taguchi's design allowed the development of mathematical models to predict the output responses as a function of the technological parameters studied. The response surface methodology (RSM) showed the simultaneous influence of the burnishing parameters and identified the optimal processing parameters. ANOVA validated the prediction models with determination coefficients of R = 90.60% and 92.41% for roughness and hardness, respectively. Furthermore, a multi-objective optimization identified a regime characterized by P = 10 kgf, i = 3 passes, and f = 0.074 mm/rev, which favours minimum roughness and maximum hardness. The result was validated by desirability values of D = 0.99 and 0.95 for roughness and hardness, respectively.
Keywords: 15NiCr6 steel, burnishing, surface integrity, Taguchi, RSM, ANOVA
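The desirability step of the multi-objective optimization can be sketched with Derringer-type desirability functions: smaller-is-better for roughness, larger-is-better for hardness, combined geometrically. The response values and bounds below are assumed for illustration; only the optimum regime (P = 10 kgf, i = 3 passes, f = 0.074 mm/rev) comes from the abstract.

```python
import numpy as np

def d_smaller(y, lo, hi):            # roughness: 1 at lo, 0 at hi
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

def d_larger(y, lo, hi):             # hardness: 0 at lo, 1 at hi
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

# predicted responses at the optimum regime (illustrative values and bounds)
Ra, HV = 0.22, 410.0
d1 = d_smaller(Ra, lo=0.2, hi=1.0)
d2 = d_larger(HV, lo=250.0, hi=420.0)
D = np.sqrt(d1 * d2)                 # overall (geometric-mean) desirability
print(f"d_roughness = {d1:.2f}, d_hardness = {d2:.2f}, D = {D:.2f}")
```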
Procedia PDF Downloads 194

16802 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent
Authors: Faidon Kyriakou, William Dempster, David Nash
Abstract:
Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks, or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that despite its column stiffness is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell, and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The numerical model was validated by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. The CAD model was then 3D-printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images in the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure combining thin scaffolding and fabric has been demonstrated to be feasible, and the capability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
Keywords: AAA, efficiency, finite element analysis, stent deployment
Procedia PDF Downloads 193

16801 1D Velocity Model for the Gobi-Altai Region from Local Earthquakes
Authors: Dolgormaa Munkhbaatar, Munkhsaikhan Adiya, Tseedulam Khuut
Abstract:
We applied an inversion method to determine a 1D velocity model with station corrections for the Gobi-Altai area in the southern part of Mongolia, using earthquake data collected at the National Data Center during the last 10 years. In this study, the new 1D model was derived by minimizing the average RMS residual of a set of well-located earthquakes recorded at permanent (2006-2016) and temporary (2014-2016) seismic stations, computing solutions for the coupled hypocenter and 1D velocity model problem. We selected 4800 events with RMS less than 0.5 seconds and a maximum azimuthal gap of 170 degrees and determined the velocity structure. We then relocated all possible events in the Gobi-Altai area using the new 1D velocity model and achieved well-constrained hypocentral determinations for events within this area. We conclude that the estimated velocities of the new 1D model are somewhat lower than those of the previous model, that the relocations represent a significant improvement, and that the new model provides a better basis for future determination of earthquake epicenters in the area.
Keywords: 1D velocity model, earthquake, relocation, Velest
Procedia PDF Downloads 168

16800 An Approach for Thermal Resistance Prediction of Plain Socks in Wet State
Authors: Tariq Mansoor, Lubos Hes, Vladimir Bajzik
Abstract:
Sock comfort has great significance in our daily life, all the more during work of low or high activity, which causes the body to sweat at different rates. In this study, plain socks of different fibre compositions were wetted to saturation. Then, after successive intervals of conditioning, the socks were characterized by their thermal resistance in the dry and wet states. Theoretical thermal resistance is predicted by using combined filling coefficients and the thermal conductivity of the wet polymer instead of the dry polymer (fibre) in different models. With this modification, the mathematical models can predict thermal resistance at different moisture levels. Furthermore, the thermal resistance predicted by the different models correlates reasonably well (0.84-0.98) with experimental results in both the dry (laboratory-moisture) and wet states. This work is supported by the Technical University of Liberec under SGC-2019, project number 21314.
Keywords: thermal resistance, mathematical model, plain socks, moisture loss rate
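A hedged sketch of the paper's modification: predict thermal resistance as R = h/k_eff, where the effective conductivity mixes fibre, water, and air through filling coefficients, and the dry-fibre conductivity is replaced by a wet-polymer value once moisture is present. The parallel mixing rule and all property values below are illustrative assumptions, not the paper's calibrated models.

```python
thickness = 1.2e-3                     # sock thickness h, m (assumed)
phi_fibre = 0.25                       # fibre filling coefficient (assumed)
k_fibre_dry, k_fibre_wet = 0.20, 0.35  # dry vs water-swollen polymer, W/(m.K) (assumed)
k_water, k_air = 0.60, 0.026           # W/(m.K)

for u in (0.0, 0.15, 0.30):            # water volume fraction as moisture rises
    phi_air = 1.0 - phi_fibre - u
    k_f = k_fibre_dry if u == 0.0 else k_fibre_wet
    k_eff = phi_fibre * k_f + u * k_water + phi_air * k_air  # parallel mixing rule
    print(f"moisture {u:.2f}: R = {thickness / k_eff * 1000:.2f} x 1e-3 m^2.K/W")
```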
Procedia PDF Downloads 199

16799 Experimental and Theoretical Methods to Increase Core Damping for Sandwich Cantilever Beam
Authors: Iyd Eqqab Maree, Moouyad Ibrahim Abbood
Abstract:
The purpose of this study is to predict the damping of a steel cantilever beam treated with passive viscoelastic constrained-layer damping, using two methods. The first is a MATLAB program based on the Ross, Kerwin, and Ungar (RKU) model for passive viscoelastic damping. The second is a laboratory experiment (frequency-domain method) using the half-power bandwidth method, which can determine the system loss factors of the damped steel cantilever beam. The RKU method has been applied to a cantilever beam because the beam is a major structural element, and this prediction may further be utilized for different kinds of structural applications according to design requirements in many industries. In this method of damping, a simple cantilever beam is treated by building a sandwich structure, usually with a viscoelastic material as the core, to ensure the damping effect. The use of viscoelastic layers constrained between elastic layers is known to be effective for damping flexural vibrations of structures over a wide range of frequencies. The energy dissipated in these arrangements is due to shear deformation in the viscoelastic layers, which occurs due to flexural vibration of the structure. The theory of dynamic stability of elastic systems deals with the study of vibrations induced by pulsating loads that are parametric with respect to certain forms of deformation. The experimental results agree very well with the theoretical findings. The main aim of this work is to find the transition region for the damped steel cantilever beam (4 mm and 8 mm thickness) from laboratory experiments and theoretical prediction (MATLAB R2011a). It was shown experimentally and theoretically that the transition region for both specimens occurs at a modal frequency between mode 1 and mode 2, which gives the best damping, the maximum loss factor, and the maximum damping ratio; this type of viscoelastic core material (3M 468) is therefore very appropriate for use in the automotive industry and in any mechanical application whose modal frequencies fall between mode 1 and mode 2.
Keywords: 3M-468 material core, loss factor, frequency-domain method, MATLAB
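The laboratory half-power (-3 dB) bandwidth method can be sketched in a few lines: from a frequency response peak at fn, the loss factor is eta = (f2 - f1)/fn and the damping ratio is eta/2. The FRF below is synthetic, with an assumed modal frequency and damping, standing in for measured data.

```python
import numpy as np

f = np.linspace(10, 200, 4000)                 # frequency axis, Hz
fn, zeta = 80.0, 0.04                          # assumed modal frequency and damping ratio
H = 1.0 / np.sqrt((1 - (f / fn) ** 2) ** 2 + (2 * zeta * f / fn) ** 2)  # SDOF FRF magnitude

peak = H.argmax()
half_power = H[peak] / np.sqrt(2.0)            # -3 dB level
above = np.where(H >= half_power)[0]
f1, f2 = f[above[0]], f[above[-1]]             # half-power frequencies

eta = (f2 - f1) / f[peak]                      # loss factor
print(f"fn = {f[peak]:.1f} Hz, eta = {eta:.3f}, damping ratio = {eta / 2:.3f}")
```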
Procedia PDF Downloads 272

16798 Structural Strength Evaluation and Wear Prediction of Double Helix Steel Wire Ropes for Heavy Machinery
Authors: Krunal Thakar
Abstract:
Wire ropes combine high tensile strength and flexibility compared to other general steel products. They are used in various application areas such as cranes, mining, elevators, bridges, cable cars, etc. The earliest reported use of wire ropes was for mining hoists in the 1830s. Since then, there has been substantial advancement in the design of wire ropes for various application areas. Under operational conditions, wire ropes are subjected to varying tensile and bending loads, resulting in material wear and eventual structural failure due to fretting fatigue. Conventional inspection methods for detecting wire failure are limited to the outer wires of the rope, and to date there is no effective mathematical model to examine inter-wire contact forces and wear characteristics. The scope of this paper is to present a computational simulation technique to evaluate inter-wire contact forces and wear, which are in many cases responsible for rope failure. Two different types of ropes, IWRC-6xFi(29) and U3xSeS(48), were taken for structural strength evaluation and wear prediction. Both ropes have a double-helix twisted wire profile as per JIS standards and are mainly used in cranes. CAD models of both ropes were developed in general-purpose design software using an in-house-developed formulation to generate the double-helix profile. Numerical simulation was done under two different load cases: (a) axial tension and (b) bending over sheave. Different parameters such as stresses, contact forces, wear depth, and load-elongation were investigated and compared between the two ropes. The numerical simulation method facilitates detailed investigation of inter-wire contact and wear characteristics. In addition, various selection parameters such as sheave diameter, rope diameter, helix angle, swaging, and maximum load-carrying capacity can be quickly analyzed.
Keywords: steel wire ropes, numerical simulation, material wear, structural strength, axial tension, bending over sheave
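The double-helix generation the paper implements in CAD can be sketched numerically: a strand centerline that is a helix about the rope axis, and a wire centerline wound about that strand in its numerically computed Frenet frame. The radii, pitch, and lay ratio below are illustrative, not the JIS rope dimensions.

```python
import numpy as np

t = np.linspace(0.0, 4.0 * np.pi, 2000)
R_s, p_s = 6.0, 40.0                      # strand helix radius and pitch, mm (assumed)
r_w, m = 1.5, 7.0                         # wire helix radius, wire turns per strand turn (assumed)

# strand centerline: single helix about the rope (z) axis
strand = np.stack([R_s * np.cos(t), R_s * np.sin(t), p_s * t / (2 * np.pi)], axis=1)

# numerical Frenet frame along the strand centerline
T = np.gradient(strand, t, axis=0)
T /= np.linalg.norm(T, axis=1, keepdims=True)
N = np.gradient(T, t, axis=0)
N /= np.linalg.norm(N, axis=1, keepdims=True)
B = np.cross(T, N)

# wire centerline: helix about the strand centerline (the double helix)
phi = m * t
wire = strand + r_w * (np.cos(phi)[:, None] * N + np.sin(phi)[:, None] * B)
print(wire[:3])                           # first few (x, y, z) points, mm
```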
Procedia PDF Downloads 152

16797 Blockchain Technology in Supply Chain Management: A Systematic Review and Meta-Analysis
Authors: Mohammad Yousuf Khan, Bhavya Alankar
Abstract:
Blockchain is a promising technology with features such as immutability and a decentralized database. It has applications in various fields such as pharmaceuticals, finance, and the food industry. At its core lies traceability, the most desired property in supply chains. Supply chains, however, have repeatedly been hit by scandals and controversies. In this review paper, we explore the advancements and research gaps of blockchain technology (BT) in supply chain management (SCM). We used the PRISMA framework for the systematic literature review (SLR) and included a small amount of grey literature to reduce publication bias. We found that supply chain traceability and transparency is the most researched objective in SCM, while there was hardly any research on supply chain resilience. Further, we found that 40% of the papers were application-based. Most articles focus on the advantages of BT rather than analyzing it critically. This study will help identify gaps and suitable actions to be followed for an efficient implementation of BT in SCM.
Keywords: blockchain technology, supply chain management, supply chain transparency, supply chain resilience
Procedia PDF Downloads 162

16796 An Elbow Biomechanical Model and Its Coefficients Adjustment
Authors: Jie Bai, Yongsheng Gao, Shengxin Wang, Jie Zhao
Abstract:
The establishment of an elbow biomechanical model can provide theoretical guidance for rehabilitation therapy of the human upper limb. A biomechanical model of the elbow joint can be built by connecting a muscle force model with elbow dynamics. However, there are many undetermined coefficients in such a model, such as the optimal joint angle and optimal muscle force, which are usually taken from other workers' experimental parameters. Because of individual differences, this introduces a certain deviation in the final result. To address this, the RMS value of the deviation between the actual and calculated angles is considered, and the set of coefficients that minimizes this RMS value is chosen as the optimal parameters. The direct search method and the conjugate search method are used to obtain the optimal parameters, making the model more accurate and more adaptable.
Keywords: elbow biomechanical model, RMS, direct search, conjugacy search
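The coefficient-adjustment loop can be sketched generically: pick the coefficients that minimize the RMS deviation between measured and computed joint angles, here with a Nelder-Mead direct search from SciPy. The model_angle function is a toy stand-in for the real muscle-force/elbow-dynamics model, and all values are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 2.0, 200)
measured = 0.9 * np.sin(2.0 * t) + 0.05 * np.random.default_rng(2).normal(size=t.size)

def model_angle(coeffs, t):
    theta_opt, f_opt = coeffs             # placeholder "optimal angle/force" coefficients
    return theta_opt * np.sin(f_opt * t)  # stand-in for the biomechanical model

def rms(coeffs):
    return np.sqrt(np.mean((measured - model_angle(coeffs, t)) ** 2))

# direct (Nelder-Mead) search, one of the paper's two search strategies
res = minimize(rms, x0=[1.0, 1.5], method="Nelder-Mead")
print("optimal coefficients:", res.x, "RMS:", res.fun)
```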
Procedia PDF Downloads 550

16795 Forecasting for Financial Stock Returns Using a Quantile Function Model
Authors: Yuzhi Cai
Abstract:
In this paper, we introduce a newly developed quantile function model that can be used for estimating conditional distributions of financial returns and for obtaining multi-step-ahead out-of-sample predictive distributions of financial returns. Since we forecast the whole conditional distribution, any predictive quantity of interest about future financial returns can be obtained simply as a by-product of the method. We also show an application of the model to the daily closing prices of the Dow Jones Industrial Average (DJIA) series over the period from 2 January 2004 to 8 October 2010. We obtained predictive distributions up to 15 days ahead for the DJIA returns, which were then compared with the actually observed returns and with those predicted by an AR-GARCH model. The results show that the new model can capture the main features of financial returns and provides a better-fitted model together with improved mean forecasts compared with conventional methods. We hope this work shows that the new model has the potential to be very useful in practice.
Keywords: DJIA, financial returns, predictive distribution, quantile function model
Procedia PDF Downloads 367

16794 Surveying Energy Dissipation in Stepped Spillway Using Finite Element Modeling
Authors: Mehdi Fuladipanah
Abstract:
A stepped spillway includes several steps from the crest to the toe. The steps dissipate energy by distributing it along the chute and also reduce the outlet velocity. The aim of this study was to simulate the stepped spillway combined with a stilling basin-step using the Fluent model, with the turbulent free-surface flow modeled by the RNG k-ε model. The free surface of the flow was tracked with the VOF model. The flow velocity and depth, including the tailwater depth, were obtained from the numerical model, and the dissipated energy was then calculated along the spillway. The results indicated that the stilling basin-step complex can increase energy dissipation in the stepped spillway. The numerical model is also suggested as an effective method to predict the recirculating and complicated flows in stepped spillways.
Keywords: stepped spillway, fluent model, VOF model, K-ε model, energy distribution
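The energy-dissipation calculation the study performs from simulated velocity and depth reduces to comparing specific energies, E = y + V^2/(2g), upstream and at the toe. The numbers below are purely illustrative, not the study's simulation results.

```python
g = 9.81                 # gravitational acceleration, m/s^2
y0, V0 = 3.0, 1.2        # approach depth (m) and velocity (m/s), assumed
y1, V1 = 0.35, 6.0       # depth and velocity at the spillway toe, assumed

E0 = y0 + V0 ** 2 / (2 * g)   # upstream specific energy
E1 = y1 + V1 ** 2 / (2 * g)   # specific energy at the toe
print(f"relative energy dissipation = {(E0 - E1) / E0:.1%}")
```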
Procedia PDF Downloads 372

16793 Monitoring Three-Dimensional Models of Tree and Forest by Using Digital Close-Range Photogrammetry
Authors: S. Y. Cicekli
Abstract:
In this study, a three-dimensional model of a tree was created using terrestrial close-range photogrammetry. For this purpose, close-range photos were taken. PhotoModeler Pro 5 software was used for camera calibration and to create the three-dimensional models of the trees. In the first test, a three-dimensional model of a single tree was created; in the second test, a three-dimensional model of three trees was created. The aim of this study is to create three-dimensional models of trees and to demonstrate the use of close-range photogrammetry in forestry. At the end of the study, three-dimensional models of one tree and of three trees had been created. The study showed the usability of close-range photogrammetry for monitoring three-dimensional models of trees and forests.
Keywords: close-range photogrammetry, forest, tree, three-dimensional model
Procedia PDF Downloads 389

16792 A Mathematical-Based Formulation of EEG Fluctuations
Authors: Razi Khalafi
Abstract:
The brain is the information processing center of the human body. Stimuli, in the form of information, are transferred to the brain, which then decides how to respond to them. In this research, we propose a new partial differential equation which analyses EEG signals and relates incoming stimuli to the brain's response. In order to test the proposed model, a set of external stimuli was applied to the model, and the model's outputs were checked against real EEG data. The results show that the model reproduces the EEG signal well. The proposed model is useful not only for modeling the EEG signal under external stimuli but also for modeling the brain's response to internal stimuli.
Keywords: brain, stimuli, partial differential equation, response, EEG signal
Procedia PDF Downloads 434

16791 Performance and Availability Analysis of 2N Redundancy Models
Authors: Yutae Lee
Abstract:
In this paper, we consider the performance and availability of a redundancy model. Redundancy is a form of resilience that ensures service availability in the event of component failure; here we consider a 2N redundancy model, in which there are at most one active service unit and at most one standby service unit. The active unit provides the service while the standby is prepared to take over the active role when the active unit fails. We design our analysis model using Stochastic Reward Nets and then evaluate the performance and availability of the 2N redundancy model using the Stochastic Petri Net Package (SPNP).
Keywords: availability, performance, stochastic reward net, 2N redundancy
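Alongside the SPNP evaluation in the paper, the steady-state availability of a 2N pair can be sketched analytically as a three-state continuous-time Markov chain (both units up, one up, none up), with an assumed per-unit failure rate lambda and repair rate mu; these rates and the model reduction are illustrative assumptions.

```python
import numpy as np

lam, mu = 1.0 / 1000.0, 1.0 / 8.0       # per-hour failure and repair rates (assumed)

# generator matrix Q over states [2 up, 1 up, 0 up]
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [mu, -(mu + lam), lam],
              [0.0, mu, -mu]])

# steady state: pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("availability (service up):", pi[0] + pi[1])  # service survives a single failure
```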
Procedia PDF Downloads 421

16790 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. Particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all external forces. Previous models distributed particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2, 4 and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparison, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
Procedia PDF Downloads 207

16789 BART Matching Method: Using Bayesian Additive Regression Tree for Data Matching
Authors: Gianna Zou
Abstract:
Propensity score matching (PSM), introduced by Paul R. Rosenbaum and Donald Rubin in 1983, is a popular statistical matching technique which tries to estimate treatment effects by taking into account covariates that could impact the efficacy of study medication in clinical trials. PSM can be used to reduce the bias due to confounding variables. However, PSM assumes that the response values are normally distributed, and in some cases this assumption may not hold. In this paper, a machine learning method, the Bayesian Additive Regression Tree (BART), is used as a more robust method of matching. BART can work well when models are misspecified, since it can be used to model heterogeneous treatment effects; moreover, it can handle non-linear main effects and multiway interactions. In this research, a BART Matching Method (BMM) is proposed to provide a more reliable matching method than PSM. Comparing the analysis results from PSM and BMM shows that BMM performs well and has better predictive capability when the response values are not normally distributed.
Keywords: BART, Bayesian, matching, regression
Procedia PDF Downloads 149

16788 A Mathematical Equation to Calculate Stock Price of Different Growth Model
Authors: Weiping Liu
Abstract:
This paper presents an equation to calculate stock prices under different growth models. The equation is mathematically derived using the discounted cash flow method. It has the advantages of being very easy to use and very accurate, and it can still be used even when the first stage is lengthy. The equation is more general because it can be used for all three popular stock price models, and it can be programmed into financial calculators or electronic spreadsheets. In addition, it can be extended to a multistage model, making it more versatile and efficient than traditional methods.
Keywords: stock price, multistage model, different growth model, discounted cash flow method
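As a concrete instance of the discounted-cash-flow derivation, the sketch below prices a two-stage dividend growth model: growth g1 for n years, then stable growth g2 valued with the Gordon formula. The inputs are illustrative; the paper's own single closed-form equation is not reproduced here.

```python
D0, r = 2.00, 0.10            # last dividend and required return (assumed)
g1, n, g2 = 0.08, 5, 0.03     # first-stage growth, years, terminal growth (assumed)

# stage 1: present value of dividends during the high-growth years
pv_stage1 = sum(D0 * (1 + g1) ** t / (1 + r) ** t for t in range(1, n + 1))

# stage 2: Gordon terminal value at year n, discounted back to today
D_next = D0 * (1 + g1) ** n * (1 + g2)
pv_stage2 = D_next / (r - g2) / (1 + r) ** n

print(f"price = {pv_stage1 + pv_stage2:.2f}")
```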
Procedia PDF Downloads 407

16787 Noise Removal Techniques in Medical Images
Authors: Amhimmid Mohammed Saffour, Abdelkader Salama
Abstract:
Filtering is part of image enhancement; it is used to enhance certain details, such as edges, that are relevant to the application, and it can even be used to eliminate unwanted noise. Medical images typically contain salt-and-pepper noise and Poisson noise, which appear as minute grey-scale variations within the image. In this paper, different filtering techniques, namely median, Wiener, rank-order-3, rank-order-5, and average filters, were applied to CT medical images (brain and chest). All of these filters were used to remove salt-and-pepper noise from the images; this type of noise consists of random pixels being set to black or white. Peak signal-to-noise ratio (PSNR), mean square error (MSE), and histograms were used to evaluate the quality of the filtered images. The results show that these filters are useful and prove helpful for general medical practitioners in analyzing patients' symptoms without difficulty.
Keywords: CT imaging, median filter, adaptive filter and average filter, MATLAB
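A minimal sketch of the evaluation loop in the abstract: corrupt an image with salt-and-pepper noise, denoise with a median filter, and score with MSE and PSNR. The synthetic image stands in for a real CT slice, and the noise density is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
img = rng.integers(60, 200, size=(128, 128)).astype(np.uint8)  # stand-in "CT" image

noisy = img.copy()
mask = rng.random(img.shape)
noisy[mask < 0.02] = 0          # pepper: random pixels set to black
noisy[mask > 0.98] = 255        # salt: random pixels set to white

denoised = median_filter(noisy, size=3)

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b):
    return 10.0 * np.log10(255.0 ** 2 / mse(a, b))

print(f"noisy PSNR = {psnr(img, noisy):.1f} dB, "
      f"median-filtered PSNR = {psnr(img, denoised):.1f} dB")
```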
Procedia PDF Downloads 313

16786 A Comparison between the Results of Hormuz Strait Wave Simulations Using WAVEWATCH-III and MIKE21-SW and Satellite Altimetry Observations
Authors: Fatemeh Sadat Sharifi
Abstract:
In the present study, the capabilities of WAVEWATCH-III and MIKE21-SW for predicting the characteristics of wind waves in the Strait of Hormuz are evaluated. Wind data were derived from the Global Forecast System (GFS), and the bathymetry grid, with 2 arc-minute resolution, was extracted from ETOPO1. The WAVEWATCH-III results show more accurate prediction of wave features compared to MIKE21-SW in deep water, while in shallow areas MIKE21-SW agrees more closely with altimetry measurements. This may be due to the unstructured grid used in MIKE21-SW, which better represents the coastal area. The findings on the direction of wind-generated waves in the modeling area indicate that in some regions, despite an increase in wind speed, the significant wave height stays nearly unchanged; this is mainly due to swift changes in wind direction over the Strait of Hormuz. After discussing wind-induced waves in the region, the impact of surface-layer instability on wave growth was considered, using the monthly mean air temperature. The results for cold months, when the surface layer is unstable, indicate an appreciable increase in the accuracy of the significant wave height prediction.
Keywords: numerical modeling, WAVEWATCH-III, Strait of Hormuz, MIKE21-SW
Procedia PDF Downloads 208