Search results for: generalised linear model
Paper Count: 18811

18031 Application of Neural Network on the Loading of Copper onto Clinoptilolite

Authors: John Kabuba

Abstract:

The study investigated the implementation of Neural Network (NN) techniques for predicting the loading of Cu ions onto clinoptilolite. An experimental design using analysis of variance (ANOVA) was chosen for testing the adequacy of the Neural Network and for optimizing the effective input parameters (pH, temperature, and initial concentration). A feed-forward, multi-layer perceptron (MLP) NN successfully tracked the non-linear behavior of the adsorption process versus the input parameters, with a mean squared error (MSE), correlation coefficient (R), and mean squared relative error (MSRE) of 0.102, 0.998, and 0.004, respectively. The results showed that NN modeling techniques can effectively predict and simulate highly complex, non-linear processes such as ion exchange.
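
A minimal sketch of the kind of feed-forward MLP regression described above, using scikit-learn. The input ranges, the response function, and the network architecture are synthetic placeholders, not the study's measurements or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: pH, temperature (deg C), initial Cu concentration (mg/L)
X = np.column_stack([
    rng.uniform(2.0, 8.0, 200),     # pH
    rng.uniform(20.0, 70.0, 200),   # temperature
    rng.uniform(10.0, 200.0, 200),  # initial concentration
])
# Hypothetical nonlinear loading response (placeholder for experimental data)
y = (0.4 * X[:, 0] + 0.02 * X[:, 1] + 0.01 * X[:, 2]
     - 0.002 * X[:, 0] * X[:, 2] + rng.normal(0, 0.2, 200))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feed-forward multi-layer perceptron regressor (architecture is illustrative)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred))
print("R^2:", r2_score(y_te, pred))
```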

Keywords: clinoptilolite, loading, modeling, neural network

Procedia PDF Downloads 415
18030 Developed CNN Model with Various Input Scale Data Evaluation for Bearing Faults Prognostics

Authors: Anas H. Aljemely, Jianping Xuan

Abstract:

Rolling bearing fault diagnosis is a pivotal issue for the rotating machinery of modern manufacturing. In this research, an improved deep learning method for bearing fault diagnosis based on raw vibration signals is proposed. Multiple input scales of the raw vibration signals are selected for evaluating the condition monitoring system, and the deep learning process has shown its effectiveness in fault diagnosis. The proposed method employs an exponential linear unit (ELU) layer in a convolutional neural network (CNN): the ELU acts as the identity on positive inputs and applies an exponential nonlinearity to negative inputs, while the convolutional operations extract valuable features. The identification results show that the improved method achieves the highest accuracy with a 100-dimensional scale and increases the training and testing speed.
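
An illustrative sketch of a small 1D CNN with ELU activations operating on raw vibration segments, written in PyTorch. The layer sizes, segment length, and number of fault classes are assumptions, not the architecture reported above.

```python
import torch
import torch.nn as nn

class BearingCNN(nn.Module):
    """Small 1D CNN with ELU activations for raw vibration segments (illustrative sizes)."""
    def __init__(self, n_classes=4, segment_len=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28),
            nn.ELU(),                      # identity on positive inputs, exponential on negative
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                  # x: (batch, 1, segment_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))

# Smoke test with random "vibration" segments
model = BearingCNN()
x = torch.randn(8, 1, 1024)
logits = model(x)
print(logits.shape)  # torch.Size([8, 4])
```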

Keywords: bearing fault prognostics, developed CNN model, multiple-scale evaluation, deep learning features

Procedia PDF Downloads 210
18029 Commutativity of Fractional Order Linear Time-Varying Systems

Authors: Salisu Ibrahim

Abstract:

The paper studies the commutativity associated with fractional order linear time-varying systems (LTVSs), which is an important area of study in control systems engineering. In this paper, we explore the properties of these systems and their ability to commute, and we propose a necessary and sufficient condition for the commutativity of fractional order LTVSs. Through simulation and mathematical analysis, we demonstrate that these systems exhibit commutativity under certain conditions. Our findings have implications for the design and control of fractional order systems in practical applications, science, and engineering. An example is given to show the effectiveness of the proposed method; the computations were performed in Mathematica and validated in MATLAB (Simulink).

Keywords: fractional differential equation, physical systems, equivalent circuit, analog control

Procedia PDF Downloads 114
18028 A One-Dimensional Model for Contraction in Burn Wounds: A Sensitivity Analysis and a Feasibility Study

Authors: Ginger Egberts, Fred Vermolen, Paul van Zuijlen

Abstract:

One of the common complications of post-burn scars is contraction. Depending on the extent of the contraction and the wound dimensions, the contracture can cause a limited range of motion of joints. A one-dimensional morphoelastic continuum hypothesis-based model describing post-burn scar contraction is considered. The advantage of the one-dimensional model is its speed: it quickly yields new results and, therefore, insight. This model describes the movement of the skin and the development of the strain present. Besides these mechanical components, the model also contains chemical components that play a major role in the wound healing process. These components are fibroblasts, myofibroblasts, the so-called signaling molecules, and collagen. The dermal layer is modeled as an isotropic morphoelastic solid, and pulling forces are generated by myofibroblasts. The solution to the model equations is approximated by the finite-element method using linear basis functions. One of the major challenges in biomechanical modeling is the estimation of parameter values. Therefore, this study provides a comprehensive description of skin mechanical parameter values and a sensitivity analysis. Further, since skin mechanical properties change with aging, it is important that the model be feasible for predicting the development of contraction in burn patients of different ages, and hence this study also provides a feasibility study. The variability in the solutions is caused by varying the values of some parameters simultaneously over the domain of computation, for which the results of the sensitivity analysis are used. The sensitivity analysis shows that the most sensitive parameters are the equilibrium concentration of collagen, the apoptosis rate of fibroblasts and myofibroblasts, and the secretion rate of signaling molecules. This suggests that the variability in the evolution of contraction in burns in patients of different ages might be caused mostly by the decreasing equilibrium collagen concentration. As expected, the feasibility study shows that this model can be used to show distinct extents of contraction in burns in patients of different ages. Nevertheless, contraction formation in children differs from contraction formation in adults because of growth, a factor that has not yet been incorporated in the model; therefore, the feasibility results for children differ from what is seen in the clinic.

Keywords: biomechanics, burns, feasibility, fibroblasts, morphoelasticity, sensitivity analysis, skin mechanics, wound contraction

Procedia PDF Downloads 160
18027 Using Machine Learning to Predict Answers to Big-Five Personality Questions

Authors: Aadityaa Singla

Abstract:

The big five personality traits are as follows: openness, conscientiousness, extraversion, agreeableness, and neuroticism. To gain insight into their personality, many people turn to these categories, each of which has a distinct meaning and set of characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this application is still a rather novel development. It is possible for various AI classification models to accurately predict the answer to a personality question from ten input questions, in contrast with the roughly one hundred questions a person would otherwise have to answer to gain a complete picture of their five personality traits. To approach this problem, various AI classification models were trained on a dataset to predict what a user may answer, and each model's prediction was compared to the actual response. Each question has five answer choices (a 20% chance of a correct random guess), and the models exceed that baseline to different degrees, demonstrating their value. The MLP classifier, decision tree, linear model, and k-nearest neighbors obtained test accuracies of 86.643%, 54.625%, 47.875%, and 52.125%, respectively. These results show that there is potential for more nuanced predictions to be made regarding personality in the future.
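
A sketch of the kind of classifier comparison described above, using scikit-learn on a synthetic stand-in dataset; the real study used Big Five questionnaire responses, which are not reproduced here, and the model settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in: 10 input questions, 5 answer classes (Likert-style)
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Linear (logistic)": LogisticRegression(max_iter=2000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(f"{name}: test accuracy {acc:.3f} (random-guess baseline 0.200)")
```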

Keywords: machine learning, personality, big five personality traits, cognitive science

Procedia PDF Downloads 145
18026 Eco-Design of Construction Industrial Park in China with Selection of Candidate Tenants

Authors: Yang Zhou, Kaijian Li, Guiwen Liu

Abstract:

Offsite construction is an innovative alternative to conventional site-based construction, with wide-ranging benefits. It requires that building components, elements, or modules be prefabricated and pre-assembled before being installed in their final locations. To improve efficiency and achieve synergies, construction companies in China have in recent years been clustered into construction industrial parks (CIPs). A CIP is a community of construction manufacturing and service businesses located together on a common property. Companies involved in industrial clusters can obtain environmental and economic benefits by sharing resources and information in a given region. Therefore, the concept of industrial symbiosis (IS) can be applied to the traditional CIP to achieve sustainable industrial development or redevelopment through the implementation of eco-industrial parks (EIPs). However, before designing a symbiosis network between companies in a CIP, candidate support tenants need to be selected to complement the existing construction companies. In this study, an access indicator system and a linear programming model are established to select candidate tenants in a CIP while satisfying the degree of connectivity among the enterprises in the CIP, minimizing the environmental impact, and maximizing the annualized profit of the CIP. The access indicator system, comprising three primary indicators and fifteen secondary indicators, is proposed at the park level. The fifteen secondary indicators are grouped under three primary indicators (industrial symbiosis, environmental performance, and economic benefit), according to the three dimensions of sustainability (environmental, economic, and social) and the three R's of the environment (reduce, reuse, and recycle). The linear programming model is a method to assess the satisfaction of all the indicators and to make an optimal multi-objective selection among candidate tenants. This method provides a practical tool for planners of a CIP in evaluating which of the candidate tenants would best complement the existing anchor construction tenants. The reasonableness and validity of the indicator system and the method warrant further study.
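
A minimal sketch of a tenant-selection programme of the general kind described above, using SciPy. The candidate tenants, profits, emission scores, connectivity scores, and constraint limits are invented placeholders, and the single-objective binary formulation is a simplification of the paper's multi-objective model.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Invented example data for 5 candidate support tenants
profit       = np.array([120.0, 95.0, 60.0, 150.0, 80.0])   # annualized profit
emissions    = np.array([ 30.0, 10.0,  8.0,  45.0, 12.0])   # environmental impact score
connectivity = np.array([  3.0,  5.0,  2.0,   4.0,  6.0])   # symbiosis links to anchor tenants

# Maximize profit  <=>  minimize -profit
c = -profit
constraints = [
    LinearConstraint(emissions,    ub=60.0),   # cap on total environmental impact
    LinearConstraint(connectivity, lb=10.0),   # required degree of connectivity
]
res = milp(c=c, constraints=constraints,
           integrality=np.ones(5), bounds=Bounds(0, 1))  # binary select/not-select

selected = np.flatnonzero(res.x > 0.5)
print("selected tenants:", selected, "total profit:", profit[selected].sum())
```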

Keywords: construction industrial park, China, industrial symbiosis, offsite construction, selection of support tenants

Procedia PDF Downloads 274
18025 Ethnic Identity as an Asset: Linking Ethnic Identity, Perceived Social Support, and Mental Health among Indigenous Adults in Taiwan

Authors: A.H.Y. Lai, C. Teyra

Abstract:

In Taiwan, there are 16 official indigenous groups, accounting for 2.3% of the total population. Like other indigenous populations worldwide, indigenous peoples in Taiwan have poorer mental health because of their history of oppression and colonisation. Amid the negative narratives, the ethnic identity of cultural minorities is their unique psychological and cultural asset. Moreover, positive socialisation is found to be related to strong ethnic identity. Based on Phinney's theory of ethnic identity development and social support theory, this study adopted a strength-based approach, conceptualising ethnic identity as the central organising principle that links perceived social support and mental health among indigenous adults in Taiwan. Aims: The overall aim is to examine the effect of ethnic identity and social support on mental health. The specific aims were to examine: (1) the association between ethnic identity and mental health; (2) the association between perceived social support and mental health; (3) the indirect effect of ethnic identity linking perceived social support and mental health. Methods: Participants were indigenous adults in Taiwan (n=200; mean age=29.51; female=31%, male=61%, others=8%). A cross-sectional quantitative design was implemented using data collected in 2020, with respondent-driven sampling. The standardised measurements were: the Ethnic Identity Scale (6 items); the Social Support Questionnaire-SF (6 items); the Patient Health Questionnaire (9 items); and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender, and economic satisfaction. A four-stage structural equation modelling (SEM) approach with robust maximum likelihood estimation was employed using Mplus 8.0. Step 1: A measurement model was built and tested using confirmatory factor analysis (CFA). Step 2: Factor covariates were re-specified as direct effects in the SEM and covariates were added. The direct effects of (1) ethnic identity and social support on depression and anxiety and (2) social support on ethnic identity were tested. The indirect effect of ethnic identity was examined with the bootstrapping technique. Results: The CFA model showed satisfactory fit statistics: χ²(df)=869.69(608), p<.05; comparative fit index (CFI)/Tucker-Lewis index (TLI)=0.95/0.94; root mean square error of approximation (RMSEA)=0.05; standardized root mean squared residual (SRMR)=0.05. Ethnic identity is represented by two latent factors: ethnic identity-commitment and ethnic identity-exploration. Depression, anxiety, and social support are single-factor latent variables. For the SEM, the model fit statistics were: χ²(df)=779.26(527), p<.05; CFI/TLI=0.94/0.93; RMSEA=0.05; SRMR=0.05. Ethnic identity-commitment (b=-0.30) and social support (b=-0.33) had direct negative effects on depression, but ethnic identity-exploration did not. Ethnic identity-commitment (b=-0.43) and social support (b=-0.31) had direct negative effects on anxiety, while identity-exploration (b=0.24) demonstrated a positive effect. Social support had direct positive effects on ethnic identity-exploration (b=0.26) and ethnic identity-commitment (b=0.31). Mediation analysis demonstrated the indirect effect of ethnic identity-commitment linking social support and depression (b=0.22). Implications: The results underscore the role of social support in preventing depression via ethnic identity commitment among indigenous adults in Taiwan.
Adopting the strength-based approach, mental health practitioners can mobilise indigenous peoples’ commitment to their group to promote their well-being.
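
The indirect-effect (mediation) step described above can be illustrated with a simple regression-based bootstrap in Python. This is a toy stand-in with simulated variables, not the latent-variable SEM actually fitted in Mplus.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Simulated stand-ins: social support -> ethnic identity commitment -> (lower) depression
support    = rng.normal(size=n)
commitment = 0.3 * support + rng.normal(scale=0.9, size=n)
depression = -0.3 * commitment - 0.3 * support + rng.normal(scale=0.9, size=n)

def ols_slope(x, y):
    """Slope of y on x (with intercept) via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(idx):
    a = ols_slope(support[idx], commitment[idx])                 # support -> commitment
    X = np.column_stack([np.ones(len(idx)), commitment[idx], support[idx]])
    b = np.linalg.lstsq(X, depression[idx], rcond=None)[0][1]    # commitment -> depression | support
    return a * b

boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(np.arange(n)):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```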

Keywords: ethnic identity, indigenous population, mental health, perceived social support

Procedia PDF Downloads 103
18024 Multiscale Modelling of Citrus Black Spot Transmission Dynamics along the Pre-Harvest Supply Chain

Authors: Muleya Nqobile, Winston Garira

Abstract:

We present a deterministic compartmental multi-scale model that encompasses the internal plant defensive mechanism and pathogen interaction, and we then consider nesting this model into an epidemiological model. The objective is to improve our understanding of the within-host and between-host transmission dynamics of Guignardia citricarpa Kiely. The inflow into the infected class is scaled down to the individual level, while the outflow is scaled up to the average population level. A conceptual model and a mathematical model were constructed to provide a theoretical framework that can be used to predict or identify disease patterns.

Keywords: epidemiological model, mathematical modelling, multi-scale modelling, immunological model

Procedia PDF Downloads 458
18023 Commutativity of Fractional Order Linear Time-Varying System

Authors: Salisu Ibrahim

Abstract:

The paper studies the commutativity associated with fractional order linear time-varying systems (LTVSs), which is an important area of study in control systems engineering. In this paper, we explore the properties of these systems and their ability to commute, and we propose a necessary and sufficient condition for the commutativity of fractional order LTVSs. Through simulation and mathematical analysis, we demonstrate that these systems exhibit commutativity under certain conditions. Our findings have implications for the design and control of fractional order systems in practical applications, science, and engineering. An example is given to show the effectiveness of the proposed method; the computations were performed in Mathematica and validated in MATLAB (Simulink).

Keywords: fractional differential equation, physical systems, equivalent circuit, and analog control

Procedia PDF Downloads 77
18022 The Asymmetric Proximal Support Vector Machine Based on Multitask Learning for Classification

Authors: Qing Wu, Fei-Yan Li, Heng-Chang Zhang

Abstract:

Multitask learning support vector machines (SVMs) have recently attracted increasing research attention. Given several related tasks, single-task learning methods train each task separately and ignore the cross-relationships among the tasks. Multitask learning, in contrast, can capture the correlation information among tasks and achieve better performance by training all tasks simultaneously. In addition, the asymmetric squared loss function can improve the generalization ability of the models on asymmetrically distributed data. In this paper, we first make two assumptions on the relatedness among tasks and propose two multitask learning proximal support vector machine algorithms, named MTL-a-PSVM and EMTL-a-PSVM, respectively. MTL-a-PSVM seeks a trade-off between the maximum expectile distance for each task model and the closeness of each task model to the general model. As an extension of MTL-a-PSVM, EMTL-a-PSVM can select appropriate kernel functions for shared information and private information. In addition, two corresponding special cases, named MTL-PSVM and EMTL-PSVM, are obtained by analyzing the asymmetric squared loss function; these can be easily implemented by solving linear systems. Experimental analysis on three classification datasets demonstrates the effectiveness and superiority of the proposed multitask learning algorithms.
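
The asymmetric squared (expectile-style) loss at the heart of these methods can be sketched with a simple linear fit in NumPy. This is a generic illustration on assumed data, not the MTL-a-PSVM formulation itself.

```python
import numpy as np

def fit_asymmetric_squared(X, y, tau=0.7, reg=1e-2, lr=0.05, n_iter=2000):
    """Linear model (w, b) minimizing the asymmetric squared loss
    L_tau(r) = tau*r^2 if r >= 0 else (1-tau)*r^2, plus a small ridge term."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        r = y - (X @ w + b)
        weight = np.where(r >= 0, tau, 1.0 - tau)   # asymmetric weighting of residuals
        grad_w = -2.0 * X.T @ (weight * r) / n + 2.0 * reg * w
        grad_b = -2.0 * np.mean(weight * r)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = X @ np.array([1.5, -0.5]) + rng.exponential(scale=1.0, size=300)  # skewed noise

w, b = fit_asymmetric_squared(X, y, tau=0.7)
print("weights:", w, "bias:", b)
```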

Keywords: multitask learning, asymmetric squared loss, EMTL-a-PSVM, classification

Procedia PDF Downloads 133
18021 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers; in the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the prediction of PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and these were then compared with models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real 'big' data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the R² coefficient by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
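
A compact sketch of the tensor-in, 24-hour-vector-out structure described above, written in PyTorch. The channel count, window length, and layer sizes are illustrative assumptions, not the networks evaluated in the study.

```python
import torch
import torch.nn as nn

class PM10Forecaster(nn.Module):
    """Maps a multichannel sensor window (PM2.5, PM10, temperature, wind, ...)
    to a 24-element vector of hourly PM10 predictions (illustrative sizes)."""
    def __init__(self, n_channels=6, window_hours=48):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(12),
        )
        self.head = nn.Linear(32 * 12, 24)   # next 24 hourly PM10 values

    def forward(self, x):                    # x: (batch, n_channels, window_hours)
        return self.head(self.conv(x).flatten(1))

model = PM10Forecaster()
x = torch.randn(4, 6, 48)                    # 4 stations, 6 channels, 48 past hours
print(model(x).shape)                        # torch.Size([4, 24])
```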

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 149
18020 Delay-Independent Closed-Loop Stabilization of Neutral System with Infinite Delays

Authors: Iyai Davies, Olivier L. C. Haas

Abstract:

In this paper, the problem of stability and stabilization for neutral delay-differential systems with infinite delay is investigated. Using the Lyapunov method, a new delay-independent sufficient condition for the stability of neutral systems with infinite delay is obtained in terms of a linear matrix inequality (LMI). Memoryless state feedback controllers are then designed for the stabilization of the system using the feasible solution of the resulting LMI, which is easily solved using standard optimization algorithms. Numerical examples are given to illustrate the results of the proposed methods.
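
The LMI-based state-feedback idea can be illustrated for an ordinary delay-free linear system with CVXPY; the neutral and infinite-delay conditions of the paper are more involved, so this sketch only shows the generic LMI machinery on an invented system.

```python
import numpy as np
import cvxpy as cp

# Invented unstable example system x' = A x + B u
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
n, m = B.shape

X = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-3

# Lyapunov-based stabilizability LMI: X > 0,  A X + X A' + B Y + Y' B' < 0
constraints = [X >> eps * np.eye(n),
               A @ X + X @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), constraints).solve()

K = Y.value @ np.linalg.inv(X.value)         # memoryless state feedback u = K x
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```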

Keywords: infinite delays, Lyapunov method, linear matrix inequality, neutral systems, stability

Procedia PDF Downloads 431
18019 Engineering Optimization of Flexible Energy Absorbers

Authors: Reza Hedayati, Meysam Jahanbakhshi

Abstract:

Elastic energy absorbers consisting of a ring-like plate and springs can be a good choice for increasing the impact duration during an accident. In the current project, an energy absorber system is optimized using four optimization methods: Kuhn-Tucker, Sequential Linear Programming (SLP), Concurrent Subspace Design (CSD), and Pshenichny-Lim-Belegundu-Arora (PLBA). Solution time, convergence, programming length, and accuracy of the results were considered to find the best solution algorithm. The results showed the superiority of PLBA over the other algorithms.
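
Sequential Linear Programming, one of the four methods compared, follows a simple pattern of repeated linearization under move limits. The sketch below applies it to an invented toy problem, not the energy-absorber model itself.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: minimize f(x) = x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0
f      = lambda x: x[0]**2 + x[1]**2
grad_f = lambda x: 2.0 * x
g      = lambda x: 1.0 - x[0] - x[1]
grad_g = lambda x: np.array([-1.0, -1.0])

x = np.array([2.0, 2.0])      # starting design
move = 0.5                    # move limit (trust region) on each variable

for it in range(30):
    # Linearize objective and constraint about x, solve an LP for the step d
    res = linprog(c=grad_f(x),
                  A_ub=[grad_g(x)], b_ub=[-g(x)],
                  bounds=[(-move, move)] * 2, method="highs")
    x = x + res.x
    move *= 0.8               # shrink move limits to damp oscillation

print("SLP solution:", x, "objective:", f(x))   # exact optimum is (0.5, 0.5)
```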

Keywords: Concurrent Subspace Design (CSD), Kuhn-Tucker, Pshenichny-Lim-Belegundu-Arora (PLBA), Sequential Linear Programming (SLP)

Procedia PDF Downloads 399
18018 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a Saint-Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more recent, extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data for precipitation, evaporation, inflows, and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations are validated using MATLAB and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamical model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level. The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
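
The single-outflow observation above (a linear control law responding to the mean water level) can be illustrated with a lumped water-balance sketch. All coefficients and forcing terms below are invented placeholders, not calibrated Lake Victoria values.

```python
import numpy as np
from scipy.integrate import solve_ivp

AREA = 68_800e6          # lake surface area in m^2 (approximate, from the abstract)
k_out = 5.0e3            # invented linear outflow coefficient, m^3/s per metre of level
h_ref = 0.0              # reference mean water level, m

def rhs(t, h):
    """dh/dt from net rain-evaporation, river inflow, and a linear outflow law."""
    rain_minus_evap = 2.0e-9 * np.sin(2 * np.pi * t / 3.15e7)   # seasonal net flux, m/s (toy)
    river_in = 1.0e3 / AREA                                      # inflow, m/s (toy)
    outflow = k_out * (h[0] - h_ref) / AREA                      # linear control law
    return [rain_minus_evap + river_in - outflow]

sol = solve_ivp(rhs, (0.0, 10 * 3.15e7), [0.5], max_step=86400.0)  # ten years, daily steps
print("final mean level (m):", sol.y[0, -1])
```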

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 227
18017 Proposal for a Generic Context Meta-Model

Authors: Jaouadi Imen, Ben Djemaa Raoudha, Ben Abdallah Hanene

Abstract:

Access to relevant information that is adapted to users' needs, preferences, and environment is a challenge in many running applications, and this has led to the emergence of context-aware systems. To facilitate the development of this class of applications, it is necessary that they share a common context meta-model. In this article, we present our context meta-model, which is defined using the OMG Meta Object Facility (MOF). This meta-model is based on the analysis and synthesis of context concepts proposed in the literature.

Keywords: context, meta-model, MOF, awareness system

Procedia PDF Downloads 560
18016 Numerical Model of Low Cost Rubber Isolators for Masonry Housing in High Seismic Regions

Authors: Ahmad B. Habieb, Gabriele Milani, Tavio Tavio, Federico Milani

Abstract:

Housing in developing countries often has inadequate seismic protection, particularly masonry housing. People choose this type of structure since it is relatively cheap to build. Seismic protection of masonry remains an interesting issue among researchers. In this study, we develop a low-cost seismic isolation system for masonry using fiber-reinforced elastomeric isolators. The proposed elastomer consists of a few layers of rubber pads and fiber laminae, making it lower in cost compared to conventional isolators. We present a finite element (FE) analysis to predict the behavior of the low-cost rubber isolators undergoing moderate deformations. The FE model of the elastomer involves a hyperelastic material model for the rubber pad; we adopt a Yeoh hyperelasticity model and estimate its coefficients from the available experimental data. Having characterized the shear behavior of the elastomers, we apply the isolation system to a small masonry house. To attach the isolators to the building, we model the shear behavior of the isolation system by means of a damped nonlinear spring model. By this approach, the FE analysis becomes computationally inexpensive. Several ground motion records are applied to observe the sensitivity of the response. Roof acceleration and tensile damage of the walls are the parameters used to evaluate the performance of the isolators. In this study, a concrete damage plasticity model is used to model masonry in the nonlinear range; this tool is available in the standard package of the Abaqus FE software. Finally, the results show that the proposed low-cost isolators are capable of reducing the roof acceleration and damage level of masonry housing. Through this study, we are also able to monitor the shear deformation of the isolators during seismic motion, which is useful for determining whether the isolator is applicable. According to the results, the deformations of the isolators on the benchmark one-story building are relatively small.
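
Estimating Yeoh coefficients from uniaxial test data can be sketched with SciPy's curve_fit. The stress-stretch data below are synthetic, and the incompressible uniaxial Yeoh relation is used only as a generic illustration, not as the study's calibration procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def yeoh_uniaxial_stress(lam, c10, c20, c30):
    """Cauchy stress for incompressible uniaxial tension under the Yeoh model
    W = c10*(I1-3) + c20*(I1-3)^2 + c30*(I1-3)^3."""
    i1 = lam**2 + 2.0 / lam
    dW_dI1 = c10 + 2.0 * c20 * (i1 - 3.0) + 3.0 * c30 * (i1 - 3.0) ** 2
    return 2.0 * (lam**2 - 1.0 / lam) * dW_dI1

# Synthetic "experimental" stretch/stress data (placeholders for real rubber tests)
rng = np.random.default_rng(0)
lam_data = np.linspace(1.05, 2.5, 30)
stress_data = yeoh_uniaxial_stress(lam_data, 0.35, -0.03, 0.01) + rng.normal(0, 0.01, lam_data.size)

(c10, c20, c30), _ = curve_fit(yeoh_uniaxial_stress, lam_data, stress_data, p0=[0.3, 0.0, 0.0])
print(f"fitted Yeoh coefficients: C10={c10:.3f}, C20={c20:.3f}, C30={c30:.3f} (MPa)")
```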

Keywords: masonry, low cost elastomeric isolator, finite element analysis, hyperelasticity, damped non-linear spring, concrete damage plasticity

Procedia PDF Downloads 286
18015 Pharmaceutical Applications of Newton's Second Law and Disc Inertia

Authors: Nicholas Jensen

Abstract:

As the effort to create new drugs to treat rare conditions cost-effectively intensifies, there is a need to ensure maximum efficiency in the manufacturing process. This includes the creation of ultracompact treatment forms, which can best be achieved via applications of fundamental laws of physics. This paper reports an experiment exploring the relationship between the forms of Newton's 2nd law appropriate to linear motion and to transversal architraves. The moment of inertia of three discs was determined experimentally by attaching the discs to a moment arm and compared with values derived from a theoretical relationship. Comparing the results with those obtained from previous experiments, they are found to be consistent with the first law of thermodynamics; it was further found that Newton's 2nd law violates the second law of thermodynamics. The purpose of this experiment was to explore the relationship between the forms of Newton's 2nd law appropriate to linear motion and to torque, a twisting action determined by the position vector r and the force vector F. Substituting one form for the other, angular acceleration is linear acceleration divided by the radius r of the moment arm. By analogy, these findings can contribute to a fuller understanding of thermodynamics in relation to viscosity. Implications for the pharmaceutical industry will be seen to be fruitful from these findings.

Keywords: Newtonian physics, inertia, viscosity, pharmaceutical applications

Procedia PDF Downloads 117
18014 CFD Analysis of an Aft Sweep Wing in Subsonic Flow and Making Analogy with Roskam Methods

Authors: Ehsan Sakhaei, Ali Taherabadi

Abstract:

In this study, an aft-swept wing with specific characteristics was analysed with a CFD method in the Fluent software. In this analysis, the wing's aerodynamic coefficients were calculated at different rake angles, and the wing lift-curve slope versus rake angle was obtained. The wing section was selected from the NACA 6-series airfoils. The sweep angle of the wing is 15 degrees, the aspect ratio 8, and the taper ratio 0.4. The design and modeling of this wing were done in the CATIA software; the model was meshed in the Gambit software, and its three-dimensional analysis was done in Fluent. The CFD methods used here were based on a pressure-based algorithm: the SIMPLE technique was used to solve the Navier-Stokes equations, and the Spalart-Allmaras model was utilized to simulate the three-dimensional wing in air. The Roskam method is one of the most commonly used methods for determining aerodynamic parameters in the field of airplane design. In this study, besides the CFD analysis, an advanced aircraft analysis was used to calculate the aerodynamic coefficients using the Roskam method. The CFD results were compared with data obtained from the Roskam method, and the validity of the relations was evaluated. The comparison showed that, in the linear region of the lift curve, there is only a minor difference between the aerodynamic parameters obtained from CFD and the relations presented by Roskam.

Keywords: aft sweep wing, CFD method, fluent, Roskam, Spalart-Allmaras model

Procedia PDF Downloads 504
18013 Fracture Behaviour of Functionally Graded Materials Using Graded Finite Elements

Authors: Mohamad Molavi Nojumi, Xiaodong Wang

Abstract:

In this research, the fracture behaviour of linear elastic isotropic functionally graded materials (FGMs) is investigated using a modified finite element method (FEM). FGMs are advantageous because they enhance the bonding strength of two incompatible materials and reduce residual and thermal stresses. Ceramic/metal composites are a main type of FGM. Ceramic materials are brittle, so there is a high possibility of cracks existing after fabrication or arising under in-service loading. In addition, damage analysis is necessary for a safe and efficient design. The FEM is a powerful numerical tool for analyzing complicated problems, and thus it is used here to investigate the fracture behaviour of FGMs. An accurate 9-node biquadratic quadrilateral graded element is proposed in which the influence of the variation of material properties is considered at the element level. The stiffness matrix of the graded elements is obtained using the principle of minimum potential energy. The implementation of graded elements prevents the artificial sudden jump of material properties that occurs when traditional finite elements are used to model FGMs. Numerical results are verified against existing solutions. Different numerical simulations are carried out to model stationary crack problems in nonhomogeneous plates. In these simulations, the material variation is assumed to occur in directions perpendicular and parallel to the crack line. Linear and exponential functions have been utilized to model the material gradient, as these are the forms most discussed in the literature. Various crack lengths are also considered. A major difference in the fracture behaviour of FGMs compared with homogeneous materials is related to the breaking of material symmetry. For example, when the material gradation direction is normal to the crack line, even under pure mode I loading there exist coupled mode I and mode II fracture, which originates from the shear induced in the model. Therefore, proper modelling of the material variation must be considered to capture the fracture behaviour of FGMs, especially when the material gradient index is high. Fracture properties such as mode I and mode II stress intensity factors (SIFs), energy release rates, and field variables near the crack tip are investigated and compared with results obtained using conventional homogeneous elements. It is revealed that graded elements provide higher accuracy with less effort in comparison with conventional homogeneous elements.
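
The idea of evaluating a graded property at the integration points of a single element, rather than assigning one property per element, can be sketched in one dimension. The exponential modulus variation and element geometry below are illustrative only, not the paper's 9-node formulation.

```python
import numpy as np

def graded_bar_stiffness(x1, x2, E_of_x, area=1.0, n_gauss=3):
    """Stiffness matrix of a 2-node linear bar element whose Young's modulus
    varies over the element, sampled at Gauss points (a 'graded' element)."""
    xi, w = np.polynomial.legendre.leggauss(n_gauss)   # Gauss points/weights on [-1, 1]
    L = x2 - x1
    B = np.array([-1.0 / L, 1.0 / L])                  # strain-displacement (constant for a linear bar)
    k = np.zeros((2, 2))
    for xi_i, w_i in zip(xi, w):
        x = 0.5 * (x1 + x2) + 0.5 * L * xi_i           # map Gauss point to physical coordinate
        k += w_i * E_of_x(x) * area * np.outer(B, B) * (L / 2.0)
    return k

# Exponentially graded modulus (e.g. metal-to-ceramic), illustrative values
E = lambda x: 70e9 * np.exp(2.0 * x)

k_graded = graded_bar_stiffness(0.0, 0.1, E)
k_homog  = graded_bar_stiffness(0.0, 0.1, lambda x: E(0.05))   # property frozen at the midpoint
print(k_graded)
print(k_homog)
```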

Keywords: finite element, fracture mechanics, functionally graded materials, graded element

Procedia PDF Downloads 174
18012 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide a bias correction step(s) based on biological considerations, such as GC content, applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between the estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
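
The alternating idea of the bilinear model (update β given X, then X given β) can be sketched with a toy alternating least-squares analogue on simulated data; this ignores the count likelihood, non-negativity, and EM machinery of the actual XAEM method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in: n_features observed values per sample, k latent components, shared X
n_features, k, n_samples = 50, 3, 20
X_true = np.abs(rng.normal(size=(n_features, k)))
betas_true = np.abs(rng.normal(size=(k, n_samples)))
Y = X_true @ betas_true + 0.05 * rng.normal(size=(n_features, n_samples))

# Alternating least squares for the bilinear model Y ~ X @ beta with both unknown
X = np.abs(rng.normal(size=(n_features, k)))            # random initialization
for it in range(200):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)        # update betas given X
    Xt, *_ = np.linalg.lstsq(beta.T, Y.T, rcond=None)   # update X given betas
    X = Xt.T
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)            # final beta update for reporting

print("relative reconstruction error:",
      np.linalg.norm(Y - X @ beta) / np.linalg.norm(Y))
```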

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 142
18011 Application of Artificial Neural Network for Prediction of Retention Times of Some Secoestrane Derivatives

Authors: Nataša Kalajdžija, Strahinja Kovačević, Davor Lončar, Sanja Podunavac Kuzmanović, Lidija Jevrić

Abstract:

In order to investigate the relationship between retention and structure, a quantitative structure-retention relationship (QSRR) study was applied to predict the retention times of a set of 23 secoestrane derivatives in reversed-phase thin-layer chromatography. After the calculation of molecular descriptors, a suitable subset of descriptors was selected using stepwise multiple linear regression. An artificial neural network (ANN) was then employed to model the nonlinear structure-activity relationships. The ANN technique resulted in a 5-6-1 ANN model with a correlation coefficient of 0.98. We found that the following descriptors have a significant effect on the retention times: critical pressure, total energy, protease inhibition, distribution coefficient (logD), and the lipophilicity parameter (miLogP). The prediction results are in very good agreement with the experimental ones. This approach provides a new and effective method for predicting the chromatographic retention index of the secoestrane derivatives investigated.

Keywords: lipophilicity, QSRR, RP TLC retention, secoestranes

Procedia PDF Downloads 454
18010 Determining the Factors Affecting Social Media Addiction (Virtual Tolerance, Virtual Communication), Phubbing, and Perception of Addiction in Nurses

Authors: Fatima Zehra Allahverdi, Nukhet Bayer

Abstract:

Objective: Three questions were formulated to examine nurses in stressful working units (intensive care and emergency units), utilizing self-perception theory and social support theory. This study provides a distinctive contribution by examining the combination of variables relating to stressful working environments. Method: The descriptive research was conducted with the participation of 400 nurses working at Ankara City Hospital. The study used multivariate analysis of variance (MANOVA), regression analysis, and a mediation model. Hypothesis one was tested with MANOVA followed by a Scheffe post hoc test; hypothesis two utilized a hierarchical linear regression model; hypothesis three used a mediation model. Result: The findings supported the hypotheses that intensive care units have significantly higher scores in virtual communication and virtual tolerance. The number of years on the job, virtual communication, virtual tolerance, and phubbing significantly predicted 51% of the variance in the perception of addiction. Interestingly, the number of years on the job, while significant, was negatively related to the perception of addiction. Conclusion: The reasoning behind these findings and the lack of significance in the emergency unit are discussed. Around 7% of the variance in phubbing was accounted for by working in intensive care units. The model accounted for 26.80% of the differences in the perception of addiction.

Keywords: phubbing, social media, working units, years on the job, stress

Procedia PDF Downloads 53
18009 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate the inputs, i.e. to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and there is a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) to find, by calibration, the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between the expected and real friction on die 2); modeling the error behavior with explicative rules helps improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that a high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (at most 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and, in under one minute, obtained a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. The error behavior has been modeled to help improve the overall process understanding. This is relevant for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
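
The two-step pattern (train a non-linear surrogate, then calibrate inputs by Gaussian random sampling towards a target output) can be sketched as follows. The process data, model hyperparameters, and target value are synthetic placeholders, not the industrial winding data.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Synthetic stand-in for process data: inputs (e.g. speeds, set tensions) -> measured tension
X = rng.normal(loc=[1.0, 0.5, 2.0], scale=[0.2, 0.1, 0.4], size=(500, 3))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 0] * X[:, 2] + 0.05 * rng.normal(size=500)

# Step 1: train a non-linear model of the process output
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=5.0).fit(X, y)

# Step 2: calibrate inputs by Gaussian random search towards a target output
target = 4.0
mu, sigma = X.mean(axis=0), X.std(axis=0)
candidates = rng.normal(mu, sigma, size=(20000, 3))      # Gaussian samples per input
pred = model.predict(candidates)
best = np.argmin(np.abs(pred - target))
print("best inputs:", candidates[best], "predicted output:", pred[best])
```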

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 241
18008 A Strength Weaknesses Opportunities and Threats Analysis of Socialisation Externalisation Combination and Internalisation Modes in Knowledge Management Practice: A Systematic Review of Literature

Authors: Aderonke Olaitan Adesina

Abstract:

Background: The paradigm shift to knowledge as the key to organizational innovation and competitive advantage has made the management of knowledge resources in organizations a mandate. A key component of the knowledge management (KM) cycle is knowledge creation, which research shows to be the result of the interaction between explicit and tacit knowledge. An effective knowledge creation process requires the use of the right model. The SECI (Socialisation, Externalisation, Combination, and Internalisation) model, proposed in 1995, is widely regarded as a preferred model for knowledge creation activities. The model has, however, been criticized by researchers who raise concerns, especially about its sequential nature. Therefore, this paper reviews the extant literature on the practical application of each mode of the SECI model, from 1995 to date, with a view to ascertaining its relevance in modern-day KM practice. The study will establish trends of use, with regard to the location and industry of use, and the interconnectedness of the modes. The main research question is: for organizational knowledge creation activities, is the SECI model indeed linear and sequential? In other words, does the model need to be reviewed in today's KM practice? The review will generate a compendium of the usage of the SECI modes and propose a framework of use, based on the strengths, weaknesses, opportunities, and threats (SWOT) findings of the study. Method: This study will employ the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate the usage and SWOT of the modes, in order to ascertain the success, or otherwise, of the sequential application of the modes in practice from 1995 to 2019. To achieve this purpose, four databases will be explored to search for open access, peer-reviewed articles from 1995 to 2019. The year 1995 is chosen as the baseline because it was the year the first paper on the SECI model was published. The study will appraise relevant peer-reviewed articles with the search terms SECI (or its synonym, knowledge creation theory), socialization, externalization, combination, and internalization in the title, abstract, or keyword list. This review will include only empirical studies of knowledge management initiatives in which the SECI model and its modes were used. Findings: It is expected that the study will highlight the practical relevance of each mode of the SECI model, whether or not the model is linear, and the SWOT of each mode. Concluding Statement: Organisations can, from the analysis, determine which modes to emphasise in their knowledge creation activities. It is expected that the study will support decision-making in the choice of the SECI model as a strategy for the management of organizational knowledge resources, and in appropriating the SECI model, or a remodeled version of it, as a theoretical framework in future KM research.

Keywords: combination, externalisation, internalisation, knowledge management, SECI model, socialisation

Procedia PDF Downloads 354
18007 Decoupled Dynamic Control of Unicycle Robot Using Integral Linear Quadratic Regulator and Sliding Mode Controller

Authors: Shweda Mohan, J. L. Nandagopal, S. Amritha

Abstract:

This paper focuses on the dynamic modelling of a unicycle robot. Two main concepts are used for balancing the unicycle robot: the reaction wheel pendulum and the inverted pendulum. The pitch axis is modelled as an inverted pendulum and the roll axis as a reaction wheel pendulum. The unicycle yaw dynamics are not considered, which makes the derivation of the dynamics relatively simple. For the roll controller, a sliding-mode controller has been adopted, and optimal methods are used to minimize switching-function chattering. For the pitch controller, an LQR controller has been implemented to drive the unicycle robot to follow the desired velocity trajectory. Pitching and rolling balance are achieved by two DC motors. The unicycle robot is a non-holonomic, non-linear, statically unbalanced system with the minimal number of contact points with the ground; it is therefore a perfect platform for researchers to study motion and balance control. These real-time solutions will be a viable option for advanced robotic systems and controls.
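
An LQR gain for a linearized pitch (inverted-pendulum-like) model can be computed from the continuous algebraic Riccati equation with SciPy. The system matrices and weights below are illustrative values, not the robot's identified parameters.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized inverted-pendulum-like pitch dynamics (illustrative values)
A = np.array([[0.0, 1.0],
              [15.0, 0.0]])     # theta'' ~ (g/l) * theta
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])        # state weighting
R = np.array([[0.1]])           # control effort weighting

# LQR gain K = R^{-1} B' P from the continuous algebraic Riccati equation
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```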

Keywords: decoupled dynamics, linear quadratic regulator (LQR) control, Lyapunov function sliding mode control, unicycle robot, velocity and trajectory control

Procedia PDF Downloads 363
18006 Model of MSD Risk Assessment at Workplace

Authors: K. Sekulová, M. Šimon

Abstract:

This article focuses on a risk assessment model for upper-extremity musculoskeletal disorders (MSDs) at the workplace. The model uses risk factors that are responsible for damage to the musculoskeletal system. Based on statistical calculations, the model is able to define the MSD risk faced by workers who are exposed to these risk factors. The model is also able to estimate how the MSD risk would decrease if these risk factors were eliminated.

Keywords: ergonomics, musculoskeletal disorders, occupational diseases, risk factors

Procedia PDF Downloads 550
18005 Identification of Classes of Bilinear Time Series Models

Authors: Anthony Usoro

Abstract:

In this paper, two classes of bilinear time series models are obtained under certain conditions from the general bilinear autoregressive moving average model. Bilinear autoregressive (BAR) and bilinear moving average (BMA) models have been identified. From the general bilinear model, the BAR and BMA models are proved to exist for q = Q = 0 (implying j = 0) and for p = P = 0 (implying i = 0), respectively. These models are found useful in modelling much economic and financial data.
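
A toy simulation of a simple first-order bilinear autoregressive recursion (a hypothetical member of the BAR class, not the paper's general specification) can illustrate the type of model obtained:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bar1(n, a=0.4, b=0.2, sigma=1.0):
    """Simulate a simple first-order bilinear AR process
    y[t] = a*y[t-1] + b*y[t-1]*e[t-1] + e[t], a toy member of the BAR class."""
    e = rng.normal(scale=sigma, size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = a * y[t - 1] + b * y[t - 1] * e[t - 1] + e[t]
    return y

y = simulate_bar1(1000)
print("sample mean:", y.mean(), "sample variance:", y.var())
```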

Keywords: autoregressive model, bilinear autoregressive model, bilinear moving average model, moving average model

Procedia PDF Downloads 407
18004 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth in Patients with Lymph Nodes Metastases

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This paper is devoted to the mathematical modelling of the progression and stages of breast cancer. We propose a consolidated mathematical model of primary tumor growth and secondary distant metastases growth in patients with lymph node metastases (CoM-III) as a new research tool. We are interested in: 1) modelling the whole natural history of primary tumor growth and secondary distant metastases growth in patients with lymph node metastases; 2) developing an adequate and precise CoM-III that reflects the relations between the primary tumor and secondary distant metastases; 3) analyzing the scope of application of the CoM-III; 4) implementing the model as a software tool. Firstly, the CoM-III includes an exponential tumor growth model as a system of deterministic nonlinear and linear equations. Secondly, the mathematical model corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and secondary distant metastases in patients with lymph node metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for secondary distant metastases; 3) the 'visible period' for secondary distant metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others are based on additional statistical data. Thus, the CoM-III model and predictive software: a) detect the different growth periods of the primary tumor and secondary distant metastases in patients with lymph node metastases; b) forecast the period of appearance of distant metastases in patients with lymph node metastases; c) have a higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by the CoM-III: the number of doublings for the 'non-visible' and 'visible' growth periods of secondary distant metastases, and the tumor volume doubling time (in days) for the 'non-visible' and 'visible' growth periods of secondary distant metastases. The CoM-III enables, for the first time, prediction of the whole natural history of primary tumor growth and secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on the primary tumor size. In summary: a) the CoM-III correctly describes primary tumor and secondary distant metastases growth for stages IA, IIA, IIB, and IIIB (T1-4N1-3M0) in patients with lymph node metastases (N1-3); b) it facilitates understanding of the period of appearance and inception of secondary distant metastases.

Keywords: breast cancer, exponential growth model, mathematical model, primary tumor, secondary metastases, survival

Procedia PDF Downloads 302
18003 Neutron Contamination in 18 MV Medical Linear Accelerator

Authors: Onur Karaman, A. Gunes Tanir

Abstract:

Photon radiation therapy is one of the most important methods used to treat cancer. However, the photon beam collimator materials in the linear accelerator (LINAC) head generally contain heavy elements, and through the interaction of bremsstrahlung photons with such heavy nuclei, neutrons can be produced inside the treatment room. In radiation therapy, neutron contamination contributes to the risk of secondary malignancies in patients, as well as in physicians working in this field. Since neutrons are more dangerous than photons, it is important to determine the neutron dose during radiotherapy treatment. This study aims to analyze the effect of field size, distance from the axis, and depth on the amount of in-field and out-of-field neutron contamination for an ElektaVmat accelerator with 18 MV nominal energy. The photon spectra at distances of 75, 150, 225, and 300 cm from the target and at the beam isocenter were scored for 5×5, 10×10, 20×20, 30×30, and 40×40 cm² fields. The results demonstrated that the neutron spectra and dose depend on the field size and distance. Beyond 225 cm from the isocenter, the dependence of the neutron dose on field size is minimal. It is concluded that as the field size increases, the determined neutron dose decreases. It is important to remember that, when treating with high-energy photons, the dose from contamination neutrons must be considered, as it is much greater than the photon dose.

Keywords: radiotherapy, neutron contamination, linear accelerators, photon

Procedia PDF Downloads 348
18002 Grid Computing for Multi-Objective Optimization Problems

Authors: Aouaouche Elmaouhab, Hassina Beggar

Abstract:

Solving multi-objective discrete optimization applications has always been limited by the resources of a single machine: by computing power or by memory, and most often both. To speed up the calculations, grid computing represents a primary solution for treating these applications through the parallelization of the resolution methods. In this work, we are interested in the study of some methods for solving the multi-objective integer linear programming problem based on branch-and-bound, and in the study of grid computing technology. This study allowed us to propose an implementation of the method of Abbas et al. on the grid, reducing the execution time. To highlight our contribution, the main results are presented.

Keywords: multi-objective optimization, integer linear programming, grid computing, parallel computing

Procedia PDF Downloads 485