Search results for: full-potential KKR-Green’s function method
22327 A Non-Invasive Method for Assessing the Adrenocortical Function in the Roan Antelope (Hippotragus equinus)
Authors: V. W. Kamgang, A. Van Der Goot, N. C. Bennett, A. Ganswindt
Abstract:
The roan antelope (Hippotragus equinus) is the second largest antelope species in Africa. In recent decades, populations of roan antelope have declined drastically throughout Africa. This situation has resulted in the development of intensive breeding programmes for the species in Southern Africa, where it is a popular game-ranching herbivore with increasing numbers in captivity. Avoidance of stress is important when managing wildlife to ensure animal welfare. In this regard, a non-invasive approach to monitor adrenocortical function as a measure of stress is preferable, since animals are not disturbed during sample collection. However, to date, a non-invasive method has not been established for the roan antelope. In this study, we validated a non-invasive technique to monitor adrenocortical function in this species. We performed an adrenocorticotropic hormone (ACTH) stimulation test at Lapalala Wilderness reserve, South Africa, using adult captive roan antelope to determine stress-related physiological responses. Two individually housed roan antelope (a male and a female) received an intramuscular injection of Synacthen Depot (Novartis) loaded into a 3 ml syringe (Pneu-Dart) at an estimated dose of 1 IU/kg. A total of 86 faecal samples (male: 46, female: 40) were collected 5 days before and 3 days post-injection. All samples were then lyophilised, pulverised and extracted with 80% ethanol (0.1 g/3 ml), and the resulting faecal extracts were analysed for immunoreactive faecal glucocorticoid metabolite (fGCM) concentrations using five enzyme immunoassays (EIAs): (i) 11-oxoaetiocholanolone I (detecting 11,17-dioxoandrostanes), (ii) 11-oxoaetiocholanolone II (detecting fGCMs with a 5α-pregnane-3α-ol-11-one structure), (iii) 5α-pregnane-3β,11β,21-triol-20-one (measuring 3β,11β-diol CM), (iv) cortisol and (v) corticosterone. In both animals, all EIAs detected an increase in fGCM concentrations post-ACTH administration. However, the 11-oxoaetiocholanolone I EIA performed best, with a 20-fold increase in the male (baseline: 0.384 µg/g DW; peak: 8.585 µg/g DW) and a 17-fold increase in the female (baseline: 0.323 µg/g DW; peak: 7.276 µg/g DW), measured 17 hours and 12 hours post-administration, respectively. These results are important because the ability to assess adrenocortical function non-invasively in roan antelope is an essential prerequisite for evaluating the effects of stressful circumstances, such as changes in environmental conditions or reproduction, in order to improve management strategies for the conservation of this iconic antelope species.
Keywords: adrenocorticotropic hormone challenge, adrenocortical function, captive breeding, non-invasive method, roan antelope
Procedia PDF Downloads 145
22326 The Impact of Audit Committee Industry Expertise on Internal Audit Function
Authors: Abdulaziz Alzeban
Abstract:
This study examines whether the quality of the internal audit function is indeed greater when audit committee members have industry expertise combined with auditing expertise. Data from a survey of 64 chief internal auditors from companies registered on the Saudi Stock Exchange (Tadawul) provide results suggesting that when audit committee members possess both industry expertise and auditing expertise, the committee’s role in improving the quality of internal audit is enhanced. This outcome can be generalized beyond the Saudi Arabian context.
Keywords: internal audit, audit committee, industry expertise, function
Procedia PDF Downloads 357
22325 On the Grid Technique by Approximating the Derivatives of the Solution of the Dirichlet Problems for (1+1) Dimensional Linear Schrodinger Equation
Authors: Lawrence A. Farinola
Abstract:
Four-point implicit schemes for the approximation of the first and pure second order derivatives of the solution of the Dirichlet problem for the one-dimensional Schrödinger equation with respect to the time variable t are constructed. Special four-point implicit difference boundary value problems are also proposed for the first and pure second derivatives of the solution with respect to the spatial variable x. The grid method is likewise applied to the mixed second derivative of the solution of the linear time-dependent Schrödinger equation. It is assumed that the initial function belongs to the Hölder space C⁸⁺ᵃ, 0 < α < 1, the Schrödinger wave function given in the Schrödinger equation is from the Hölder space Cₓ,ₜ⁶⁺ᵃ, ³⁺ᵃ/², the boundary functions are from C⁴⁺ᵃ, and between the initial and the boundary functions the conjugation conditions of orders q = 0, 1, 2, 3, 4 are satisfied. It is proven that the solution of the proposed difference schemes converges uniformly on the grids with order O(h² + k), where h is the step size in x and k is the step size in time. Numerical experiments are presented to support the analysis.
Keywords: approximation of derivatives, finite difference method, Schrödinger equation, uniform error
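The abstract does not reproduce its four-point implicit schemes, but the reported O(h² + k) accuracy matches a first-order-in-time, second-order-in-space implicit discretization. The sketch below illustrates such a scheme for the one-dimensional time-dependent Schrödinger equation; the domain, potential, grid sizes, and initial wave packet are all assumed for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.sparse import identity, diags
from scipy.sparse.linalg import splu

# Illustrative implicit scheme (backward Euler in t, centered differences in x)
# for i*psi_t = -psi_xx + V(x)*psi on [0, L] with psi = 0 at the boundaries.
# The scheme is first order in the time step k and second order in h, i.e. O(h^2 + k).
L_dom, T = 10.0, 1.0
N, M = 200, 400                      # hypothetical grid sizes
h, k = L_dom / N, T / M
x = np.linspace(0.0, L_dom, N + 1)
V = 0.5 * (x - L_dom / 2) ** 2       # hypothetical potential

# Discrete Hamiltonian on interior nodes: H = -D2 + diag(V)
main = 2.0 / h**2 + V[1:-1]
off = -1.0 / h**2 * np.ones(N - 2)
H = diags([off, main, off], offsets=[-1, 0, 1], format="csc")

# (I + i*k*H) psi^{n+1} = psi^n  -- one sparse solve per time step, reusing the LU factors
A = (identity(N - 1, format="csc") + 1j * k * H).tocsc()
lu = splu(A)

psi = np.exp(-(x[1:-1] - L_dom / 2) ** 2) * np.exp(1j * x[1:-1])  # hypothetical initial wave packet
for _ in range(M):
    psi = lu.solve(psi)

print("discrete L2 norm of psi after", M, "steps:", np.sqrt(h * np.sum(np.abs(psi) ** 2)))
```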
Procedia PDF Downloads 120
22324 Reliability-Based Method for Assessing Liquefaction Potential of Soils
Authors: Mehran Naghizaderokni, Asscar Janalizadechobbasty
Abstract:
This paper explores a probabilistic method for assessing the liquefaction potential of sandy soils. The current simplified methods for assessing soil liquefaction potential use a deterministic safety factor in order to determine whether liquefaction will occur or not. However, these methods are unable to determine the liquefaction probability associated with a safety factor. A solution to this problem can be found by reliability analysis. This paper presents a reliability analysis method based on a widely used deterministic liquefaction analysis method. The proposed probabilistic method is formulated based on the results of reliability analyses of 190 field records and observations of soil performance against liquefaction. The results of the present study show that a safety factor (confidence coefficient) greater or smaller than 1 does not by itself indicate safety against, or occurrence of, liquefaction, and that for assessing liquefaction probability a reliability-based analysis should be used. This reliability method uses the empirical acceleration attenuation law in the Chalos area to derive the probability density distribution function and the statistics for the earthquake-induced cyclic shear stress ratio (CSR). The CSR and CRR statistics are used in conjunction with the first-order second-moment method to calculate the relation between the liquefaction probability, the safety factor and the reliability index. Based on the proposed method, the liquefaction probability associated with a safety factor can be easily calculated. The influence of some of the soil parameters on the liquefaction probability can also be quantitatively evaluated.
Keywords: liquefaction, reliability analysis, Chalos area, civil and structural engineering
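The CSR/CRR statistics derived from the Chalos-area attenuation law are not given in the abstract; the sketch below only illustrates the first-order second-moment (FOSM) step that links the safety factor, reliability index, and liquefaction probability for a linear limit state g = CRR − CSR, with assumed placeholder statistics.

```python
from math import sqrt
from scipy.stats import norm

# Minimal FOSM reliability sketch for the limit state g = CRR - CSR
# (liquefaction when g < 0).  The statistics below are hypothetical placeholders,
# not the values derived in the paper from the Chalos-area attenuation law.
mu_crr, sigma_crr = 0.35, 0.07     # cyclic resistance ratio statistics (assumed)
mu_csr, sigma_csr = 0.25, 0.05     # cyclic stress ratio statistics (assumed)

mu_g = mu_crr - mu_csr
sigma_g = sqrt(sigma_crr**2 + sigma_csr**2)   # CRR and CSR assumed independent

beta = mu_g / sigma_g                # reliability index
p_liq = norm.cdf(-beta)              # liquefaction probability
fs = mu_crr / mu_csr                 # conventional deterministic safety factor

print(f"safety factor = {fs:.2f}, beta = {beta:.2f}, P(liquefaction) = {p_liq:.3f}")
```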
Procedia PDF Downloads 470
22323 Lung Function, Urinary Heavy Metals And ITS Other Influencing Factors Among Community In Klang Valley
Authors: Ammar Amsyar Abdul Haddi, Mohd Hasni Jaafar
Abstract:
Heavy metals are elements naturally present in the environment that can cause adverse health effects. However, little literature was found on their effects on lung function, where impairment of lung function may lead to various lung diseases. The objective of the study is to explore lung function impairment, urinary heavy metal levels, and their associated factors among the community in the Klang Valley, Malaysia. Sampling was done in Kuala Lumpur suburban public and housing areas during community events from March 2019 to October 2019. Respondents who gave consent were given a questionnaire to answer and then proceeded with a lung function test. Urine samples were obtained at the end of the session and sent for inductively coupled plasma mass spectrometry (ICP-MS) analysis of cadmium (Cd) and lead (Pb) concentrations. A total of 200 samples were analysed; 52% of respondents were male, with ages ranging from 18 to 74 years and a mean age of 38.44. Urinary samples showed that 12% of respondents (n=22) had a Cd level above average, and 1.5% of respondents (n=3) had urinary Pb above the normal level. Bivariate analysis showed a positive correlation between urinary Cd and urinary Pb (r=0.309; p<0.001). Furthermore, there were negative correlations between the urinary Cd level and forced vital capacity (FVC) (r=-0.202, p=0.004), forced expiratory volume in 1 second (FEV1) (r=-0.225, p=0.001), and forced expiratory flow between 25% and 75% of FVC (FEF25%-75%) (r=-0.187, p=0.008). However, urinary Pb did not show any association with FVC, FEV1, FEV1/FVC, or FEF25%-75%. Multiple linear regression analysis showed that urinary Cd remained significant and negatively affected FVC% (p=0.025) and FEV1% (p=0.004) of the predicted value. In addition, other factors such as education level (p=0.013) and duration of smoking (p=0.003) may influence both urinary Cd and lung function performance, suggesting Cd as a potential mediating factor between smoking and impairment of lung function. However, no interaction was detected between the heavy metals or other influencing factors in this study. In short, a negative linear relationship was detected between urinary Cd and lung function, and urinary Cd is likely to affect lung function in a restrictive pattern. Since smoking is also an influencing factor for urinary Cd and lung function impairment, it is strongly suggested that smokers be screened for lung function and urinary Cd level in the future for early disease prevention.
Keywords: lung function, heavy metals, community
Procedia PDF Downloads 156
22322 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while considering the route choice behavior of network users. NDP studies have mostly investigated the random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. This model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amount of road capacity and the origin zones simultaneously. Minimizing the travel and expansion cost of routes and origin zones on the one hand and minimizing CO emissions on the other are considered in this analysis at the same time. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Considering both demand and route choice as random is more applicable to the design of urban networks. The epsilon-constraint method is one of the methods for solving both linear and nonlinear multi-objective problems, and it is used to solve the problem here. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and treating the second objective as a constraint that should be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved with GAMS, and the set of Pareto points is obtained; there are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
Keywords: epsilon-constraint, multi-objective, network design, stochastic
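As a rough illustration of two ingredients named in the abstract, the Bureau of Public Roads (BPR) link impedance function and the epsilon-constraint sweep, the sketch below solves a toy bi-objective cost/emission problem; the single-link network, cost and emission forms, and bounds are placeholders rather than the paper's 7-link GAMS example.

```python
import numpy as np
from scipy.optimize import minimize

def bpr_time(flow, t0=1.0, cap=100.0, alpha=0.15, beta=4.0):
    """Bureau of Public Roads link impedance: t = t0 * (1 + alpha*(v/c)^beta)."""
    return t0 * (1.0 + alpha * (flow / cap) ** beta)

# Toy bi-objective design problem in the spirit of the abstract.  A fixed demand of
# 150 veh/h uses one link; the decision variable is the capacity expansion x.
# f1 = travel cost + expansion cost (rises with x through the expansion term),
# f2 = emission proxy proportional to total travel time (falls with x).
demand = 150.0

def f1(x):
    cap = 100.0 + x[0]
    return demand * bpr_time(demand, cap=cap) + 3.0 * x[0]

def f2(x):
    cap = 100.0 + x[0]
    return 0.5 * demand * bpr_time(demand, cap=cap)

bounds = [(0.0, 200.0)]
pareto = []
# Epsilon-constraint method: minimize f1 subject to f2 <= eps, sweeping eps over f2's range.
for eps in np.linspace(f2([0.0]), f2([200.0]), 8):
    res = minimize(f1, x0=[50.0], bounds=bounds,
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}])
    if res.success:
        pareto.append((res.x[0], f1(res.x), f2(res.x)))

for x_opt, c, e in pareto:
    print(f"expansion = {x_opt:6.1f}   cost = {c:8.1f}   emission = {e:7.1f}")
```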
Procedia PDF Downloads 647
22321 The Effects of Music Therapy on Positive Negative Syndrome Scale, Cognitive Function, and Quality of Life in Female Schizophrenic Patients
Authors: Elmeida Effendy, Mustafa M. Amin, Nauli Aulia Lubis, P. J. Sirait
Abstract:
Music therapy may have an effect on mental illnesses. This is a comparative, quasi-experimental study examining the effect of music therapy added to standard care on the Positive and Negative Syndrome Scale, cognitive function, and quality of life in female schizophrenic patients. Fifty schizophrenic participants, diagnosed with the semi-structured MINI ICD-X, were assigned to two groups, both of which received pharmacotherapy. Participants were assigned to each therapy group using a matched allocation method. Music therapy was added for the first group: these participants received music therapy, using a Mozart sonata, four times a week over a period of six weeks. Positive and negative symptoms were measured using the Positive and Negative Syndrome Scale (PANSS). Cognitive function was measured using the Mini Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). All rating scales were administered by certified, skilled residents every week after the music therapy session. The participants who received both pharmacotherapy and music therapy showed a significantly greater response than those who received pharmacotherapy only. The mean differences in response were -6.6164 (p=0.001) for PANSS, 2.911 (p=0.004) for MMSE, 3.618 (p=0.001) for MoCA, and 4.599 (p=0.001) for SF-36. Music therapy has beneficial effects on PANSS, cognitive function, and quality of life in schizophrenic patients.
Keywords: music therapy, rating scale, schizophrenia, symptoms
Procedia PDF Downloads 347
22320 Nonparametric Path Analysis with a Truncated Spline Approach in Modeling Waste Management Behavior Patterns
Authors: Adji Achmad Rinaldo Fernandes, Usriatur Rohma
Abstract:
Nonparametric path analysis is a statistical method that does not rely on the assumption that the form of the curve is known. The purpose of this study is to determine the best truncated spline nonparametric path function between linear and quadratic polynomial degrees with 1, 2, and 3 knot points, and to determine the significance of the estimates of the best truncated spline nonparametric path function in the model of the effect of perceived benefits and perceived convenience on the behavior of converting waste into economic value, through the intervening variable of the intention to change people's mindset about waste, using the t-test statistic at the jackknife resampling stage. The data used in this study are primary data obtained from research grants. The results show that the best nonparametric truncated spline path model is of quadratic polynomial degree with 3 knot points. In addition, the significance test of the best truncated spline nonparametric path function estimates using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
Keywords: nonparametric path analysis, truncated spline, linear, quadratic, behavior to turn waste into economic value, jackknife resampling
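A quadratic truncated spline with three knots, the form selected as best in this study, can be written as f(x) = β₀ + β₁x + β₂x² + Σₖ γₖ (x − κₖ)²₊ and fitted by least squares. The sketch below builds the corresponding design matrix on made-up data; the knot locations and variables are assumed for illustration only and are not those of the waste-management survey.

```python
import numpy as np

def truncated_quadratic_basis(x, knots):
    """Design matrix for a degree-2 truncated spline:
    f(x) = b0 + b1*x + b2*x^2 + sum_k g_k * (x - knot_k)^2_+ ."""
    cols = [np.ones_like(x), x, x**2]
    for kappa in knots:
        cols.append(np.maximum(x - kappa, 0.0) ** 2)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 150))                        # hypothetical exogenous variable
y = np.sin(x) + 0.1 * x**2 + rng.normal(0, 0.3, x.size)     # hypothetical endogenous response

knots = [2.5, 5.0, 7.5]                        # three knot points (assumed locations)
X = truncated_quadratic_basis(x, knots)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimate of the path function
y_hat = X @ coef

print("estimated coefficients:", np.round(coef, 3))
print("R^2:", 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))
```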
Procedia PDF Downloads 47
22319 The Network Relative Model Accuracy (NeRMA) Score: A Method to Quantify the Accuracy of Prediction Models in a Concurrent External Validation
Authors: Carl van Walraven, Meltem Tuna
Abstract:
Background: Network meta-analysis (NMA) quantifies the relative efficacy of 3 or more interventions from studies containing a subgroup of interventions. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct inclusion criteria that are evaluated on a common population (a ‘concurrent external validation’). Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that accounted for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which ranges from -∞ through 0 (accuracy of random guessing) to 1 (accuracy of the most accurate model in the concurrent external validation). Results: The unbiased prediction model had the highest SBS. The NeRMA score correctly ranked all simulated prediction models by the extent of bias from the known risk function. A SAS macro and an R function were created to implement the NeRMA Score. Conclusions: The NeRMA Score makes it possible to quantify the accuracy of binomial prediction models having distinct inclusion criteria in a concurrent external validation.
Keywords: prediction model accuracy, scaled Brier score, fixed effects methods, concurrent external validation
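The abstract does not spell out the NeRMA normalization, so the sketch below only illustrates the Scaled Brier Score and one plausible scaling in which 0 corresponds to random guessing and 1 to the most accurate model in the validation; the simulated models and the final scaling are assumptions, not the paper's definition.

```python
import numpy as np

def scaled_brier_score(y, p):
    """Scaled Brier Score: 1 - BS / BS_ref, where BS_ref is the Brier score of
    always predicting the observed event rate (a 'random guessing' reference)."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    bs = np.mean((p - y) ** 2)
    bs_ref = np.mean((y.mean() - y) ** 2)
    return 1.0 - bs / bs_ref

rng = np.random.default_rng(1)
true_p = rng.uniform(0.05, 0.6, 5000)          # known risk function values
y = rng.binomial(1, true_p)                    # simulated binary events

models = {                                     # hypothetical prediction models with different bias
    "unbiased": true_p,
    "biased":   np.clip(true_p + 0.15, 0, 1),
    "noisy":    np.clip(true_p + rng.normal(0, 0.2, true_p.size), 0, 1),
}
sbs = {name: scaled_brier_score(y, p) for name, p in models.items()}

best = max(sbs.values())
# Assumed NeRMA-style scaling: 0 = random guessing (SBS = 0), 1 = best model in this validation.
nerma = {name: s / best for name, s in sbs.items()}
for name in models:
    print(f"{name:9s}  SBS = {sbs[name]: .3f}   NeRMA-like score = {nerma[name]: .3f}")
```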
Procedia PDF Downloads 235
22318 Safety Approach Highway Alignment Optimization
Authors: Seyed Abbas Tabatabaei, Marjan Naderan Tahan, Arman Kadkhodai
Abstract:
An efficient optimization approach, called feasible gate (FG), is developed to enhance the computational efficiency and solution quality of the previously developed highway alignment optimization (HAO) model. This approach seeks to realistically represent various user preferences and environmentally sensitive areas and to consider them, along with geometric design constraints, in the optimization process. This is done by avoiding the generation of infeasible solutions that violate various constraints and thus focusing the search on feasible solutions. The proposed method is simple but significantly improves the model’s computation time and solution quality. On the other hand, highway alignment optimization through feasible gates yields only an economic model when it considers minimal design constraints, including the minimum radius of circular curves, the minimum length of vertical curves, and the maximum road gradient. Such modelling can reduce passenger comfort and road safety. In most highway optimization models, a penalty function is added for each constraint so that the final result just satisfies the minimum constraints. In this paper, we propose a safety-oriented solution by introducing a gift function.
Keywords: safety, highway geometry, optimization, alignment
Procedia PDF Downloads 409
22317 Parallel Evaluation of Sommerfeld Integrals for Multilayer Dyadic Green's Function
Authors: Duygu Kan, Mehmet Cayoren
Abstract:
Sommerfeld integrals (SIs) are commonly encountered in electromagnetics problems involving the analysis of antennas and scatterers embedded in planar multilayered media. Generally speaking, the analytical solution of SIs is unavailable, and it is well known that numerical evaluation of SIs is very time consuming and computationally expensive due to the highly oscillating and slowly decaying nature of the integrands. Therefore, fast computation of SIs is of paramount importance. In this paper, a parallel code has been developed to speed up the computation of SIs in the framework of the calculation of the dyadic Green’s function in multilayered media. The OpenMP shared memory approach is used to parallelize the SI algorithm and resulted in significant time savings. Moreover, accelerating the computation of the dyadic Green’s function is discussed based on the parallel SI algorithm developed.
Keywords: Sommerfeld integrals, multilayer dyadic Green’s function, OpenMP, shared memory parallel programming
Procedia PDF Downloads 247
22316 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using convolutional neural networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that the same subset of LivDet is used across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the parameter counts and mean average error rates of the models, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
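A minimal sketch of the loss/optimizer sweep described above is given below, using Keras built-in losses and optimizers; the tiny CNN stands in for AlexNet/VGGNet/ResNet, the LivDet 2017 data pipeline is omitted, and center loss (not a built-in Keras loss) is left out.

```python
import itertools
import tensorflow as tf

def small_cnn(input_shape=(96, 96, 1), n_classes=2):
    """Tiny stand-in for AlexNet/VGGNet/ResNet used only to illustrate the sweep."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

losses = ["categorical_crossentropy", "cosine_similarity", "hinge"]      # center loss omitted
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

# x_train, y_train would be the (one-hot encoded) live/spoof fingerprint patches,
# e.g. a LivDet 2017 subset; they are not provided here.
results = {}
for loss, opt in itertools.product(losses, optimizers):
    model = small_cnn()
    model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
    # hist = model.fit(x_train, y_train, epochs=10, validation_split=0.2, verbose=0)
    # results[(loss, opt)] = max(hist.history["val_accuracy"])
    results[(loss, opt)] = None  # placeholder until data are supplied
print(f"{len(results)} loss/optimizer combinations prepared")
```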
Procedia PDF Downloads 136
22315 Combining an Optimized Closed Principal Curve-Based Method and Evolutionary Neural Network for Ultrasound Prostate Segmentation
Authors: Tao Peng, Jing Zhao, Yanqing Xu, Jing Cai
Abstract:
Due to missing or ambiguous boundaries between the prostate and neighboring structures, the presence of shadow artifacts, and the large variability in prostate shapes, ultrasound prostate segmentation is challenging. To handle these issues, this paper develops a hybrid method for ultrasound prostate segmentation by combining an optimized closed principal curve-based method and an evolutionary neural network. The former can fit curves with great curvature and generate a contour composed of line segments connected by sorted vertices; the latter is used to express an appropriate map function (represented by the parameters of the evolutionary neural network) for generating a smooth prostate contour that matches the ground truth contour. Both qualitative and quantitative experimental results show that the proposed method achieves accurate and robust performance.
Keywords: ultrasound prostate segmentation, optimized closed polygonal segment method, evolutionary neural network, smooth mathematical model, principal curve
Procedia PDF Downloads 200
22314 The Unsteady Non-Equilibrium Distribution Function and Exact Equilibrium Time for a Dilute Gas Affected by Thermal Radiation Field
Authors: Taha Zakaraia Abdel Wahid
Abstract:
The behavior of the unsteady non-equilibrium distribution function for a dilute gas under the effect of a non-linear thermal radiation field is presented. To the best of our knowledge, this is done here for the first time. The distinctions and comparisons between the unsteady perturbed and the unsteady equilibrium velocity distribution functions are illustrated. The equilibrium time for the dilute gas is determined for the first time. The non-equilibrium thermodynamic properties of the system (the gas and the heated plate) are investigated. The results are applied to argon gas for various values of the radiation field intensity. 3D graphics illustrating the calculated variables are drawn to predict their behavior, and the results are discussed.
Keywords: dilute gas, radiation field, exact solutions, travelling wave method, unsteady BGK model, irreversible thermodynamics, unsteady non-equilibrium distribution functions
Procedia PDF Downloads 495
22313 Stabilization of a Three-Pole Active Magnetic Bearing by Hybrid Control Method in Static Mode
Authors: Mahdi Kiani, Hassan Salarieh, Aria Alasty, S. Mahdi Darbandi
Abstract:
The design and implementation of a hybrid control method for a three-pole active magnetic bearing (AMB) are proposed in this paper. The system is inherently nonlinear, and conventional nonlinear controllers are rather complicated, while the proposed hybrid controller has a piecewise linear form, i.e., it is linear in each sub-region. A state-feedback hybrid controller is designed in this study, and the unmeasurable states are estimated by an observer. The gains of the hybrid controller are obtained by the Linear Quadratic Regulator (LQR) method in each sub-region. To evaluate the performance, the designed controller is implemented on an experimental setup in static mode. The experimental results show that the proposed method can efficiently stabilize the three-pole AMB system. Simplicity of design, a large domain of attraction, an uncomplicated control law, and low computational time are advantages of this method over other nonlinear control strategies for AMB systems.
Keywords: active magnetic bearing, three pole AMB, hybrid control, Lyapunov function
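The identified three-pole AMB model is not given in the abstract; the sketch below only illustrates how the LQR gain for one linearized sub-region of such a piecewise-linear controller could be computed from the continuous-time algebraic Riccati equation, with placeholder system matrices and weights.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR gain for one linearized sub-region of the piecewise-linear (hybrid) controller.
# A, B below are hypothetical placeholders, not the identified three-pole AMB model.
A = np.array([[0.0, 1.0],
              [800.0, 0.0]])        # open-loop unstable rotor-like linearization (assumed)
B = np.array([[0.0],
              [50.0]])              # control influence of the coil currents (assumed)
Q = np.diag([1e4, 1.0])             # state weighting
R = np.array([[0.1]])               # control-effort weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # u = -K x in this sub-region

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("gain K =", np.round(K, 2))
print("closed-loop eigenvalues:", np.round(closed_loop_eigs, 2))
```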
Procedia PDF Downloads 341
22312 Approximate Solution to Non-Linear Schrödinger Equation with Harmonic Oscillator by Elzaki Decomposition Method
Authors: Emad K. Jaradat, Ala’a Al-Faqih
Abstract:
Nonlinear Schrödinger equations are regularly encountered in numerous parts of science and engineering. A variety of analytical methods have been proposed for solving these equations. In this work, we construct an approximate solution for the nonlinear Schrödinger equation, with harmonic oscillator potential, by the Elzaki Decomposition Method (EDM). To illustrate the effects of the harmonic oscillator on the behavior of the wave function, the nonlinear Schrödinger equation in one and two dimensions is considered. The results show that it is convenient and easy to apply the EDM to the one- and two-dimensional Schrödinger equation.
Keywords: non-linear Schrödinger equation, Elzaki decomposition method, harmonic oscillator, one and two-dimensional Schrödinger equation
Procedia PDF Downloads 187
22311 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data; therefore, linking multi-source data is essential to improve clustering performance. However, in practice multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data. Ensemble learning is a versatile machine learning model in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis, the fuzzy optimized multi-objective clustering ensemble (FOMOCE) method. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
Procedia PDF Downloads 189
22310 Optimal Design of RC Pier Accompanied with Multi Sliding Friction Damping Mechanism Using Combination of SNOPT and ANN Method
Authors: Angga S. Fajar, Y. Takahashi, J. Kiyono, S. Sawada
Abstract:
The structural system concept of an RC pier accompanied by a multi sliding friction damping mechanism was developed based on a numerical analysis approach. In implementation, however, designing this kind of structural system takes considerable effort because of its high complexity. During design, the special behaviors of this structural system should be considered, including flexible small deformation, sufficient elastic deformation capacity, sufficient lateral force resistance, and sufficient energy dissipation. The confinement distribution of the friction devices has a significant influence on these behaviors. Optimization and prediction with multi-function regression of this structural system are expected to provide an easier and simpler design method. The confinement distribution of the friction devices is optimized with SNOPT in OpenSees, while some design variables of the structure are predicted using multi-function regression with an ANN. Based on the optimization and prediction, this structural system can be designed easily and simply.
Keywords: RC Pier, multi sliding friction device, optimal design, flexible small deformation
Procedia PDF Downloads 367
22309 An Automatic Method for Building Learners’ Groups in Virtual Environment
Authors: O. Bourkoukou, Essaid El Bachari
Abstract:
Group composition is one of the key issues in collaborative learning for achieving a positive educational experience. The goal of this work is to propose, for teachers and tutors, a method to create effective collaborative learning groups in an e-learning environment based on the learner profile. For this purpose, a new function was defined to implicitly rate the learning objects used by a learner during his or her learning experience. This paper describes the proposed algorithm for building an adequate collaborative learning group. In order to verify the performance of the proposed algorithm, several experiments were conducted on a real data set in a virtual environment. The results show the effectiveness of the method, and it appears that the proposed approach is promising for producing better outcomes.
Keywords: building groups, collaborative learning, e-learning, learning objects
Procedia PDF Downloads 297
22308 Dynamics and Advection in a Vortex Parquet on the Plane
Authors: Filimonova Alexanra
Abstract:
Inviscid incompressible fluid flows are considered. The object of the study is a vortex parquet, a structure consisting of distributed vortex spots of different directions occupying the entire plane. The main attention is paid to the study of the advection of passive particles in the corresponding velocity field. The dynamics of the vortex structures is considered in a rectangular region under the assumption that periodic boundary conditions are imposed on the stream function. The numerical algorithms are based on the solution of the initial-boundary value problem for the nonstationary Euler equations in terms of vorticity and stream function. For this, the spectral-vortex meshless method is used; it is based on the approximation of the stream function by a truncated Fourier series and the approximation of the vorticity field by the least-squares method from its values in marker particles. A vortex configuration consisting of four vortex patches is investigated. Results of a numerical study of the dynamics and interaction of the structure are presented. The influence of the patch radius and of the relative position of positively and negatively directed patches on the processes of interaction and mixing is studied. The obtained results correspond to the following possible scenarios: the initial configuration does not change over time; the initial configuration forms a new structure, which is maintained for longer times; the initial configuration returns to its initial state after a certain period of time. The processes of mass transfer of vorticity by liquid particles on the plane were calculated and analyzed. The results of a numerical analysis of the particle dynamics and trajectories on the entire plane and of the field of local Lyapunov exponents are presented.
Keywords: ideal fluid, meshless methods, vortex structures in liquids, vortex parquet
Procedia PDF Downloads 64
22307 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method
Authors: Jiahui You, Kyung Jae Lee
Abstract:
Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction, by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed based on OpenFOAM, which is an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity while accounting for the fluid-solid mass transfer; an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid-liquid interface is updated by solving for the level set distance function and re-initializing it to a signed distance function. A smoothed Heaviside function then gives a smoothed solid-liquid interface over a narrow band with a fixed thickness. The stated equations are discretized by the finite volume method, while the re-initialization equation is discretized by the central difference method. The Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical results are compared with the 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. Sensitivity analysis is conducted for various Damkohler numbers (DaII) and Peclet numbers (Pe). We categorize the Fe(III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth. Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII<1, the precipitation occurs uniformly on the solid surface in both the upstream and downstream directions. When DaII>1, the precipitation occurs mainly on the solid surface in the upstream direction. When Pe>1, Fe(II) is transported deep into the pores and precipitates inside them. When Pe<1, the precipitation of Fe(III) occurs mainly on the solid surface in the upstream direction, and it easily precipitates inside the small pore structures. The porosity-permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe(II) dissolution, transport, and Fe(III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs to avoid clogging and wellbore pollution. Understanding Fe(III) precipitation and Fe(II) release and transport behaviors gives rise to a highly efficient hydraulic fracturing project.
Keywords: reactive transport, shale, kerogen, precipitation
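The abstract does not state which smoothed Heaviside function is used; a commonly used form in level set methods, given here only for orientation, is H_ε(φ) = 0 for φ < −ε, H_ε(φ) = 1 for φ > ε, and H_ε(φ) = ½·[1 + φ/ε + (1/π)·sin(πφ/ε)] for |φ| ≤ ε, where φ is the signed distance function (negative on one side of the solid-liquid interface, positive on the other) and ε sets the half-width of the narrow band over which the interface is smoothed.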
Procedia PDF Downloads 163
22306 Estimation of Coefficient of Discharge of Side Trapezoidal Labyrinth Weir Using Group Method of Data Handling Technique
Authors: M. A. Ansari, A. Hussain, A. Uddin
Abstract:
A side weir is a flow diversion structure provided in the side wall of a channel to divert water from the main channel to a branch channel. The trapezoidal labyrinth weir is a special type of weir in which the crest length of the weir is increased to pass a higher discharge. Experimental and numerical studies related to the coefficient of discharge of a trapezoidal labyrinth weir in an open channel are presented in this study. The Group Method of Data Handling (GMDH) with a quadratic polynomial transfer function has been used to predict the coefficient of discharge for the side trapezoidal labyrinth weir. A new model for the coefficient of discharge of the labyrinth weir is also developed by the regression method. Generalized models for predicting the coefficient of discharge for the labyrinth weir using the GMDH network have been developed as well. The predictions based on the GMDH model are more satisfactory than those given by traditional regression equations.
Keywords: discharge coefficient, group method of data handling, open channel, side labyrinth weir
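The sketch below illustrates the GMDH building block named in the abstract: quadratic polynomial partial descriptions y = a₀ + a₁xᵢ + a₂xⱼ + a₃xᵢ² + a₄xⱼ² + a₅xᵢxⱼ fitted over pairs of inputs by least squares and ranked on a validation set. The inputs and the synthetic discharge coefficient are made up, and only a single GMDH layer is shown; the paper's actual network and data are not reproduced.

```python
import itertools
import numpy as np

def quad_features(xi, xj):
    """GMDH partial description: y = a0 + a1*xi + a2*xj + a3*xi^2 + a4*xj^2 + a5*xi*xj."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi**2, xj**2, xi * xj])

rng = np.random.default_rng(0)
# Hypothetical dimensionless inputs (e.g. Froude number, ratios of geometric parameters)
# and a made-up discharge coefficient used only to exercise the algorithm.
X = rng.uniform(0.1, 1.0, size=(200, 4))
cd = 0.6 + 0.2 * X[:, 0] * X[:, 1] - 0.1 * X[:, 2] ** 2 + rng.normal(0, 0.01, 200)

train, val = slice(0, 150), slice(150, 200)
candidates = []
for i, j in itertools.combinations(range(X.shape[1]), 2):
    A_tr = quad_features(X[train, i], X[train, j])
    coef, *_ = np.linalg.lstsq(A_tr, cd[train], rcond=None)
    pred_val = quad_features(X[val, i], X[val, j]) @ coef
    rmse = np.sqrt(np.mean((pred_val - cd[val]) ** 2))   # external (validation) criterion
    candidates.append((rmse, (i, j)))

candidates.sort(key=lambda c: c[0])
best_rmse, best_pair = candidates[0]
print(f"best input pair {best_pair}, validation RMSE = {best_rmse:.4f}")
# In a full GMDH network the outputs of the best partial descriptions would feed the next layer.
```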
Procedia PDF Downloads 160
22305 A Generalisation of Pearson's Curve System and Explicit Representation of the Associated Density Function
Authors: S. B. Provost, Hossein Zareamoghaddam
Abstract:
A univariate density approximation technique, whereby the derivative of the logarithm of a density function is assumed to be expressible as a rational function, is introduced. This approach, which extends Pearson’s curve system, is solely based on the moments of a distribution up to a determinable order. Upon solving a system of linear equations, the coefficients of the polynomial ratio can readily be identified. An explicit solution to the integral representation of the resulting density approximant is then obtained. It will be explained that, when utilised in conjunction with sample moments, this methodology lends itself to the modelling of ‘big data’. Applications to sets of univariate and bivariate observations will be presented.
Keywords: density estimation, log-density, moments, Pearson's curve system
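In this setting the log-density derivative is written as a ratio of polynomials, for example f′(x)/f(x) = −(a₀ + a₁x + … + aₚxᵖ)/(c₀ + c₁x + … + c_q x^q) = −P(x)/Q(x), with Pearson's classical system recovered for p = 1 and q = 2. One standard route to the linear equations mentioned above, assuming the boundary terms vanish, is to multiply through by Q(x) and integrate x^k times both sides; integration by parts then gives E[k·X^(k−1)·Q(X) + X^k·Q′(X)] = E[X^k·P(X)] for k = 0, 1, 2, …, a system that is linear in the unknown coefficients and involves only the moments of the distribution. The exact formulation used in the paper may differ in its details.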
Procedia PDF Downloads 280
22304 Optimal Scheduling for Energy Storage System Considering Reliability Constraints
Authors: Wook-Won Kim, Je-Seok Shin, Jin-O Kim
Abstract:
This paper proposes a method for the optimal scheduling of a battery energy storage system subject to a reliability constraint on the energy storage system. The optimal scheduling problem is solved by dynamic programming with a proposed transition matrix. The proposed optimal scheduling method guarantees the minimum fuel cost within the specified reliability constraint. For evaluating the proposed method, a time-dependent capacity outage probability table (COPT) is used, calculated by convolution of the probability mass functions of the individual generators. This study shows the resulting optimal schedule of the energy storage system.
Keywords: energy storage system (ESS), optimal scheduling, dynamic programming, reliability constraints
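The transition matrix and the COPT-based reliability constraint of the paper are not reproduced here; the sketch below only illustrates the dynamic-programming backbone, a minimum-cost recursion over discretized battery states of charge, with made-up prices and load and a simple reserve placeholder standing in for the reliability constraint.

```python
import numpy as np

# Dynamic program over discretized state of charge (SOC), minimizing purchase cost.
# All numbers are hypothetical; the paper's COPT-based reliability constraint is
# represented here only by a minimum-reserve placeholder.
T = 24
price = 20 + 15 * np.sin(np.linspace(0, 2 * np.pi, T))     # $/MWh, made up
load = 5 + 2 * np.cos(np.linspace(0, 2 * np.pi, T))        # MWh per hour, made up
soc_levels = np.linspace(0.0, 10.0, 21)                    # MWh, 0.5 MWh steps
p_max = 2.0                                                 # max charge/discharge per hour
soc_min_reserve = 2.0                                       # placeholder reliability reserve

n = soc_levels.size
cost = np.full((T + 1, n), np.inf)
prev = np.zeros((T, n), dtype=int)    # predecessor pointers; backtracking recovers the schedule
cost[0, n // 2] = 0.0                 # start at mid SOC

for t in range(T):
    for s in range(n):
        if not np.isfinite(cost[t, s]):
            continue
        for s_next in range(n):
            delta = soc_levels[s_next] - soc_levels[s]       # >0 charging, <0 discharging
            if abs(delta) > p_max or soc_levels[s_next] < soc_min_reserve:
                continue
            grid_energy = load[t] + delta                    # energy bought from the grid
            c = cost[t, s] + price[t] * max(grid_energy, 0.0)
            if c < cost[t + 1, s_next]:
                cost[t + 1, s_next] = c
                prev[t, s_next] = s

best_final = int(np.argmin(cost[T]))
print(f"minimum daily cost = {cost[T, best_final]:.1f} (arbitrary units)")
```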
Procedia PDF Downloads 406
22303 Topology Optimization of Heat and Mass Transfer for Two Fluids under Steady State Laminar Regime: Application on Heat Exchangers
Authors: Rony Tawk, Boutros Ghannam, Maroun Nemer
Abstract:
The topology optimization technique presents a potential tool for the design and optimization of structures involved in mass and heat transfer. The method starts with an initial intermediate domain and should be able to progressively distribute the solid and the two fluids exchanging heat. The multi-objective function of the problem takes into account the minimization of total pressure loss and the maximization of heat transfer between the solid and fluid subdomains. Existing methods account for the presence of only one fluid, while the present work extends the optimized distribution to a solid and two different fluids. This requires separating the channels of the two fluids and ensuring a minimum solid thickness between them, which is done by adding a third objective function to the multi-objective optimization problem. This article uses the density approach, where each cell holds two local design parameters ranging from 0 to 1, and the combination of their extreme values defines the presence of solid, cold fluid, or hot fluid in the cell. The finite volume method is used for the direct solver, coupled with a discrete adjoint approach for sensitivity analysis and the method of moving asymptotes for numerical optimization. Several examples are presented to show the ability of the method to find a trade-off between minimization of power dissipation and maximization of heat transfer while ensuring the separation and continuity of the channel of each fluid without crossing or mixing the fluids. The main conclusion is the possibility of finding an optimal bi-fluid domain using topology optimization, defining a fluid-to-fluid heat exchanger device.
Keywords: topology optimization, density approach, bi-fluid domain, laminar steady state regime, fluid-to-fluid heat exchanger
Procedia PDF Downloads 399
22302 Strengthening of Concrete Slabs with Steel Beams
Authors: Mizam Doğan
Abstract:
During their service life, structures can be damaged if they are subjected to dead and live loads greater than the design values. To prevent this, the possible loads must be correctly calculated, the structure must be designed according to the determined loads, and the structure must not be used outside its intended function. If the loading case of the structure changes because its function changes, it must be strengthened in order to serve its new function. Strengthening is a process carried out by increasing the existing strength of the structural system elements of the structure, such as reinforced concrete walls, beams, and slabs. Strengthening can be done by casting reinforced concrete or by placing steel and fiber structural elements. In this paper, the strengthening of the columns and slabs of a structure whose function has changed is studied step by step. This strengthening is made to increase the vertical and lateral load-carrying capacity of the building, not to repair a damaged structural system.
Keywords: strengthening, RC slabs, seismic load, steel beam, structural irregularity
Procedia PDF Downloads 260
22301 A General Form of Characteristics Method Applied on Minimum Length Nozzles Design
Authors: Merouane Salhi, Mohamed Roudane, Abdelkader Kirad
Abstract:
In this work, we present a new form of the method of characteristics, which is a technique for solving partial differential equations. Typically, it applies to first-order equations; the aim of the method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data. The present form is developed under real gas theory, because when the thermal and caloric imperfections of a gas increase, the specific heats and their ratio no longer remain constant and start to vary with the gas parameters. The gas no longer remains perfect: its state equation changes, and it becomes a real gas. The presented characteristic equations remain valid whatever the area or field of study. Here, the developed Prandtl-Meyer function is inserted into the mathematical system to find a new model when the effect of stagnation pressure is taken into account. In this case, the effects of molecular size and intermolecular attraction forces intervene to correct the state equation, the thermodynamic parameters, and the value of the Prandtl-Meyer function. With the assumption that Berthelot’s state equation accounts for molecular size and intermolecular force effects, expressions are developed for analyzing supersonic flow for a thermally and calorically imperfect gas. The supersonic parameters depend directly on the stagnation parameters of the combustion chamber. The resolution is carried out by the finite difference method using a predictor-corrector algorithm. As a result, the developed mathematical model is used to design 2D minimum-length nozzles under the effect of the stagnation parameters of the fluid flow. A comparison of nozzle shapes and characteristics for air is made between the perfect gas (PG) and high temperature models on the one hand and our results from real gas theory on the other.
Keywords: numerical methods, nozzles design, real gas, stagnation parameters, supersonic expansion, the characteristics method
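For reference, the perfect-gas Prandtl-Meyer function that the real-gas treatment generalizes is ν(M) = √((γ+1)/(γ−1)) · arctan√(((γ−1)/(γ+1))·(M²−1)) − arctan√(M²−1), where M is the Mach number and γ the ratio of specific heats. In the thermally and calorically imperfect case considered here, γ and the speed of sound vary with temperature and with the stagnation conditions, so this closed form no longer applies and the corrected Prandtl-Meyer function has to be evaluated numerically together with the Berthelot state equation; the corrected expressions themselves are those developed in the paper and are not reproduced here.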
Procedia PDF Downloads 242
22300 Parameter Estimation for the Mixture of Generalized Gamma Model
Authors: Wikanda Phaphan
Abstract:
The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution. These two distributions were presented by Suksaengrakcharoen and Bodhisuwan in 2014. The findings showed that the probability density function (pdf) is fairly complex, which causes problems in estimating the parameters. The difficulty in parameter estimation is that the estimators cannot be obtained in closed form; thus, numerical estimation is used to find them. In this study, we present parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. Here λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. The data were generated by the acceptance-rejection method and used for estimating α, β, λ and p. We use the Monte Carlo technique to assess the estimators' performance: the sample sizes were set to 10, 30, and 100, and the simulations were repeated 20 times in each case. We evaluated the effectiveness of the proposed estimators by considering the values of the mean squared errors and the bias. The findings revealed that the EM algorithm produced estimates close to the actual values. Also, the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods are less precise than the maximum likelihood estimators obtained via the EM algorithm.
Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method
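The sketch below shows only the E-step/M-step skeleton of such an estimation, applied to a two-component mixture built from scipy's generalized gamma density. It is not the specific generalized gamma plus length-biased generalized gamma mixture of the paper, the parameterization differs, and the M-step uses a generic numerical optimizer rather than the conjugate gradient or quasi-Newton updates studied there.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gengamma

rng = np.random.default_rng(0)
# Synthetic data from a two-component generalized gamma mixture (placeholder for
# the paper's generalized gamma / length-biased generalized gamma mixture).
x = np.concatenate([gengamma.rvs(a=2.0, c=1.5, scale=1.0, size=300, random_state=rng),
                    gengamma.rvs(a=4.0, c=1.0, scale=2.0, size=200, random_state=rng)])

def neg_loglik(theta, data, w):
    a, c, scale = theta
    if min(a, c, scale) <= 0:
        return np.inf
    return -np.sum(w * gengamma.logpdf(data, a=a, c=c, scale=scale))

p = 0.5
theta = [np.array([1.5, 1.0, 1.0]), np.array([3.0, 1.0, 2.0])]   # initial guesses
for _ in range(20):                                              # EM iterations
    # E-step: posterior membership probabilities (responsibilities)
    d0 = p * gengamma.pdf(x, *theta[0][:2], scale=theta[0][2])
    d1 = (1 - p) * gengamma.pdf(x, *theta[1][:2], scale=theta[1][2])
    r = d0 / (d0 + d1)
    # M-step: weight parameter in closed form, component parameters numerically
    p = r.mean()
    theta[0] = minimize(neg_loglik, theta[0], args=(x, r), method="Nelder-Mead").x
    theta[1] = minimize(neg_loglik, theta[1], args=(x, 1 - r), method="Nelder-Mead").x

print("weight p =", round(p, 3))
print("component parameters (a, c, scale):", np.round(theta[0], 3), np.round(theta[1], 3))
```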
Procedia PDF Downloads 219
22299 Synthesis of a Model Predictive Controller for Artificial Pancreas
Authors: Mohamed El Hachimi, Abdelhakim Ballouk, Ilyas Khelafa, Abdelaziz Mouhou
Abstract:
Introduction: Type 1 diabetes occurs when beta cells are destroyed by the body's own immune system. Treatment of type 1 diabetes mellitus could be greatly improved by applying a closed-loop control strategy to insulin delivery, also known as an artificial pancreas (AP). Method: In this paper, we present a new formulation of the cost function for a Model Predictive Control (MPC) scheme, utilizing a technique that accelerates the control action of the AP and tackles the nonlinearity of the control problem via asymmetric objective functions. Finding: The finding of this work is a new Model Predictive Control algorithm that leads to good performance, such as decreasing the time spent in hyperglycaemia and avoiding hypoglycaemia. Conclusion: These performances are validated in in silico trials.
Keywords: artificial pancreas, control algorithm, biomedical control, MPC, objective function, nonlinearity
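The glucose-insulin model and the exact asymmetric cost of the paper are not given in the abstract; the toy sketch below only illustrates the idea of an asymmetric MPC objective that penalizes predicted hypoglycaemia far more heavily than hyperglycaemia, using a crude one-state linear glucose model with assumed coefficients.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MPC step with an asymmetric cost: predicted glucose below target is penalized
# much more heavily than glucose above target (hypoglycaemia avoidance).
# The one-state linear "glucose model" and all weights are placeholders, not the
# physiological model used in the paper.
target = 110.0          # mg/dL
horizon = 12            # prediction steps (e.g. 5-minute steps -> 1 hour)
g0 = 180.0              # current glucose
drift = 1.0             # mg/dL rise per step without insulin (assumed)
gain = 4.0              # mg/dL drop per step per unit of insulin (assumed)
w_hypo, w_hyper, w_u = 50.0, 1.0, 0.1

def predict(u):
    g, traj = g0, []
    for uk in u:
        g = g + drift - gain * uk
        traj.append(g)
    return np.array(traj)

def cost(u):
    g = predict(u)
    low = np.maximum(target - g, 0.0)     # predicted hypoglycaemic excursion
    high = np.maximum(g - target, 0.0)    # predicted hyperglycaemic excursion
    return np.sum(w_hypo * low**2 + w_hyper * high**2 + w_u * np.asarray(u)**2)

res = minimize(cost, x0=np.zeros(horizon), bounds=[(0.0, 5.0)] * horizon)
u_opt = res.x
print("first insulin move applied (receding horizon):", round(u_opt[0], 3))
print("predicted glucose trajectory:", np.round(predict(u_opt), 1))
```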
Procedia PDF Downloads 307
22298 Evaluating the Rationality of Airport Design from the Perspective of Passenger Experience: An Example of Terminal 3 of Beijing Capital International Airport
Authors: Yan Li, Yujiang Gao
Abstract:
Passengers are the main users of an airport, and whether their travel experience in the airport is comfortable is an important indicator for evaluating the reasonableness of the airport design. Taking Terminal 3 of Beijing Capital International Airport as an example, this paper analyzes how the airport addresses the inconvenience passengers experience from getting lost, excessive congestion, and excessively long streamlines. First, by analyzing the design of the architectural functional streamlines, the design of the interior spaces, and the interrelationship between interior design and passenger experience, it is concluded that the airport is able to address the two major problems of disorientation and excessive congestion. Then, by analyzing the architectural functional streamlines and collecting passenger experience evaluations, it is concluded that the airport cannot resolve the inconvenience caused to passengers by excessively long streamlines. Finally, it is concluded that the airport design meets demand in terms of the overall passenger experience, but the boarding streamline is still relatively long and remains a fly in the ointment.
Keywords: passengers’ experience, terminal 3 of Beijing capital international airport, lost directions, excessive congestion, excessively long streamlines
Procedia PDF Downloads 197