Search results for: optimal homotopy perturbation method

19019 Numerical Solution of Two-Dimensional Solute Transport System Using Operational Matrices

Authors: Shubham Jaiswal

Abstract:

In this study, the numerical solution of a two-dimensional solute transport system in a homogeneous porous medium of finite length is obtained. The transport system considered includes terms accounting for advection, dispersion, and first-order decay, with first-type boundary conditions. Initially, the aquifer is assumed to be solute-free, and a constant input concentration is imposed at the inlet boundary. The solution describes the solute concentration in the rectangular inflow region of the homogeneous porous medium. The numerical solution is derived using the spectral collocation method. The numerical computations and graphical presentations show that the method is effective and reliable for solving the physical model with complicated boundary conditions, even in the presence of the reaction term.
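
The core building block of a Chebyshev spectral collocation scheme of the kind described above is the differentiation matrix on Chebyshev-Gauss-Lobatto points. The following minimal Python sketch constructs it and checks it on a smooth test function; the grid size and the test function are illustrative choices, not values from the paper.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)                  # collocation points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))           # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                               # diagonal entries
    return D, x

# quick check: differentiate sin(pi*x) and compare with the exact derivative pi*cos(pi*x)
D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
print(f"max derivative error on 17 points: {err:.2e}")
```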

Keywords: two-dimensional solute transport system, spectral collocation method, Chebyshev polynomials, Chebyshev differentiation matrix

Procedia PDF Downloads 213
19018 Polymer Patterning by Dip Pen Nanolithography

Authors: Ayse Cagil Kandemir, Derya Erdem, Markus Niederberger, Ralph Spolenak

Abstract:

Dip pen nanolithography (DPN), a tip-based method, offers a novel approach to producing nano- and micro-scale patterns owing to its high resolution and pattern flexibility. It was introduced as a new constructive scanning probe lithography (SPL) technique. DPN delivers material in the form of an ink, using the tip of a cantilever as the pen and the substrate as the paper, in order to form surface architectures. The first studies relied on the delivery of small organic molecules onto gold substrates under ambient conditions. Over time, different inks such as polymers, colloidal particles, oligonucleotides, and metallic salts were examined on a variety of surfaces. The development of DPN also enabled patterning with multiple inks using multiple cantilevers, for the first time in SPL history. Polymer inks in particular, which constitute a flexible matrix for various materials, have potential in MEMS, NEMS, and drug-delivery applications. In our study, we aim to construct polymer patterns using DPN by studying the wetting behavior of the polymer on semiconductor, metal, and polymer surfaces. The optimum viscosity range of the polymer and the effect of environmental conditions such as humidity and temperature are examined. An inverse relation between ink viscosity and depletion time is observed. This study also yields the optimal writing conditions for producing consistent patterns with DPN. Written dot sizes are shown to increase with dwell time, indicating that the examined writing conditions yield repeatable patterns.

Keywords: dip pen nanolithography, polymer, surface patterning, surface science

Procedia PDF Downloads 378
19017 Assessment of Frying Material by Deep-Fat Frying Method

Authors: Brinda Sharma, Saakshi S. Sarpotdar

Abstract:

Deep-fat frying is a popular standard method that has been studied mainly to clarify the complicated mechanisms of fat decomposition at high temperatures and to assess their effects on human health. The aim of this paper is to show how process engineering has recently improved our understanding of the fundamental principles and mechanisms involved at different scales and different times throughout the process: pretreatment, frying, and cooling. It covers several aspects of deep-fat frying. New results regarding the understanding of the frying process have been obtained thanks to major breakthroughs in on-line instrumentation (heat, steam flux, and local pressure sensors), in the methodology of microstructural and imaging analysis (NMR, MRI, SEM), and in software tools for the simulation of coupled transfer and transport phenomena. Such advances have opened the way to building substantial knowledge of the behavior of various materials and to the development of new tools to manage frying operations via final product quality under real conditions. Lastly, this paper promotes an integrated approach to the frying process, drawing on the competencies of chemists, engineers, toxicologists, nutritionists, and materials scientists, as well as of the catering and industrial sectors.

Keywords: frying, cooling, imaging analysis (NMR, MRI, SEM), deep-fat frying

Procedia PDF Downloads 410
19016 H.264 Video Privacy Protection Method Using Regions of Interest Encryption

Authors: Taekyun Doo, Cheongmin Ji, Manpyo Hong

Abstract:

Video surveillance systems such as closed-circuit television (CCTV) are widely deployed to gather video of unspecified people for crime prevention, monitoring, and many other purposes. However, the abuse of CCTV raises concerns about invasions of personal privacy. In this paper, we propose an encryption method to protect personal privacy in H.264 compressed video bitstreams by encrypting only regions of interest (ROI), with no need to change the existing video surveillance system. Encrypting the ROI in a compressed video bitstream is a challenging task due to spatial and temporal drift errors; for this reason, we also propose a novel drift mitigation method applied when the ROI is encrypted. The proposed method was implemented using the JM reference software on H.264 compressed videos, and the experimental results verify the proposed methods and demonstrate their effectiveness.

Keywords: H.264/AVC, video encryption, privacy protection, post compression, region of interest

Procedia PDF Downloads 323
19015 A New Approach for Solving Fractional Coupled PDEs

Authors: Prashant Pandey

Abstract:

In the present article, an effective Laguerre collocation method is used to obtain the approximate solution of a system of coupled fractional-order non-linear reaction-advection-diffusion equations with prescribed initial and boundary conditions. In the proposed scheme, Laguerre polynomials are used together with an operational matrix and the collocation method, so that the coupled system is converted into a system of algebraic equations which can be solved by the Newton method. The solution profiles of the coupled system are presented graphically for several particular cases. The salient features of the present article are the stability analysis of the proposed method and the demonstration that solute concentrations vary less with respect to the column length in the fractional-order system than in the integer-order system. To show the efficiency, reliability, and accuracy of the proposed scheme, the numerical results for the coupled Burgers' system are compared with its existing analytical solution; the approximate solution agrees with the exact solution to a high order of accuracy. The error analysis for each case, reported through tables and graphs, confirms the super-linear convergence rate of the proposed method.
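
The last step of the scheme described above, solving the algebraic system produced by the collocation discretization with Newton's method, can be sketched as follows in Python. The residual function here is a small illustrative stand-in, not the paper's discretized operators.

```python
import numpy as np

def newton_system(residual, u0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve residual(u) = 0 by Newton's method with a finite-difference Jacobian."""
    u = u0.astype(float).copy()
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((u.size, u.size))            # numerical Jacobian, column by column
        for j in range(u.size):
            du = np.zeros_like(u)
            du[j] = eps
            J[:, j] = (residual(u + du) - r) / eps
        u -= np.linalg.solve(J, r)
    return u

# toy residual standing in for the algebraic system produced by the collocation scheme
residual = lambda u: np.array([u[0] ** 2 + u[1] - 1.0, u[0] - u[1] ** 3])
print(newton_system(residual, np.array([0.5, 0.5])))
```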

Keywords: fractional coupled PDE, stability and convergence analysis, diffusion equation, Laguerre polynomials, spectral method

Procedia PDF Downloads 128
19014 Non-Destructive Technique for Detection of Voids in the IC Package Using Terahertz-Time Domain Spectrometer

Authors: Sung-Hyeon Park, Jin-Wook Jang, Hak-Sung Kim

Abstract:

In recent years, the terahertz (THz) time-domain spectroscopy (TDS) imaging method has received considerable interest as a promising non-destructive technique for the detection of internal defects. In comparison to other non-destructive techniques such as x-ray inspection, scanning acoustic tomography (SAT), and microwave inspection, THz-TDS imaging has many advantages. First, it can measure the exact thickness and location of defects. Second, it does not require the liquid couplant that is crucial for delivering ultrasonic power in the SAT method. Third, it does not damage materials or harm the human body as x-ray inspection does. Finally, it exhibits better spatial resolution than microwave inspection. However, this technology has not been applied to IC packages, because THz radiation penetrates a wide variety of materials, including polymers and ceramics, but not metals; it is therefore difficult to detect defects in IC packages, which are composed not only of epoxy and semiconductor materials but also of various metals such as copper, aluminum, and gold. In this work, we propose a method for detecting voids in IC packages using a THz-TDS imaging system. The IC package specimens for this study were prepared by the Packaging Engineering Team at Samsung Electronics. Our THz-TDS imaging system has a pitch-catch reflection mode in which the incidence angle can be varied from 10° to 70°, whereas other systems offer only a transmission mode and a reflection mode fixed at a certain angle. To find the voids in the IC package, we therefore investigated the appropriate angle by varying the incidence angle of the THz emitter and detector. As a result, the voids in the IC packages were successfully detected using our THz-TDS imaging system.

Keywords: terahertz, non-destructive technique, void, IC package

Procedia PDF Downloads 458
19013 Developing a Spatial Transport Model to Determine Optimal Routes When Delivering Unprocessed Milk

Authors: Sunday Nanosi Ndovi, Patrick Albert Chikumba

Abstract:

In Malawi, smallholder dairy farmers transport unprocessed milk to sell at Milk Bulking Groups (MBGs), which store and chill the milk while awaiting collection by processors. The farmers deliver milk using various modes of transportation such as foot, bicycle, and motorcycle. As a perishable food, milk requires timely transportation to avoid deterioration; in some instances, farmers also bypass the nearest MBG for facilities located further away. Untimely delivery worsens quality and results in rejection at the MBG, and these rejections lead to revenue losses for dairy farmers. The objective of this study was therefore to optimize milk transport routes by selecting the quickest route, using time as the cost attribute, in a Geographic Information System (GIS); a spatially organized transport system impedes milk deterioration while promoting profitability for dairy farmers. A transportation system was modeled using the Route Analysis and Closest Facility network extensions, with the final output being the quickest routes and the nearest milk facilities from incident locations. Face-to-face interviews targeted leaders from all 48 MBGs in the study area and 50 farmers from Namahoya MBG, and coordinates were captured during the field interviews in order to create maps; these maps supported the selection of optimal routes based on the least travel times. The questionnaire targeted 200 respondents, of whom 182 were available. Findings showed that of the 50 sampled farmers who supplied milk to Namahoya, only 8% were nearest to that facility, while 92% were closest to 9 different MBGs. Delivering milk to the nearest MBGs would reduce travel time and distance by 14.67 hours and 73.37 km, respectively.
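
The Route Analysis and Closest Facility steps described above amount to shortest-path queries with travel time as the edge cost. The following Python sketch illustrates that idea on a toy network; the graph, node names, and travel times are invented for illustration and do not come from the study.

```python
import networkx as nx

# toy road network: edges carry travel time in minutes (illustrative values only)
G = nx.Graph()
G.add_weighted_edges_from(
    [("farm", "junction_a", 18), ("farm", "junction_b", 25),
     ("junction_a", "MBG_1", 30), ("junction_b", "MBG_1", 12),
     ("junction_a", "MBG_2", 9),  ("junction_b", "MBG_2", 40)],
    weight="time",
)

# quickest route to each facility, then the closest facility overall
facilities = ["MBG_1", "MBG_2"]
times = {f: nx.shortest_path_length(G, "farm", f, weight="time") for f in facilities}
closest = min(times, key=times.get)
print("closest facility:", closest)
print("quickest route:", nx.shortest_path(G, "farm", closest, weight="time"))
```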

Keywords: closest facility, milk, route analysis, spatial transport

Procedia PDF Downloads 33
19012 Value in Exchange: The Importance of Users Interaction as the Center of User Experiences

Authors: Ramlan Jantan, Norfadilah Kamaruddin, Shahriman Zainal Abidin

Abstract:

In this era of technology, the co-creation method has become a new development trend. Most design businesses have transformed their development strategy from goods-dominant to service-dominant, in which more attention is given to end-users and their roles in the development process. As a result, the conventional development process has been replaced with a more cooperative one, and numerous studies have explored the extension of the co-creation method in the design development process, mostly focusing on issues found during the production process. This study, in contrast, aims to investigate the potential values established during the pre-production process, also known as 'circumstance value creation'. User involvement is questioned and critically debated at the entry level of the pre-production process, in the jointly held value-in-exchange sphere where user experiences take place. This paper therefore proposes a potential framework of the co-creation method for Malaysian interactive product development. The framework is formulated from both parties involved: the users and the designers. It clearly explains the value of the co-creation method and could assist relevant design industries and companies in developing a blueprint for the design process. The paper further contributes to the literature on the co-creation of value and digital ecosystems.

Keywords: co-creation method, co-creation framework, co-creation, co-production

Procedia PDF Downloads 154
19011 Effect of Adjacent Footings on Elastic Settlement of Shallow Foundations

Authors: Mustafa Aytekin

Abstract:

In this study, the impact of adjacent footings is considered in the estimation of the elastic settlement of shallow foundations. Schmertmann's method, a very popular method for estimating the elastic settlement of shallow foundations, is employed. A MATLAB script was developed to account for the effect of neighboring footings, in different configurations, on the elastic settlement of the main footing. Elastic settlements of the various configurations are estimated by the script, and several conclusions are reached.
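
The authors implement the calculation as a MATLAB script; purely for illustration, the following Python sketch evaluates the core relation of Schmertmann's strain-influence method for a single footing. The applied pressure, overburden, and sub-layer values are invented placeholders, not data from the study.

```python
import numpy as np

def schmertmann_settlement(q_net, sigma_vo, layers, t_years=0.1):
    """Elastic settlement by Schmertmann's strain-influence method.

    q_net    : net applied footing pressure (kPa)
    sigma_vo : effective overburden pressure at footing level (kPa)
    layers   : iterable of (Iz, Es, dz) tuples, with Es in kPa and dz in m
    t_years  : time after construction, used in the creep correction
    """
    C1 = 1.0 - 0.5 * sigma_vo / q_net              # embedment (depth) correction
    C2 = 1.0 + 0.2 * np.log10(t_years / 0.1)       # creep correction
    return C1 * C2 * q_net * sum(Iz * dz / Es for Iz, Es, dz in layers)

# illustrative sub-layers within the influence depth: (Iz, Es in kPa, thickness in m)
layers = [(0.3, 10000.0, 0.5), (0.5, 12000.0, 1.0), (0.2, 15000.0, 1.5)]
print(f"settlement = {schmertmann_settlement(150.0, 30.0, layers, t_years=5.0):.4f} m")
```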

Keywords: elastic (immediate) settlement, Schmertmann method, adjacent footings, shallow foundations

Procedia PDF Downloads 451
19010 Performance of Non-Deterministic Structural Optimization Algorithms Applied to a Steel Truss Structure

Authors: Ersilio Tushaj

Abstract:

Finding an efficient solution that satisfies the optimality conditions is an important issue in structural engineering design. Modern structural design codes embody a design methodology that aims to exploit the full resources of the construction material. In recent years, several non-deterministic, or meta-heuristic, structural optimization algorithms have been developed within the research community. These methods search for the optimum condition by simulating a natural phenomenon, such as survival of the fittest, the immune system, swarm intelligence, or the cooling of molten metal through annealing. The best known of these techniques are genetic algorithms, simulated annealing, evolution strategies, particle swarm optimization, tabu search, ant colony optimization, harmony search, and big bang-big crunch optimization. In this study, five of these algorithms are applied to the minimum-weight design of a steel truss structure with variable geometry but fixed topology. The design process selects optimum distances and section sizes from a set of commercial steel profiles, and the formulation of the design problem considers deflection limits, buckling, and allowable stress constraints. The approach is repeated starting from different initial populations, with the problem topology taken from an existing steel structure. The optimization process helps the engineer reach good final solutions while avoiding the repetitive, time-consuming evaluation of alternative designs. The algorithms used, the optimal solutions obtained, the number of iterations, and the minimum-weight designs are reported in the paper. Based on these results, the amount of steel that could be saved by combining structural analysis with non-deterministic optimization methods is estimated.
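
As a hedged illustration of one of the algorithms listed above, the following Python sketch shows a genetic algorithm for discrete section sizing with a penalty on constraint violation. The section catalogue, member lengths, and constraint check are placeholders, not the truss model or constraints used in the study.

```python
import random

# illustrative catalogue of commercial section areas (cm^2) and member lengths (m)
SECTIONS = [5.4, 7.8, 10.3, 13.2, 16.4, 21.2, 27.3]
LENGTHS = [2.0, 2.0, 2.83, 2.83, 2.0, 2.83]
DENSITY = 0.785                                   # kg of steel per cm^2 of area per m of length

def weight(ind):
    return sum(DENSITY * SECTIONS[g] * L for g, L in zip(ind, LENGTHS))

def violation(ind):
    # placeholder constraint check: pretend stress/deflection limits require area >= 10 cm^2
    return sum(max(0.0, 10.0 - SECTIONS[g]) for g in ind)

def fitness(ind):
    return weight(ind) + 100.0 * violation(ind)   # weight plus penalty on violated constraints

def ga(pop_size=40, generations=200, pm=0.1):
    pop = [[random.randrange(len(SECTIONS)) for _ in LENGTHS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # keep the better half as parents
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(LENGTHS))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < pm:              # random mutation of one gene
                child[random.randrange(len(LENGTHS))] = random.randrange(len(SECTIONS))
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return best, weight(best)

print(ga())
```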

Keywords: structural optimization, non-deterministic methods, truss structures, steel truss

Procedia PDF Downloads 206
19009 Nickel Electroplating in Post Supercritical CO2 Mixed Watts Bath under Different Agitations

Authors: Chun-Ying Lee, Kun-Hsien Lee, Bor-Wei Wang

Abstract:

The process of post-supercritical CO2 electroplating uses the electrolyte solution after it has been mixed with supercritical CO2 and released to atmospheric pressure. It utilizes the microbubbles that form when oversaturated CO2 in the electrolyte returns to the gaseous state, which gives an effect similar to pulsed electroplating. Under atmospheric pressure, the CO2 bubbles gradually diffuse away; therefore, the introduction of ultrasound and/or other agitation can potentially excite the CO2 microbubbles to achieve an electroplated surface of even higher quality. In this study, three different modes of agitation were applied during the electroplating process, namely magnetic stirrer agitation, ultrasonic agitation, and a combined mode (magnetic + ultrasonic), in order to obtain optimal surface morphology and mechanical properties for the electroplated Ni coating. It is found that the combined agitation mode at a current density of 40 A/dm2 achieved the smallest grain size and a lower surface roughness, and produced an electroplated Ni layer with a hardness of 320 HV, much higher than that obtained with the conventional method, which is usually in the range of 160 to 300 HV. At the same time, however, the coating electroplated with combined agitation developed a higher internal stress of 320 MPa, due to the lower current efficiency of the process and the finer grains in the coating. Moreover, a new methodology for tailoring the coating's mechanical properties through its thickness was demonstrated by the timely introduction of ultrasonic agitation during electroplating with the post-supercritical-CO2-mixed electrolyte.

Keywords: nickel electroplating, micro-bubbles, supercritical carbon dioxide, ultrasonic agitation

Procedia PDF Downloads 260
19008 Function Approximation with Radial Basis Function Neural Networks via FIR Filter

Authors: Kyu Chul Lee, Sung Hyun Yoo, Choon Ki Ahn, Myo Taeg Lim

Abstract:

Recent experimental evidence has shown that, because of its fast convergence and good accuracy, training neural networks via the extended Kalman filter (EKF) is widely applied. However, in the presence of uncertainty in the system dynamics or modeling error, the performance of this method is unreliable. To overcome this problem, in this paper a new finite impulse response (FIR) filter based learning algorithm is proposed to train radial basis function neural networks (RBFN) for nonlinear function approximation. Compared to EKF training, the proposed FIR filter training method is more robust to such conditions. Furthermore, the number of centers is also considered, since it affects the approximation performance.
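
The paper's contribution is the FIR-filter-based training itself, which is not reproduced here; the following Python sketch only shows the underlying RBFN function-approximation setup, with Gaussian basis functions and batch least-squares output weights. The target function, number of centers, and width are illustrative assumptions.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix for 1-D inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# target nonlinear function to approximate (illustrative choice)
rng = np.random.default_rng(0)
x_train = np.linspace(-3, 3, 60)
y_train = np.sinc(x_train) + 0.02 * rng.standard_normal(x_train.size)

centers = np.linspace(-3, 3, 12)     # the number of centers affects accuracy, as noted above
width = 0.6
Phi = rbf_design(x_train, centers, width)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)   # batch least-squares output weights

x_test = np.linspace(-3, 3, 200)
y_hat = rbf_design(x_test, centers, width) @ w
print("max training residual:", np.max(np.abs(Phi @ w - y_train)))
```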

Keywords: extended Kalman filter, classification problem, radial basis function networks (RBFN), finite impulse response (FIR) filter

Procedia PDF Downloads 440
19007 An Efficient Fundamental Matrix Estimation for Moving Object Detection

Authors: Yeongyu Choi, Ju H. Park, S. M. Lee, Ho-Youl Jung

Abstract:

In this paper, an improved method for estimating the fundamental matrix is proposed and applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation of moving objects, and fundamental matrix calculation. The corner points are obtained using the Harris corner detector, and the motion of moving objects is calculated with the pyramidal Lucas-Kanade optical flow algorithm. The fundamental matrix is then calculated through epipolar geometry analysis using RANSAC. In this method, we improve the performance of moving object detection by using two threshold values that determine whether a point is an inlier or an outlier. Through simulations, we compare the performance obtained while varying these two threshold values.
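
The pipeline described above maps closely onto standard OpenCV calls, as in the Python sketch below. The parameter values and the single RANSAC threshold are illustrative; the paper's two-threshold inlier/outlier scheme is not reproduced here.

```python
import cv2
import numpy as np

def fundamental_from_frames(prev_gray, next_gray):
    """Corners -> pyramidal Lucas-Kanade optical flow -> fundamental matrix via RANSAC."""
    # Harris-scored corners (useHarrisDetector=True selects the Harris response)
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                                       minDistance=7, useHarrisDetector=True, k=0.04)
    # track the corners into the next frame with pyramidal Lucas-Kanade
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1].reshape(-1, 2)
    good_next = pts_next[status.ravel() == 1].reshape(-1, 2)
    # epipolar geometry: fundamental matrix with RANSAC (threshold value is illustrative)
    F, inlier_mask = cv2.findFundamentalMat(good_prev, good_next, cv2.FM_RANSAC,
                                            ransacReprojThreshold=1.0, confidence=0.99)
    return F, good_prev, good_next, inlier_mask

# usage: prev_gray and next_gray would be two consecutive grayscale frames,
# e.g. read from a monocular video with cv2.VideoCapture and cv2.cvtColor
```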

Keywords: corner detection, optical flow, epipolar geometry, RANSAC

Procedia PDF Downloads 386
19006 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The volume of document data has been increasing since the spread of the Internet, and ITA was proposed as one method to analyze such data: it extracts independent topics from the documents using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing document collection, because ITA must use all of the document data, so its temporal and spatial costs are very high. We therefore present Incremental ITA, which extracts independent topics from an increasing number of documents by updating the topics whenever new documents are added after topics have been extracted from the previous data. We show the results of applying Incremental ITA to benchmark datasets.
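
The incremental update is the contribution of the paper and is not shown here; the sketch below only illustrates the batch ICA-based topic extraction that underlies ITA, using scikit-learn's FastICA on a TF-IDF matrix. The toy corpus and the number of topics are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import FastICA

docs = [
    "stocks fell as markets reacted to interest rate news",
    "the team won the championship after a late goal",
    "central bank raises interest rates to curb inflation",
    "injured striker misses the final match of the season",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs).toarray()            # documents x terms

ica = FastICA(n_components=2, random_state=0)      # 2 independent "topics" (illustrative)
doc_topic = ica.fit_transform(X)                   # document loadings on each topic
topic_term = ica.mixing_.T                         # term weights for each topic

terms = np.array(tfidf.get_feature_names_out())
for k, row in enumerate(topic_term):
    top = terms[np.argsort(np.abs(row))[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```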

Keywords: text mining, topic extraction, independent, incremental, independent component analysis

Procedia PDF Downloads 287
19005 Na Promoted Ni/γ-Al2O3 Catalysts Prepared by Solution Combustion Method for Syngas Methanation

Authors: Yan Zeng, Hongfang Ma, Haitao Zhang, Weiyong Ying

Abstract:

Ni-based catalysts promoted with different amounts of Na, from 2 to 6 wt %, were prepared by the solution combustion method, and their catalytic activity was investigated in the syngas methanation reaction. Carbon oxide conversion and methane selectivity are greatly influenced by the sodium loading. Adding 2 wt % Na remarkably improves the catalytic activity and long-term stability, which is attributed to the smaller mean NiO particle size, better distribution, and milder metal-support interaction. However, excess addition of Na results in distinct deactivation due to the blockage of active sites.

Keywords: nickel catalysts, syngas methanation, sodium, solution combustion method

Procedia PDF Downloads 390
19004 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process: about the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can be a key input for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using part-of-speech (PoS) features with four machine learning algorithms: support vector machines, decision trees, random forests, and naive Bayes. The proposed framework has two tasks: first, to use these algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment-related) and the other non-assessment-related; second, to use the same algorithms to build an optimal model, for the whole data set and for the new subsets, that automatically detects their sentiment. The significance of this paper is the comparison of the performance of these four algorithms using the part-of-speech feature with their performance using n-gram features. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithms, interpreting the mined patterns, and consolidating the discovered knowledge. The experiments show that models using both features performed very well on the first task; on the second task, however, models using the part-of-speech feature underperformed compared with models using unigram and bigram features.
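
A minimal Python sketch of the part-of-speech feature idea is given below: each comment is mapped to its sequence of PoS tags, and PoS n-grams feed one of the classifiers named above (here a linear SVM). The toy comments, labels, and parameters are invented, and NLTK resource names can differ between versions.

```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# tokenizer and tagger models (resource names may differ between NLTK versions)
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_sequence(text):
    """Replace each token by its PoS tag, e.g. 'great lab sessions' -> 'JJ NN NNS'."""
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(tag for _, tag in tags)

# toy feedback comments and labels (1 = assessment-related, 0 = not), for illustration only
comments = ["the exam was too long", "lecture rooms are cold",
            "coursework deadlines were fair", "parking is difficult"]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ("pos_ngrams", CountVectorizer(preprocessor=pos_sequence, ngram_range=(1, 2))),
    ("svm", LinearSVC()),
])
pipeline.fit(comments, labels)
print(pipeline.predict(["the quiz questions were confusing"]))
```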

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 122
19003 Comparison of DPC and FOC Vector Control Strategies on Reducing Harmonics Caused by Nonlinear Load in the DFIG Wind Turbine

Authors: Hamid Havasi, Mohamad Reza Gholami Dehbalaei, Hamed Khorami, Shahram Karimi, Hamdi Abdi

Abstract:

A doubly-fed induction generator (DFIG) equipped with a power converter is an efficient tool for converting the mechanical energy of a variable-speed system to a fixed-frequency electrical grid. Since electrical energy sources face power-quality problems such as harmonics caused by nonlinear loads, this paper simulates, in a MATLAB Simulink model, the compensation performance of the DPC and FOC methods in reducing the harmonics of a DFIG wind turbine connected to a nonlinear load, and compares the effect of each method on the elimination of the nonlinear-load harmonics. The results of the two control methods show the advantage of the FOC method over the DPC method for harmonic compensation; the fifth and seventh harmonic components of the network, and the THD, are greatly reduced.
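
THD, the figure of merit used in the comparison above, can be computed from the spectrum of the grid current. The Python sketch below does this for a synthetic waveform containing fifth and seventh harmonics; the signal and harmonic amplitudes are illustrative, not simulation outputs from the paper.

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=20):
    """Total harmonic distortion of a periodic signal sampled at fs with fundamental f0."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    def mag(f):                                   # magnitude at the bin nearest frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]
    fundamental = mag(f0)
    harmonics = [mag(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harmonics))) / fundamental

# illustrative grid current: 50 Hz fundamental plus 5th (250 Hz) and 7th (350 Hz) harmonics
fs = 10000
t = np.arange(0, 1.0, 1.0 / fs)
i_grid = (np.sin(2 * np.pi * 50 * t)
          + 0.15 * np.sin(2 * np.pi * 250 * t)
          + 0.10 * np.sin(2 * np.pi * 350 * t))
print(f"THD = {100 * thd(i_grid, fs, 50):.1f} %")   # expect about 18 %
```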

Keywords: DFIG machine, energy conversion, nonlinear load, THD, DPC, FOC

Procedia PDF Downloads 568
19002 A Prediction Method for Large-Size Event Occurrences in the Sandpile Model

Authors: S. Channgam, A. Sae-Tang, T. Termsaithong

Abstract:

In this research, the occurrence of large events in the Bak-Tang-Wiesenfeld sandpile model is considered for various system sizes. The system sizes (square lattices) considered here are 25×25, 50×50, 75×75, and 100×100. The cross-correlation between the time series of the ratio of sites containing 3 grains and the time series of large events is analyzed for these four system sizes. Moreover, a prediction method for large events is introduced for the 50×50 system size. It is shown that this prediction method provides a slightly higher efficiency than random predictions.
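
The following Python sketch reproduces the two ingredients of the analysis described above for a small Bak-Tang-Wiesenfeld sandpile: the time series of the ratio of sites holding 3 grains and a large-event indicator series, together with their cross-correlation. The lattice size, number of drive steps, and the threshold defining a "large" event are illustrative choices.

```python
import numpy as np

def btw_sandpile(L=25, steps=5000, large=100, seed=0):
    """Drive a Bak-Tang-Wiesenfeld sandpile; return the fraction-of-3s and large-event series."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, 4, size=(L, L))
    frac3, big = [], []
    for _ in range(steps):
        grid[rng.integers(L), rng.integers(L)] += 1          # drop one grain at a random site
        topplings = 0
        while True:
            unstable = np.argwhere(grid >= 4)
            if unstable.size == 0:
                break
            for i, j in unstable:                            # topple: send 4 grains to neighbours
                grid[i, j] -= 4
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:          # grains at the boundary are lost
                        grid[ni, nj] += 1
                topplings += 1
        frac3.append(np.mean(grid == 3))                     # ratio of sites holding 3 grains
        big.append(1.0 if topplings >= large else 0.0)       # large-event indicator
    return np.array(frac3), np.array(big)

frac3, big = btw_sandpile()
xcorr = np.correlate(frac3 - frac3.mean(), big - big.mean(), mode="full") / len(big)
print("large events:", int(big.sum()), " peak cross-correlation:", float(xcorr.max()))
```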

Keywords: Bak-Tang-Wiesenfeld sandpile model, cross-correlation, avalanches, prediction method

Procedia PDF Downloads 360
19001 Computer Simulations of Stress Corrosion Studies of Quartz Particulate Reinforced ZA-27 Metal Matrix Composites

Authors: K. Vinutha

Abstract:

The stress corrosion resistance of ZA-27/TiO2 metal matrix composites (MMCs) in high-temperature acidic media has been evaluated using an autoclave. The liquid melt metallurgy technique using the vortex method was used to fabricate the MMCs. TiO2 particulates of 50-80 µm in size were added to the matrix, and ZA-27 composites containing 2, 4, and 6 weight percent TiO2 were prepared. Stress corrosion tests were conducted by the weight loss method for different exposure times, normalities, and temperatures of the acidic medium. The corrosion rates of the composites were lower than that of the matrix ZA-27 alloy under all conditions.

Keywords: autoclave, MMC’s, stress corrosion, vortex method

Procedia PDF Downloads 454
19000 Pricing European Continuous-Installment Options under Regime-Switching Models

Authors: Saghar Heidari

Abstract:

In this paper, we study the valuation problem for European continuous-installment options under Markov-modulated (regime-switching) models with a partial differential equation approach. Due to the opportunity to continue or stop paying installments, the valuation problem under regime-switching models can be formulated as coupled partial differential equations (CPDE) with free boundary features. To value the installment options, we express the truncated CPDE as a linear complementarity problem (LCP); a finite element method is then proposed to solve the resulting variational inequality. Under appropriate assumptions, we establish the stability of the method and present numerical results illustrating the rate of convergence and the accuracy of the proposed method for the pricing problem under the regime-switching model.
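
The paper solves the LCP with a finite element method; as a generic illustration of how such a complementarity problem can be handled, the sketch below applies projected SOR (PSOR) to a small LCP. The matrix, right-hand side, and obstacle are placeholders, not the discretized pricing operator of the paper.

```python
import numpy as np

def psor(A, b, psi, omega=1.2, tol=1e-10, max_iter=10000):
    """Projected SOR for the LCP:  x >= psi,  A x >= b,  (x - psi)^T (A x - b) = 0."""
    n = len(b)
    x = np.maximum(psi, 0.0)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = max(psi[i], x[i] + omega * (gs - x[i]))   # SOR step followed by projection
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# illustrative tridiagonal system standing in for one regime's discretized operator
n = 50
A = (np.diag(np.full(n, 2.2))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.full(n, 0.02)
psi = np.zeros(n)                 # obstacle (e.g. a continuation/stopping constraint)
x = psor(A, b, psi)
print("min(x) =", x.min(), " complementarity residual =", np.max(np.abs((x - psi) * (A @ x - b))))
```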

Keywords: continuous-installment option, European option, regime-switching model, finite element method

Procedia PDF Downloads 118
18999 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the model. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium; it increases the accuracy of the estimation on the one hand and the computational cost on the other. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission and absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated, and the history of some randomly sampled photon bundles was recorded to train an artificial neural network (ANN) back-propagation model; the flux calculated using the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the ANN method resulted in a greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully; multiple ways of selecting the input data as well as various architectures will be tried so that the environment of interest can be fully represented in the ANN model. Better results can be achieved in this as yet unexplored domain.
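
The variance advantage of low-discrepancy sequences mentioned above can be illustrated with a one-dimensional toy integral, evaluated below with both pseudo-random Monte Carlo and a scrambled Sobol sequence from SciPy. The integrand and sample size are illustrative, not the radiative transfer kernel used in the paper.

```python
import numpy as np
from scipy.stats import qmc

def estimate(u):
    """Monte Carlo estimate of the integral of exp(-5u) over [0, 1] from samples u in [0, 1]."""
    return np.mean(np.exp(-5.0 * u))

exact = (1.0 - np.exp(-5.0)) / 5.0
n_pow = 12                                        # 2**12 = 4096 samples

# plain pseudo-random Monte Carlo
rng = np.random.default_rng(0)
mc = estimate(rng.random(2 ** n_pow))

# quasi-Monte Carlo with a scrambled Sobol low-discrepancy sequence
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
qmc_est = estimate(sobol.random_base2(m=n_pow).ravel())

print(f"exact {exact:.6f}  MC error {abs(mc - exact):.2e}  QMC error {abs(qmc_est - exact):.2e}")
```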

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 202
18998 Services-Oriented Model for the Regulation of Learning

Authors: Mohamed Bendahmane, Brahim Elfalaki, Mohammed Benattou

Abstract:

One of the major sources of learners' difficulties is their heterogeneity. Whether on a cognitive, social, cultural, or emotional level, learners belonging to the same group have many differences, and these differences do not allow the same learning process to be applied to all of them: an optimal learning path for one learner is not necessarily optimal for another. We present in this paper a service-oriented model that offers each learner a personalized learning path for acquiring the targeted skills.

Keywords: learning path, web service, trace analysis, personalization

Procedia PDF Downloads 335
18997 Divergence Regularization Method for Solving Ill-Posed Cauchy Problem for the Helmholtz Equation

Authors: Benedict Barnes, Anthony Y. Aidoo

Abstract:

A Divergence Regularization Method (DRM) is used to regularize the ill-posed Cauchy problem for the Helmholtz equation with inhomogeneous boundary deflection in a Hilbert space H. The DRM incorporates a positive integer scaler that homogenizes the inhomogeneous boundary deflection in the Cauchy problem for the Helmholtz equation, ensuring the existence as well as the uniqueness of the solution. The DRM restores all three conditions of well-posedness in the sense of Hadamard.
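
For orientation only, a standard statement of the ill-posed Cauchy problem for the Helmholtz equation is recalled below; the symbols for the domain, the accessible part of the boundary, and the Cauchy data are generic and are not taken from the paper.

```latex
\begin{aligned}
\Delta u + k^{2} u &= 0 && \text{in } \Omega, \\
u &= f && \text{on } \Gamma \subset \partial\Omega, \\
\frac{\partial u}{\partial n} &= g && \text{on } \Gamma .
\end{aligned}
```

Because u and its normal derivative are prescribed only on the part Γ of the boundary, the solution does not depend continuously on the data (f, g), which is the ill-posedness that regularization methods such as the DRM address.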

Keywords: divergence regularization method, Helmholtz equation, ill-posed inhomogeneous Cauchy boundary conditions

Procedia PDF Downloads 170
18996 Pattern of Stress Distribution in Different Ligature-Wire-Brackets Systems: A FE and Experimental Analysis

Authors: Afef Dridi, Salah Mezlini

Abstract:

Since experimental devices cannot calculate the stress and deformation of complex structures, the finite element method (FEM) has been widely used in several fields of research, one of which is orthodontics. The advantage of using such a method is that it is accurate and non-invasive and provides sufficient data about the physiological reactions that can happen in soft tissues. Most research done in this field has studied the stresses and deformations induced by orthodontic appliances in soft tissues (alveolar tissues); only a few studies have been interested in the distribution of stress and strain in the orthodontic brackets themselves. Although these studies tried to be as close as possible to real conditions, their models did not reproduce clinical cases. For this reason, the model generated in our research is the closest to reality. In this study, a numerical model was developed to explore the stress and strain distribution under realistic loading conditions, and a comparison between different material properties was also carried out.

Keywords: visco-hyperelasticity, FEM, orthodontic treatment, inverse method

Procedia PDF Downloads 246
18995 A New Method to Winner Determination for Economic Resource Allocation in Cloud Computing Systems

Authors: Ebrahim Behrouzian Nejad, Rezvan Alipoor Sabzevari

Abstract:

Cloud computing systems are large-scale distributed systems that focus on large-scale resource sharing, cooperation among several organizations, and their use in new applications. One of the main challenges in this realm is resource allocation. There are many different approaches to resource allocation in cloud computing; among them, economic methods are common, and among these, auction-based methods have greater prominence than fixed-price methods. The double combinatorial auction is one suitable approach to resource allocation in cloud computing and includes two phases: winner determination and resource allocation. In this paper, a new method is presented for determining the winners in double combinatorial auction-based resource allocation using the Imperialist Competitive Algorithm (ICA). The experimental results show that with the proposed method the number of winning users is higher than with the genetic algorithm, while the number of winning providers is higher with the genetic algorithm.

Keywords: cloud computing, resource allocation, double auction, winner determination

Procedia PDF Downloads 341
18994 Land Use and Natal Multimammate Mouse Abundance in Lassa Fever Endemic Villages of Eastern Sierra Leone

Authors: J. T. Koininga, J. E. Teigen, A. Wilkinson, D. Kanneh, F. Kanneh, M. Foday, D. S. Grant, M. Leach, L. M. Moses

Abstract:

Lassa fever (LF) is a severe febrile illness endemic to West Africa. While human-to-human transmission occurs, evidence suggests most LF cases originate from exposure to rodents, particularly the Natal multimammate mouse, Mastomys natalensis. Within West Africa, LF occurs primarily in rural communities where agriculture is the main economic activity. Seasonality of LF has also been linked to agricultural cycles, with peak incidence occurring in the dry season when fields are burned and plowed. To investigate this pattern of seasonality, four agricultural communities were selected for this two-year longitudinal study. Each community was to be sampled four times each year, but this was interrupted by the Ebola virus disease outbreak. Agricultural land-use, forested, and fallow areas were identified through participatory mapping. Transects were plotted in each area, and Sherman traps were set for four nights. Captured small mammals were identified, ear-tagged, and released. Mastomys natalensis abundance was found to be highest in areas of converted fallow land and rice swamps in the dry season and in upland mixed-crop areas toward the onset of the rainy season. All peak times were associated with heavy perturbation of the soil, and animals of all ages and sexes were present at these time points. These results suggest that the peak abundance of Mastomys natalensis in agricultural areas coincides with the peak incidence of LF reported in this region. Although contact with rodents may be higher in villages, our study suggests that human behaviors in agricultural areas may increase the risk of transmission of Lassa virus.

Keywords: agriculture, land use, Lassa Fever, rodent abundance

Procedia PDF Downloads 98
18993 Methodologies for Stability Assessment of Existing and Newly Designed Reinforced Concrete Bridges

Authors: Marija Vitanova, Igor Gjorgjiev, Viktor Hristovski, Vlado Micov

Abstract:

Evaluation of stability is very important in the process of defining optimal structural measures for the maintenance and strengthening of bridge structures: to define optimal measures for repair and strengthening, it is necessary to evaluate both static and seismic stability. Presented in this paper are methodologies for evaluating the seismic stability of existing reinforced concrete bridges designed without consideration of seismic effects and for checking the structural justification of newly designed bridge structures. All bridges are located in the territory of the Republic of North Macedonia. A total of 26 existing bridges of different structural systems have been analyzed. Visual inspection has been carried out for all bridges, along with the definition of three main damage categories according to which the structures have been categorized with respect to the need for their repair and strengthening. Investigations involving testing of the quality of the built-in materials have been carried out, and dynamic tests establishing the dynamic characteristics of the structures have been conducted using non-destructive ambient vibration measurements. The conclusions drawn from the performed measurements and tests have been used for the development of accurate mathematical models, which have been analyzed for static and dynamic loads. Based on the geometrical characteristics of the cross-sections and the physical characteristics of the built-in materials, interaction diagrams have been constructed; these diagrams, along with the obtained section forces under seismic effects, have been used to obtain the bearing capacity of the cross-sections. The results of the analyses point to the need for repair of certain structural parts of the bridge structures. They indicate that the stability of the superstructure elements is not critical under seismic effects, unlike the elements of the substructure, whose strengthening is necessary.

Keywords: existing bridges, newly designed bridges, reinforced concrete bridges, stability assessment

Procedia PDF Downloads 85
18992 Key Parameters Analysis of the Stirring Systems in the Optimization Procedures

Authors: T. Gomes, J. Manzi

Abstract:

The inclusion of stirring systems in calculation and optimization procedures has received remarkably little attention, which can affect the results, because such systems provide additional energy to the process and promote a better distribution of mass and energy. This is meaningful for reactive systems, particularly for the continuous stirred tank reactor (CSTR), in which the key variables and parameters, as well as the operating conditions of the stirring system, can play a pivotal role; it has been shown in the literature that neglecting these factors can lead to sub-optimal results. It is also well known that the sole use of the First Law of Thermodynamics as an optimization tool cannot yield satisfactory results, whereas the joint use of the First and Second Laws, condensed into a procedure called entropy generation minimization (EGM), has shown itself able to drive the system towards better results. Therefore, the main objective of this paper is to determine the effects of the key parameters of the stirring system in optimization procedures by means of EGM applied to reactive systems. These considerations have been made possible by dimensional analysis according to the Rayleigh and Buckingham method, which takes into account the physical and geometric parameters and the variables of the reactive system. For a simulation based on the production of propylene glycol, the results have shown a significant increase in the conversion rate from 36% (non-optimized system) to 95% (optimized system), with a consequent reduction of by-products. In addition, it has been possible to establish the influence of the work of the stirrer in the optimization procedure, which can be described as a function of the fluid viscosity and consequently of the temperature. The conclusions also indicate that the use of entropic analysis as an optimization tool has proved to be simple, easy to apply, and to require low computational effort.

Keywords: stirring systems, entropy, reactive system, optimization

Procedia PDF Downloads 230
18991 Determination of the Oxidative Potential of Organic Materials: Method Development

Authors: Jui Afrin, Akhtarul Islam

Abstract:

In this paper, solutions of glucose, yeast, and a glucose-yeast mixture are used as sample solutions for determining the chemical oxygen demand (COD). The standard COD determination method is generally used to cover a wide range of oxidative potentials; in this work, the aim is to determine a definite oxidative potential for different concentrations with known COD values and to compare the experimental values with the theoretical ones in order to evaluate the drawbacks of the method. In this study, sample solutions were prepared with oxidative potentials of 400 mg/L, 500 mg/L, 600 mg/L, 700 mg/L, and 800 mg/L, and the oxidative potential was determined according to our developed method. The experimental COD values were plotted against the sample concentrations (in mg/L) to draw the curves. These curves show that the curve for the glucose solution is not linear; it deviates from linearity at lower concentrations, and the reason for this deviation is unknown. If this drawback can be removed, the method can be effectively used to determine the oxidative potential of industrial wastewater (such as leather industry wastewater, municipal wastewater, food industry wastewater, textile wastewater, and pharmaceutical wastewater); therefore, more experiments and further study are required.

Keywords: BOD (biological oxygen demand), COD (chemical oxygen demand), oxidative potential, titration, wastewater, method development

Procedia PDF Downloads 214
18990 Numerical Modeling of Wave Run-Up in Shallow Water Flows Using Moving Wet/Dry Interfaces

Authors: Alia Alghosoun, Michael Herty, Mohammed Seaid

Abstract:

We present a new class of numerical techniques to solve shallow water flows over dry areas, including run-up. Many recent investigations of wave run-up in coastal areas are based on the well-known shallow water equations, and numerical simulations have been performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of coastal areas. In all these simulations, the shallow water equations are solved in the entire domain, including dry areas, and special treatments are used for the numerical solution of singularities in these dry regions. In the present study, we propose a new method to deal with these difficulties by reformulating the shallow water equations into a new system to be solved only in the wetted domain. The system is obtained by a change of coordinates leading to a set of equations in a moving domain, for which the wet/dry interface is reconstructed using the wave speed. To solve the new system, we present a finite volume method of Lax-Friedrichs type along with a modified method of characteristics. The method is well-balanced and accurately resolves dam-break problems over dry areas.
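
For context, the sketch below implements a plain Lax-Friedrichs finite volume step for the one-dimensional shallow water equations on a fixed, fully wetted grid (flat bottom), applied to a wet dam-break test. It does not implement the paper's moving-domain reformulation or the wet/dry interface reconstruction; the grid size, end time, and initial data are illustrative.

```python
import numpy as np

g = 9.81                                         # gravitational acceleration (m/s^2)

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations over a flat bottom."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

def lax_friedrichs_step(h, hu, dx, dt):
    """One Lax-Friedrichs update of the conserved variables (h, hu) with outflow boundaries."""
    U = np.array([h, hu])
    F = flux(h, hu)
    Unew = U.copy()
    Unew[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - dt / (2.0 * dx) * (F[:, 2:] - F[:, :-2])
    Unew[:, 0], Unew[:, -1] = Unew[:, 1], Unew[:, -2]
    return Unew[0], Unew[1]

# wet dam-break test: both sides are wet, so no dry-area treatment is needed here
N = 400
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
h = np.where(x < 0.5, 2.0, 1.0)
hu = np.zeros(N)
t, t_end = 0.0, 0.05
while t < t_end:
    dt = 0.4 * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))  # CFL-limited time step
    dt = min(dt, t_end - t)
    h, hu = lax_friedrichs_step(h, hu, dx, dt)
    t += dt
print(f"depth range after the dam break: [{h.min():.3f}, {h.max():.3f}] m")
```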

Keywords: dam-break problems, finite volume method, run-up waves, shallow water flows, wet/dry interfaces

Procedia PDF Downloads 128