Search results for: dense discrete phase model (DDPM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20909

18899 Bi-Axial Stress Effects on Barkhausen-Noise

Authors: G. Balogh, I. A. Szabó, P.Z. Kovács

Abstract:

Mechanical stress has a strong effect on the magnitude of the Barkhausen noise in structural steels. Because the measurements are performed at the surface of the material, the full effect on a sample sheet can be described by a biaxial stress field. The measured Barkhausen noise depends on the orientation of the exciting magnetic field relative to the axes of the stress tensor. Sample inhomogeneities, including residual stress, also modify the angular dependence of the measured Barkhausen noise. We have developed a laboratory device with a cross-shaped specimen for bi-axial bending. The measuring head allows excitation in two orthogonal directions; the two directions can be excited independently or simultaneously with different amplitudes, and simultaneous excitation of the two coils can be performed in phase or with a 90-degree phase shift. In principle, this allows the Barkhausen noise to be measured in an arbitrary direction without moving the head, or, if linear superposition of the two fields can be assumed, to measure the Barkhausen noise induced by a rotating magnetic field.

Keywords: Barkhausen-noise, bi-axial stress, stress measuring, stress dependency

Procedia PDF Downloads 296
18898 Design of Intelligent Scaffolding Learning Management System for Vocational Education

Authors: Seree Chadcham, Niphon Sukvilai

Abstract:

This research and development study aimed to: 1) design an Intelligent Scaffolding Learning Management System (ISLMS) for vocational education, and 2) assess the suitability of the design. The work was divided into two phases: phase 1, the design of the ISLMS for vocational education, and phase 2, the assessment of the suitability of the design. The sample consisted of work done by 15 professionals in the fields of intelligent scaffolding, learning management systems, vocational education, and information and communication technology in education, selected by purposive sampling. Data were analyzed using the arithmetic mean and standard deviation. The results showed that the ISLMS for vocational education consists of two main components: 1) the Intelligent Learning Management System for Vocational Education and 2) the Intelligent Scaffolding Management System. The professionals rated the suitability of the system in the highest range.

Keywords: intelligent, scaffolding, learning management system, vocational education

Procedia PDF Downloads 795
18897 Starting Characteristic Analysis of LSPM for Pumping System Considering Demagnetization

Authors: Subrato Saha, Yun-Hyun Cho

Abstract:

This paper presents the design process of a high-performance, 3-phase, 3.7 kW, 2-pole line-start permanent magnet synchronous motor for a pumping system. A method is proposed to study the starting torque characteristics of a line start with a high-inertia load. A d-q model including the cage was built to study the synchronization capability. Time-stepping finite element analysis was used to accurately predict the dynamic and transient performance, efficiency, starting current, and speed curve. Considering the load torque of the pump during the starting stage, the rotor bar was designed to minimize demagnetization of the permanent magnets caused by the large starting current.

Keywords: LSPM, starting analysis, demagnetization, FEA, pumping system

Procedia PDF Downloads 471
18896 Kinetic Façade Design Using 3D Scanning to Convert Physical Models into Digital Models

Authors: Do-Jin Jang, Sung-Ah Kim

Abstract:

In designing a kinetic façade, it is hard for the designer to create digital models because of the façade's complex geometry and motion. This paper presents a methodology for converting a point cloud of a physical model into a single digital model with a defined topology and motion. The method uses a Microsoft Kinect sensor, and color markers were defined and applied to three paper-folding-inspired designs. Although the resulting digital model cannot represent the full folding range of the physical model, the method supports the designer in conducting a performance-oriented design process with a rough physical model over the reduced folding range.

Keywords: design media, kinetic facades, tangible user interface, 3D scanning

Procedia PDF Downloads 413
18895 A Large Language Model-Driven Method for Automated Building Energy Model Generation

Authors: Yake Zhang, Peng Xu

Abstract:

The development of building energy models (BEM) for architectural design and analysis is a time-consuming and complex process that demands a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method that combines a large language model with a BEM library, with the aim of improving the efficiency of model generation. The method leverages a large language model to parse user-specified requirements for the target building model, extracting key features such as building location, window-to-wall ratio, and thermal performance of the building envelope. The BEM library is used to retrieve energy models that match the target building's characteristics; these serve as reference information for the large language model, enhancing the accuracy and relevance of the generated model and allowing the creation of a building energy model that adapts to the user's modeling requirements. The approach enables the automatic creation of building energy models from natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of large language models in the field of building simulation and performance modeling.

Keywords: artificial intelligence, building energy modelling, building simulation, large language model

Procedia PDF Downloads 26
18894 An Improved Model of Estimation Global Solar Irradiation from in situ Data: Case of Oran Algeria Region

Authors: Houcine Naim, Abdelatif Hassini, Noureddine Benabadji, Alex Van Den Bossche

Abstract:

In this paper, two models for estimating the monthly average daily global radiation on a horizontal surface were applied to the site of Oran (35.38°N, 0.37°W). The first is a regression equation of the Angstrom type; the second was developed by the present authors, with some suggested modifications, using as input parameters astronomical parameters (latitude, longitude, and altitude) and meteorological parameters (relative humidity). The comparisons are made using the mean bias error (MBE), root mean square error (RMSE), mean percentage error (MPE), and mean absolute bias error (MABE). This comparison shows that the second model fits the experimental values more closely than the Angstrom model.
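
For reference, the four error statistics named above can be computed directly; this is a minimal plain-Python sketch, and the measured/estimated values below are illustrative placeholders, not the Oran data:

```python
import math

def error_metrics(measured, estimated):
    """MBE, RMSE, MPE, and MABE for a model/measurement comparison."""
    n = len(measured)
    diffs = [e - m for m, e in zip(measured, estimated)]   # estimated minus measured
    mbe = sum(diffs) / n                                   # mean bias error
    rmse = math.sqrt(sum(d * d for d in diffs) / n)        # root mean square error
    mpe = 100.0 * sum(d / m for m, d in zip(measured, diffs)) / n  # mean percentage error
    mabe = sum(abs(d) for d in diffs) / n                  # mean absolute bias error
    return mbe, rmse, mpe, mabe

# hypothetical monthly-average daily irradiation values, for illustration only
measured = [10.0, 12.0, 15.0, 18.0]
estimated = [10.5, 11.5, 15.5, 18.5]
mbe, rmse, mpe, mabe = error_metrics(measured, estimated)
```

A lower RMSE and MABE, together with an MBE and MPE close to zero, indicate the closer fit reported for the authors' modified model.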

Keywords: meteorology, global radiation, Angstrom model, Oran

Procedia PDF Downloads 233
18893 Expert Review on Conceptual Design Model of Assistive Courseware for Low Vision (AC4LV) Learners

Authors: Nurulnadwan Aziz, Ariffin Abdul Mutalib, Siti Mahfuzah Sarif

Abstract:

This paper reports on an ongoing project to develop a conceptual design model of Assistive Courseware for Low Vision (AC4LV) learners. Once developed, the model has to be validated before it can serve as guidance for developers building an AC4LV. The study requires a two-phase validation process: expert review and prototyping. This paper presents the first part of that process, the findings of an expert review of the conceptual design model of AC4LV carried out through a questionnaire. Results from 12 international and local experts from various respected fields of Human-Computer Interaction (HCI) were discussed and justified. In a nutshell, a reviewed conceptual design model of AC4LV was formed. Future work will validate the reviewed model through the prototyping method before testing it with the targeted users.

Keywords: assistive courseware, conceptual design model, expert review, low vision learners

Procedia PDF Downloads 546
18892 Development of the Religious Out-Group Aggression Scale

Authors: Rylle Evan Gabriel Zamora, Micah Dennise Malia, Abygail Deniese Villabona

Abstract:

In the aggression literature, studies of individual aggression vastly outnumber studies of group aggression. Given that aggression is violent and cyclical, and given the number of violent events that have occurred recently, the study of group aggression is more relevant than ever. This discrepancy is mirrored in the number of valid and reliable psychological tests that measure group aggression. Throughout history, one of the biggest causes of group-based violence and aggression has been religion. This is particularly true in the context of the Philippines, which has a large number of religious groups. This study therefore aimed to develop a standardized test that measures an individual's tendency to be aggressive toward those who conflict with his or her religious beliefs. The study employs a test-development design with a qualitative phase to ensure the validity of the scale and was divided into three phases. First, a pilot instrument was designed from the existing literature and administered to 173 respondents from the four largest religious groups in the Philippines. After extensive factor analysis and reliability testing, new items were formed from qualitative data collected from eight participants, two per religious group. The final test integrated all statistically significant items from the first phase and the newly formed items from the second phase and was administered to 200 respondents. The results were tested further for reliability using Cronbach's alpha and for validity through factor analysis. The items that proved significant were combined into a final instrument that may be used in future studies.
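
The reliability step mentioned above can be illustrated with Cronbach's alpha, computed here in plain Python; the Likert-style item scores are hypothetical, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    `items` is a list of score lists, one per scale item, all covering the
    same respondents in the same order.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# three hypothetical Likert items answered by five respondents
items = [
    [3, 4, 4, 2, 5],
    [3, 5, 4, 2, 4],
    [2, 4, 5, 1, 5],
]
alpha = cronbach_alpha(items)
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency.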

Keywords: religious aggression, group aggression, test development, psychological assessment, social psychology

Procedia PDF Downloads 295
18891 Increasing Recoverable Oil in Northern Afghanistan Kashkari Oil Field by Low-Salinity Water Flooding

Authors: Zabihullah Mahdi, Khwaja Naweed Seddiqi

Abstract:

Afghanistan is located in a tectonically complex and dynamic area, surrounded by rocks that originated on the mother continent of Gondwanaland. The northern Afghanistan basin, which runs along the country's northern border, has potential for petroleum generation and accumulation, and the Amu Darya basin has the largest petroleum potential in the region; sedimentation occurred there from the Jurassic to the Eocene epochs. The Kashkari oil field is located in the Amu Darya basin in northern Afghanistan. The field structure consists of a narrow northeast-southwest (NE-SW) anticline with two structural highs, the northwest limb being mild and the southeast limb steep. The first oil production well in the Kashkari oil field was drilled in 1976, and a total of ten wells were drilled in the area between 1976 and 1979. The original oil in place (OOIP) in the Kashkari oil field, based on surveys and calculations conducted by research institutions, is estimated at around 140 MMbbls. The objective of this study is to increase the recoverable oil reserves of the Kashkari oil field through the low-salinity water flooding (LSWF) enhanced oil recovery (EOR) technique. The LSWF work involved a core flooding laboratory test consisting of four sequential steps with varying salinities: the test commenced with formation water (FW) as the initial salinity, which was subsequently reduced to a salinity level of 0.1%. Afterward, a numerical simulation model of core-scale oil recovery by LSWF was built in Computer Modelling Group's General Equation Modeler (CMG-GEM) software to evaluate the applicability of the technology at field scale. Next, the Kashkari oil field simulation model was designed, and the LSWF method was applied to it.
To obtain reasonable results, the laboratory settings (temperature, pressure, rock, and oil characteristics) were designed as far as possible to match the conditions of the Kashkari oil field, and several injection and production patterns were investigated. The relative permeabilities of oil and water in this study were obtained using Corey's equation. In the Kashkari oil field simulation model, three cases were considered to evaluate the effect of LSWF on oil recovery: 1) a base model with no water injection, 2) an FW injection model, and 3) an LSW injection model. Based on the results of the LSWF laboratory experiment and the computer simulation analysis, oil recovery increased rapidly after FW was injected into the core. Subsequently, injecting 1% salinity water produced a gradual increase of 4% in oil recovery. At field scale, about 6.4% additional oil is produced by applying the LSWF technique. The results of LSWF (salinity 0.1%) on the Kashkari oil field suggest that this technology can be a successful method for developing Kashkari oil production.
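
Corey's equation used above for the oil and water relative permeabilities can be sketched as follows; every parameter value here (endpoint saturations, endpoint permeabilities, exponents) is an illustrative assumption, not a Kashkari measurement:

```python
def corey_relperm(sw, swc=0.2, sor=0.2, krw_max=0.3, kro_max=0.8, nw=2.0, no=2.0):
    """Corey-type relative permeabilities at water saturation sw.

    swc: connate water saturation, sor: residual oil saturation,
    krw_max / kro_max: endpoint permeabilities, nw / no: Corey exponents.
    """
    se = (sw - swc) / (1.0 - swc - sor)   # normalized (effective) saturation
    se = min(max(se, 0.0), 1.0)           # clamp to the mobile saturation range
    krw = krw_max * se ** nw              # water relative permeability
    kro = kro_max * (1.0 - se) ** no      # oil relative permeability
    return krw, kro

krw, kro = corey_relperm(0.5)   # mid-range saturation
```

A reservoir simulator such as CMG-GEM takes curves of this form (tabulated or parametric) as the two-phase flow input.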

Keywords: low-salinity water flooding, immiscible displacement, Kashkari oil field, two-phase flow, numerical reservoir simulation model

Procedia PDF Downloads 40
18890 Creation of Ultrafast Ultra-Broadband High Energy Laser Pulses

Authors: Walid Tawfik

Abstract:

The interaction of high-intensity ultrashort laser pulses with plasma generates many significant applications, including soft X-ray lasers, time-resolved laser-induced plasma spectroscopy (LIPS), and laser-driven accelerators. The development of optical pulses from the femtosecond down to the ten-femtosecond range has provided scientists with a vital tool for a variety of ultrashort phenomena, such as high-field physics, femtochemistry, and high harmonic generation (HHG). In this research, we generate two-octave-wide ultrashort supercontinuum pulses with an optical spectrum extending from 3.5 eV (ultraviolet) to 1.3 eV (near-infrared) using a capillary fiber filled with neon gas. These pulses are formed by nonlinear self-phase modulation in the neon gas as a nonlinear medium. The created pulses were investigated using spectral phase interferometry for direct electric-field reconstruction (SPIDER), and a complete description of the output pulses was obtained, including the beam profile, the pulse width, and the spectral bandwidth. After reaching optimized conditions, the intensity autocorrelation function of the reconstructed pulse was used to determine the shortest pulse duration, achieving transform-limited ultrashort pulses with durations below 6 fs and energies up to 600 μJ. Moreover, the effect of neon pressure variation on the pulse width was examined; the nonlinear self-phase modulation was found to increase with the neon gas pressure. These results may lead to an advanced method for controlling and monitoring ultrashort transient interactions in femtochemistry.

Keywords: supercontinuum, ultrafast, SPIDER, ultra-broadband

Procedia PDF Downloads 224
18889 Channels Splitting Strategy for Optical Local Area Networks of Passive Star Topology

Authors: Peristera Baziana

Abstract:

In this paper, we present a network configuration for WDM LANs of passive star topology in which the set of data WDM channels is split into two separate sets with different access rights. In particular, a synchronous-transmission WDMA access algorithm is adopted in order to increase the probability of successful transmission over the data channels and, consequently, to reduce the probability that data-packet transmissions are cancelled to avoid collisions on the data channels. To this end, a pre-transmission control access scheme is followed over a separate control channel. An analytical Markovian model is studied, and the average throughput is mathematically derived. The performance is studied for several numbers of data channels and various values of the control phase duration.

Keywords: access algorithm, channels division, collisions avoidance, wavelength division multiplexing

Procedia PDF Downloads 296
18888 Comparison of Titanium and Aluminum Functions as Spoilers for Dose Uniformity Achievement in Abutting Oblique Electron Fields: A Monte Carlo Simulation Study

Authors: Faranak Felfeliyan, Parvaneh Shokrani, Maryam Atarod

Abstract:

Introduction: The use of electron beams is widespread in radiotherapy. The main criterion in radiation therapy is to irradiate the tumor volume with the maximum prescribed dose while minimizing the dose to the vital organs around it. Abutting fields are common in radiotherapy, and their main problem is dose inhomogeneity in the junction region: electron beam divergence and lateral scattering may lead to hot and cold spots there. One solution is to use a spoiler to broaden the penumbra and make the dose uniform in the junction region. The goal of this research was to compare the effects of titanium and aluminum spoilers on dose uniformity in the junction region of oblique electron fields using Monte Carlo simulation. Dose uniformity in the junction region depends on the density, scattering power, and thickness of the spoiler and on the angle between the two fields. Materials and Methods: A Monte Carlo model of a Siemens Primus linear accelerator was simulated for a 5 MeV nominal energy electron beam using the manufacturer-provided specifications. The BEAMnrc and EGSnrc user codes were used to simulate the treatment head in electron mode (simulation of the beam model). The resulting phase space file was used as a source for dose calculations for a 10×10 cm2 field size at SSD = 100 cm in a 30×30×45 cm3 water phantom using the DOSXYZnrc user code (dose calculations). An automatic MP3-M water phantom tank, the MEPHYSTO mc2 software platform, and a Semiflex Chamber-31010 with a sensitive volume of 0.125 cm3 (PTW, Freiburg, Germany) were used for the dose distribution measurements, again with a 10×10 cm2 electron field size and SSD = 100 cm. The developed beam model was validated by comparing the measured and calculated depth and lateral dose distributions (verification of the electron beam model).
Spoilers (modeled with the SLAB component module) placed at the end of the electron applicator were simulated using the previously validated phase space file for the 5 MeV nominal energy and 10×10 cm2 field size (simulation of the spoiler). An in-house routine was developed to calculate the combined isodose curves resulting from the two simulated abutting fields (calculation of the dose distribution in abutting electron fields). Results: The developed 5.9 MeV electron beam model was verified by comparing the calculated and measured dose distributions. The maximum percentage difference between the calculated and measured PDD was 1%, except in the build-up region, where the difference was 2%. The difference between the calculated and measured profiles was 2% at the edges of the field and less than 1% elsewhere. The effects of PMMA, aluminum, titanium, and chromium spoilers with thicknesses equivalent to 5 mm of PMMA on dose uniformity in abutting normal electron fields were evaluated. Comparing R90 and the uniformity index of the different materials, aluminum was chosen as the optimum spoiler; titanium gave the maximum surface dose. Thus, aluminum and titanium were chosen for achieving dose uniformity in oblique electron fields. Using the optimum beam spoiler, the junction dose decreased from 160% to 110% for 15-degree, from 180% to 120% for 30-degree, from 160% to 120% for 45-degree, and from 180% to 100% for 60-degree oblique abutting fields. Using the titanium spoiler, the junction dose decreased from 160% to 120% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees, and from 180% to 110% for 60 degrees. In addition, the penumbra width at the surface for 15 degrees increased from 10 mm without a spoiler to 15.5 mm with the titanium spoiler; for 30 degrees it increased from 9 mm to 15 mm, for 45 degrees from 4 mm to 6 mm, and for 60 degrees from 5 mm to 8 mm.
Conclusion: With spoilers, the penumbra width at the surface increased, the size and depth of hot spots decreased, and dose homogeneity improved at the junction of abutting electron fields. The dose at the junction region of abutting oblique fields was improved significantly by using a spoiler. The maximum dose at the junction region for 15⁰, 30⁰, 45⁰, and 60⁰ decreased by about 40%, 60%, 40%, and 70%, respectively, for titanium, and by about 50%, 60%, 40%, and 80% for aluminum. Despite the significant decrease in maximum dose achieved with the titanium spoiler, the dose in the junction region could not be reduced below 110%.

Keywords: abutting fields, electron beam, radiation therapy, spoilers

Procedia PDF Downloads 176
18887 Investigation of a Natural Convection Heat Sink for LEDs Based on Micro Heat Pipe Array-Rectangular Channel

Authors: Wei Wang, Yaohua Zhao, Yanhua Diao

Abstract:

The exponential growth of the lighting industry has rendered traditional thermal technologies inadequate for the thermal management challenges inherent to high-power light-emitting diode (LED) technology. To enhance the thermal management of LEDs, this study proposes a heat sink configuration that integrates a micro heat pipe array (MHPA), based on phase-change technology, with rectangular channels. The thermal performance of the heat sink was evaluated experimentally: at input powers of 100 W, 150 W, and 200 W, the LED substrate temperatures were 47.64°C, 56.78°C, and 69.06°C, respectively, and the maximum temperature difference of the MHPA in the vertical direction was 0.32°C, 0.30°C, and 0.30°C, respectively. The results demonstrate that the heat sink not only effectively dissipates the heat generated by the LEDs but also exhibits excellent temperature uniformity. Based on the experimental measurements, a corresponding numerical model was developed as part of this study. Following model validation, the effect of the structural parameters of the heat sink on its heat dissipation was examined using response surface methodology (RSM). The rectangular channel width, channel height, channel length, number of channel cross-sections, and channel cross-section spacing were selected as input parameters, while the LED substrate temperature and the total mass of the heat sink were taken as the response variables. The responses were then subjected to an analysis of variance (ANOVA), which yielded a regression model that predicts the response from the input variables and offers direction for the design of the heat sink.

Keywords: light-emitting diodes, heat transfer, heat pipe, natural convection, response surface methodology

Procedia PDF Downloads 35
18886 Application of Model Tree in the Prediction of TBM Rate of Penetration with Synthetic Minority Oversampling Technique

Authors: Ehsan Mehryaar

Abstract:

The rate of penetration (RoP) is one of the vital factors in the cost and time of tunnel boring projects; therefore, predicting it can lead to a substantial increase in the efficiency of the project. RoP depends heavily on the geological properties of the project site and on TBM properties. In this study, 151 data points from the Queen's water tunnel were collected, including unconfined compressive strength, peak slope index, angle with weak planes, and distance between planes of weakness. Since the dataset is small, it was observed to be imbalanced; to address this, the synthetic minority oversampling technique (SMOTE) was utilized. A model based on a model tree is proposed, in which each leaf consists of a support vector machine model. The proposed model's performance is then compared to existing empirical equations in the literature.
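
The core idea of SMOTE, synthesizing minority samples by interpolating between a sample and one of its nearest minority-class neighbours, can be sketched in plain Python; the feature rows below are hypothetical, and a production implementation (e.g., in the imbalanced-learn package) handles neighbour search and class bookkeeping far more carefully:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic rows by linear interpolation between a random
    minority row and one of its k nearest minority neighbours (Euclidean)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()                     # interpolation fraction in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# hypothetical minority rows: (UCS, peak slope index, angle, plane spacing)
minority = [(120.0, 3.1, 45.0, 0.4), (135.0, 2.8, 50.0, 0.6), (128.0, 3.4, 40.0, 0.5)]
new_rows = smote(minority, n_new=5)
```

Each synthetic row lies on a segment between two real minority rows, so the oversampled set stays inside the observed feature ranges.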

Keywords: model tree, SMOTE, rate of penetration, TBM (tunnel boring machine), SVM

Procedia PDF Downloads 174
18885 Computer-Integrated Surgery of the Human Brain, New Possibilities

Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto

Abstract:

The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that extend surgeons' ability to train, plan, and carry out surgery. Patient-specific CIS of the brain requires several steps. 1 - Fast generation of brain models: based on recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them. After that, automatic mesh generation should create a mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid, ...) are located in the correct positions. 2 - Reliable and fast simulation of the surgical process: computational mechanics will be the crucial aspect of the entire procedure. New algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. 3 - Real-time provision of visual and haptic feedback: a sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses point 2 in particular. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometric nonlinearities, is the most widely used computational tool for simulating the mechanical response of soft tissues. However, the main drawback of FEM lies in the theory on which it is based, classical continuum mechanics, which assumes that matter is a continuum with no discontinuities. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities.
All of the approaches that equip FEM with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion, and phase-field methods, have drawbacks that make them unsuitable for surgery simulation. Interface elements require a priori knowledge of the crack path. The use of X-FEM in 3D is cumbersome. Element erosion does not conserve mass. The phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult for computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on peridynamics (PD). PD is a non-local theory of mechanics formulated without spatial derivatives. Its governing equations remain valid at points or surfaces of discontinuity, so it is especially suited to describing crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the way that is most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as in 2D, a remarkable advantage over all other computational techniques.
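
A minimal 1-D sketch of the bond-based PD idea described above: nodes interact with all neighbours inside a horizon, each bond carries a force proportional to its stretch, and a bond simply drops out once the stretch exceeds a critical value, so material separation needs no extra criterion. All constants (micromodulus, horizon, critical stretch) are illustrative placeholders, not calibrated tissue values:

```python
n, dx = 11, 1.0
delta = 2.5 * dx          # horizon: each node sees neighbours within 2.5 grid spacings
c = 1.0                   # micromodulus (placeholder)
s_crit = 0.05             # critical bond stretch (placeholder)

X = [i * dx for i in range(n)]          # reference positions of the bar's nodes
broken = set()                          # failed bonds, stored as (min_i, max_j)

def internal_force(u):
    """Sum the pairwise bond forces at each node; tensile bonds past s_crit fail."""
    f = [0.0] * n
    for i in range(n):
        for j in range(n):
            xi = X[j] - X[i]                       # reference bond vector
            if j == i or abs(xi) > delta or (min(i, j), max(i, j)) in broken:
                continue
            eta = u[j] - u[i]                      # relative displacement
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            if stretch > s_crit:
                broken.add((min(i, j), max(i, j)))  # spontaneous bond failure
                continue
            f[i] += c * stretch * (1.0 if xi + eta > 0 else -1.0) * dx
    return f

f_uniform = internal_force([0.01 * x for x in X])            # uniform 1% stretch
f_cracked = internal_force([0.0 if x < 5.0 else 1.0 for x in X])  # displacement jump
```

Under uniform stretch no bond fails and interior nodes are in equilibrium; after the displacement jump, the bonds crossing the mid-plane fail and a "crack" appears without any propagation criterion.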

Keywords: computational mechanics, peridynamics, finite element, biomechanics

Procedia PDF Downloads 80
18884 Enhanced Visible-Light Photocatalytic Activity of TiO2 Doped in Degradation of Acid Dye

Authors: B. Benalioua, I. Benyamina, M. Mansour, A. Bentouami, B. Boury

Abstract:

The objective of this study is the synthesis of a new TiO2-based photocatalyst and its application in the photo-degradation of an acid dye under visible light. The material obtained was characterized by XRD, BET, and UV-vis DRS. The photocatalytic efficiency of the Zn-Fe-TiO2 treated at 500°C was tested on indigo carmine under visible-light irradiation and compared with that of commercial titanium oxide TiO2-P25 (Degussa). XRD characterization of the Zn-Fe-TiO2 (500°C) revealed the presence of the anatase phase and the absence of the rutile phase, in contrast to the TiO2-P25 diffractogram. UV-visible diffuse-reflectance characterization showed that the Zn-Fe-TiO2 exhibits a redshift (a shift toward the visible) relative to commercial TiO2-P25; this property promises photocatalytic activity of Zn-Fe-TiO2 under visible light. Indeed, the photocatalytic efficiency of Zn-Fe-TiO2 under visible light is shown by the complete discoloration of a 16 mg/L indigo carmine solution after 40 minutes, whereas with TiO2-P25 discoloration is achieved only after 90 minutes.

Keywords: POA, heterogeneous photocatalysis, TiO2, doping

Procedia PDF Downloads 415
18883 Optimal Relaxation Parameters for Obtaining Efficient Iterative Methods for the Solution of Electromagnetic Scattering Problems

Authors: Nadaniela Egidi, Pierluigi Maponi

Abstract:

The approximate solution of a time-harmonic electromagnetic scattering problem for inhomogeneous media is required in several application contexts, and its two-dimensional formulation is a Fredholm integral equation of the second kind. This integral equation provides a formulation for the direct scattering problem, but it must also be solved several times in the numerical solution of the corresponding inverse scattering problem. The discretization of this Fredholm equation produces large, dense linear systems that are usually solved by iterative methods. In order to improve the efficiency of these iterative methods, we use symmetric SOR (SSOR) preconditioning, and we propose an algorithm for the evaluation of the associated relaxation parameter. We show the efficiency of the proposed algorithm through several numerical experiments with two Krylov subspace methods, i.e., Bi-CGSTAB and GMRES.
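
For intuition about the relaxation parameter, here is SSOR used directly as a stationary solver (a forward SOR sweep followed by a backward one) on a small symmetric positive-definite system. Note the caveats: the paper uses SSOR as a preconditioner inside Bi-CGSTAB/GMRES rather than as a standalone solver, and the toy matrix below is a 1-D Laplacian, not the discretized Fredholm kernel:

```python
def ssor_solve(A, b, omega, iters=200):
    """Stationary SSOR iteration: x_i <- (1 - omega) x_i + omega * (Gauss-Seidel update)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):                         # forward sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
        for i in reversed(range(n)):               # backward sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

# toy SPD system; its exact solution is x = (1, 2, 3)
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = ssor_solve(A, b, omega=1.1)
```

The convergence rate depends strongly on omega, which is exactly why an automatic rule for choosing the relaxation parameter pays off when the system must be solved many times.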

Keywords: Fredholm integral equation, iterative method, preconditioning, scattering problem

Procedia PDF Downloads 104
18882 Microstructure and Sintering of Boron-Alloyed Martensitic Stainless Steel

Authors: Ming-Wei Wu, Yu-Jin Tsai, Ching-Huai Chang

Abstract:

Liquid phase sintering (LPS) is a versatile technique for achieving effective densification of powder metallurgy (PM) steels and other materials. The aim of this study was to examine the influence of 0.6 wt% boron on the microstructure and LPS behavior of boron-alloyed 410 martensitic stainless steel. The results showed that adding 0.6 wt% boron markedly promotes LPS through a eutectic reaction and increases the sintered density of the 410 stainless steel: the density increased by 1.06 g/cm³ after sintering at 1225°C. Increasing the sintering temperature from 1225°C to 1275°C did not noticeably improve the sintered density further. After sintering at 1225°C~1275°C, the matrix was fully martensitic, and intragranular borides were found extensively, owing to the solidification of the eutectic liquid. The microstructure after LPS thus consisted of the martensitic matrix and (Fe, Cr)2B boride, as identified by electron backscatter diffraction (EBSD) and electron probe micro-analysis (EPMA).

Keywords: powder metallurgy, liquid phase sintering, stainless steel, martensite, boron, microstructure

Procedia PDF Downloads 258
18881 An Agent-Based Modeling and Simulation of Human Muscle

Authors: Sina Saadati, Mohammadreza Razzazi

Abstract:

In this article, we present an agent-based model of human muscle. A suitable muscle model is necessary for the analysis of human movement. It can be used by clinical researchers who study the influence of movement disorders such as Parkinson's disease, and it is also useful in the development of prostheses that receive electromyography (EMG) signals and generate force in response. Since we have focused on computational efficiency in this research, the model performs its calculations very quickly; for prosthetic applications, this makes it a power-efficient method. In this paper, we describe the agent-based model and then use it to simulate the human gait cycle. The method can also be applied in reverse to analyze gait in movement disorders.
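A toy sketch of what an agent-based muscle model can look like (the class name, recruitment rule, and parameter ranges below are illustrative inventions, not the authors' model): each motor-unit agent holds a recruitment threshold and a peak force, and the muscle force is the sum over recruited agents for a given excitation level, e.g. a processed EMG envelope.

```python
import numpy as np

rng = np.random.default_rng(0)

class MotorUnitAgent:
    """One motor unit: recruited when excitation exceeds its threshold."""
    def __init__(self, threshold, peak_force):
        self.threshold = threshold
        self.peak_force = peak_force

    def force(self, excitation):
        if excitation < self.threshold:
            return 0.0
        # linear ramp from recruitment threshold to full activation
        return self.peak_force * min(1.0, (excitation - self.threshold)
                                     / (1.0 - self.threshold))

# 100 agents with spread thresholds and peak forces (arbitrary units)
agents = [MotorUnitAgent(t, f) for t, f in
          zip(np.sort(rng.uniform(0.0, 0.9, 100)),
              rng.uniform(0.5, 2.0, 100))]

def muscle_force(excitation):
    # total muscle force is the sum of all recruited agents' contributions
    return sum(a.force(excitation) for a in agents)

for e in (0.0, 0.3, 0.6, 1.0):
    print(f"excitation {e:.1f} -> force {muscle_force(e):.1f}")
```

Because each agent is independent, the summation parallelizes trivially, which is one reason agent-based formulations can be computationally cheap.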

Keywords: agent-based modeling and simulation, human muscle, gait cycle, motion sickness


Procedia PDF Downloads 114
18880 Reflections on Lyotard's Reading of the Kantian Sublime and Its Political Import

Authors: Tugba Ayas Onol

Abstract:

The paper revisits Jean-François Lyotard's interpretation of the Kantian sublime as a tool for understanding politics after modernity. In 1985, Lyotard announced the end of rational politics based on consensus and claimed that new strategies are needed to recognize the political imperatives of marginalized groups. The charm of the sublime as a reflective judgment is grounded in the fact that the judgment of the sublime is free from any notion of consensus or common sense in particular. Lyotard interprets this feature of the sublime as a respect for heterogeneity, and for him aesthetic judgments can be a model for understanding justice in postmodern times, in which it seems hard to follow a single universal law among different phrase regimes. More importantly, the Kantian sublime speaks to what Lyotard addresses as the incommensurability of phrase genres. The present paper shall try to evaluate Lyotard's employment of the Kantian notion of the sublime in relation to its possible political import.

Keywords: Kant, Lyotard, sublime, politics

Procedia PDF Downloads 382
18879 Study of Superconducting Patch Printed on Electric-Magnetic Substrates Materials

Authors: Fortaki Tarek, S. Bedra

Abstract:

In this paper, the effects of both uniaxial anisotropy in the substrate and a high-Tc superconducting patch on the resonant frequency, half-power bandwidth, and radiation patterns are investigated using an electric field integral equation and the spectral domain Green's function. The analysis is based on a full electromagnetic wave model with London's equations, and the Gorter-Casimir two-fluid model is employed to investigate the resonant and radiation characteristics of a high-Tc superconducting rectangular microstrip patch printed on electric-magnetic uniaxially anisotropic substrate materials. The stationary phase technique is used for computing the radiation electric field. The obtained results demonstrate a considerable improvement in the half-power bandwidth of the rectangular microstrip patch when a superconducting patch is used instead of a perfectly conducting one. Further results show that a high-Tc superconducting rectangular microstrip patch on a uniaxial substrate with properly selected electric and magnetic anisotropy ratios is more advantageous than one on an isotropic substrate, exhibiting a wider bandwidth and better radiation characteristics. This behavior agrees with that observed experimentally for superconducting patches on isotropic substrates. The calculated results have been compared with measured ones available in the literature, and excellent agreement has been found.
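For reference, the temperature dependence that the Gorter-Casimir two-fluid model assigns to the London penetration depth, which feeds the surface impedance of the superconducting patch, can be evaluated as below. The critical temperature and zero-temperature depth are typical YBCO-like values, not numbers taken from the paper.

```python
import numpy as np

def london_depth(T, Tc=89.0, lam0=140e-9):
    """Gorter-Casimir two-fluid London penetration depth:
    lambda(T) = lambda(0) / sqrt(1 - (T/Tc)^4), valid for T < Tc.
    Tc and lam0 are illustrative YBCO-like values."""
    T = np.asarray(T, dtype=float)
    return lam0 / np.sqrt(1.0 - (T / Tc) ** 4)

for T in (20.0, 50.0, 77.0):
    print(f"T = {T:.0f} K -> penetration depth = {london_depth(T) * 1e9:.1f} nm")
```

The divergence of lambda(T) as T approaches Tc is what makes the patch's effective surface impedance, and hence its resonant frequency and bandwidth, strongly temperature dependent.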

Keywords: high Tc superconducting microstrip patch, electric-magnetic anisotropic substrate, Galerkin method, surface complex impedance with boundary conditions, radiation patterns

Procedia PDF Downloads 444
18878 From Two-Way to Multi-Way: A Comparative Study for Map-Reduce Join Algorithms

Authors: Marwa Hussien Mohamed, Mohamed Helmy Khafagy

Abstract:

Map-Reduce is a programming model widely used to extract valuable information from enormous volumes of data, and it is designed to support heterogeneous datasets. Apache Hadoop MapReduce is used extensively to uncover hidden patterns in workloads such as data mining and SQL-style queries. The most important operation for data analysis is the join, but the MapReduce framework does not support join algorithms directly. This paper explains and compares two-way and multi-way MapReduce join algorithms; we also implement the MR join algorithms and show the performance of each phase. Our experimental results show that, among the two-way algorithms, map-side join and map-merge join take the longest time because of the data sorting required in the preprocessing step, while among the multi-way algorithms, reduce-side cascade join takes the longest.
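The reduce-side (repartition) two-way join discussed above can be sketched in plain Python: the map phase tags each record with its source table, grouping by key plays the role of the shuffle, and the reducer cross-joins the per-key groups. The tiny tables here are illustrative.

```python
from collections import defaultdict

# Two toy input tables keyed by user id
orders = [(1, "order-A"), (2, "order-B"), (1, "order-C")]
users = [(1, "alice"), (2, "bob"), (3, "carol")]

def map_phase():
    # mappers emit (join_key, (table_tag, value)) pairs
    for key, val in users:
        yield key, ("U", val)
    for key, val in orders:
        yield key, ("O", val)

def reduce_phase(pairs):
    groups = defaultdict(list)         # the shuffle: group by join key
    for key, tagged in pairs:
        groups[key].append(tagged)
    for key, tagged in groups.items():
        left = [v for t, v in tagged if t == "U"]
        right = [v for t, v in tagged if t == "O"]
        for u in left:                 # inner join within each key group
            for o in right:
                yield key, u, o

print(sorted(reduce_phase(map_phase())))
# [(1, 'alice', 'order-A'), (1, 'alice', 'order-C'), (2, 'bob', 'order-B')]
```

Map-side joins avoid this shuffle entirely, but only after both inputs have been partitioned and sorted identically, which is exactly the preprocessing cost the paper's measurements highlight.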

Keywords: Hadoop, MapReduce, multi-way join, two-way join, Ubuntu

Procedia PDF Downloads 487
18877 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction

Authors: B. Godefroy, D. Beladjine, K. Beddiar

Abstract:

In France, construction BIM actors commonly cite the gains that BIM offers for the exploitation (operation) phase by integrating the whole life cycle of a building. Standardizing level of development (LOD) 7 would bring the digital model to this stage. Building owners include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing stock. In a context of financial constraint, BIM for exploitation aims to control costs, support long-term investment choices, renew the portfolio, and enable environmental standards to be met. It assumes a knowledge of the existing buildings, which is marked by their size and complexity. The information sought must be synthetic and structured, and it generally concerns a real estate complex. We conducted a study with professionals about their concerns and their ways of using BIM, to see how building owners could benefit from this development. To obtain results, keeping in mind the recurring questions from project management about the needs of operators, we tested the following stages: 1) instill a minimal BIM culture in the operator's multidisciplinary teams, and then by business line; 2) learn, through BIM tools, how each trade adapts to operations; 3) understand the place and creation of a graphic and technical database management system, and determine the components of its library according to operator needs; 4) identify the cross-functional interventions of the operator's managers by business line (operations, technical, information systems, purchasing, and legal); 5) set an internal protocol and define the impact of BIM on the digital strategy.
In addition, ensuring continuity of management by carrying construction models into the operation phase raises the question of interoperability: the control of IFC file production in the operator's proprietary format and the export and import processes, a solution rivaled by the traditional method of vectorizing paper plans. Companies that digitize housing complexes, and those in facilities management, produce IFC files directly according to their needs, without recourse to the construction model; they produce business models for exploitation and standardize the components and equipment that are useful for coding. We observed the consequences of using BIM in the property industry and made the following observations: a) the value of data prevails over graphics, and 3D is little used; b) the owner must, through his organization, promote the feedback of technical management information during the design phase; c) the operator's reflection on outsourcing concerns the acquisition of its information system and related services, weighing the risks and costs of internal versus external development. This study allows us to highlight: i) the need for operators to organize internally before responding to construction management; ii) an evolution towards automated methods for creating models dedicated to exploitation, for which specialization would be required; iii) a review of project management communication, since management continuity does not revolve around the building model alone and must take into account the operator's environment and scope of action.

Keywords: information system, interoperability, models for exploitation, property industry

Procedia PDF Downloads 144
18876 Streamwise Vorticity in the Wake of a Sliding Bubble

Authors: R. O’Reilly Meehan, D. B. Murray

Abstract:

In many practical situations, bubbles are dispersed in a liquid phase. Understanding these complex bubbly flows is therefore a key issue for applications such as shell-and-tube heat exchangers, mineral flotation, and oxidation in water treatment. Although a large body of work exists for bubbles rising in an unbounded medium, bubbles rising in constricted geometries have received less attention. The particular case of a bubble sliding underneath an inclined surface is common to two-phase flow systems. The current study intends to expand this knowledge by performing experiments to quantify the streamwise flow structures associated with a single air bubble sliding under an inclined surface in quiescent water. This is achieved by means of two-dimensional, two-component particle image velocimetry (PIV), performed with a continuous wave laser and high-speed camera. PIV vorticity fields obtained in a plane perpendicular to the sliding surface show that there is significant bulk fluid motion away from the surface. The associated momentum of the bubble means that this wake motion persists for a significant time before viscous dissipation. The magnitude and direction of the flow structures in the streamwise measurement plane are found to depend on the point on its path through which the bubble enters the plane. This entry point, represented by a phase angle, affects the nature and strength of the vortical structures. This study reconstructs the vorticity field in the wake of the bubble, converting the field at different instants in time to slices of a large-scale wake structure. This is, in essence, Taylor's "frozen turbulence" hypothesis. Applying this to the vorticity fields provides a pseudo three-dimensional representation from 2-D data, allowing for a more intuitive understanding of the bubble wake.
This study provides insights into the complex dynamics of a situation common to many engineering applications, particularly shell and tube heat exchangers in the nucleate boiling regime.
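The frozen-turbulence reconstruction described above amounts to stacking the measured 2-D slices into a pseudo-volume, with each frame's time stamp converted to a streamwise coordinate through the bubble's (assumed constant) sliding velocity. A minimal sketch with synthetic stand-in data (the velocity, frame rate, and field values are illustrative, not the experiment's):

```python
import numpy as np

U = 0.25                          # assumed sliding velocity, m/s (illustrative)
dt = 1.0 / 500.0                  # PIV frame interval for a 500 Hz camera
# synthetic stand-in for measured vorticity slices w(t, y, z)
frames = np.random.default_rng(0).normal(size=(100, 32, 32))

# frozen-turbulence mapping: each time instant becomes x = U * t
x_positions = U * dt * np.arange(frames.shape[0])
# reverse the stack so the last-measured slice sits furthest upstream
volume = frames[::-1]

print(volume.shape)               # pseudo-3-D wake volume (x, y, z)
```

The validity of this mapping rests on the wake structures advecting past the measurement plane faster than they evolve, which the bubble's persistent momentum supports here.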

Keywords: bubbly flow, particle image velocimetry, two-phase flow, wake structures

Procedia PDF Downloads 377
18875 A High Step-Up DC-DC Converter for Renewable Energy System Applications

Authors: Sopida Vacharasukpo, Sudarat Khwan-On

Abstract:

This paper proposes a high step-up DC-DC converter topology for renewable energy system applications. The proposed converter employs only a single power switch instead of several switches. Compared with conventional DC-DC step-up converters, a higher voltage gain with small output ripple can be achieved by the proposed topology. It can step up the low input voltage (20–50 Vdc) generated by photovoltaic modules to a high output voltage of approximately 600 Vdc in order to supply a three-phase inverter feeding a three-phase motor drive. In this paper, the operating principle of the proposed converter topology and its control strategy under continuous conduction mode (CCM) are described. Finally, simulation results are presented to demonstrate the effectiveness of the proposed high step-up DC-DC converter and its control strategy in increasing the voltage step-up conversion ratio.
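For context on why a dedicated high step-up topology is needed: an ideal conventional boost converter has gain M = 1/(1 - D), so reaching 600 Vdc from the stated 20–50 Vdc input would demand duty cycles above 0.9, which is impractical once switch losses and diode reverse recovery are accounted for. A quick check (figures illustrative, topology not the paper's):

```python
def boost_duty_cycle(v_in, v_out):
    """Duty cycle of an ideal conventional boost converter,
    from the gain relation M = v_out / v_in = 1 / (1 - D)."""
    return 1.0 - v_in / v_out

for v_in in (20.0, 30.0, 50.0):
    gain = 600.0 / v_in
    d = boost_duty_cycle(v_in, 600.0)
    print(f"Vin = {v_in:>4.0f} V -> required gain {gain:>4.1f}, duty cycle {d:.3f}")
```

All three cases land at D between roughly 0.92 and 0.97, outside the comfortable operating range of a plain boost stage, motivating the proposed single-switch high-gain topology.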

Keywords: DC-DC converter, high step-up ratio, renewable energy, single switch

Procedia PDF Downloads 1193
18874 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data

Authors: Wanhyun Cho, Soonja Kang, Sanggoon Kim, Soonyoung Park

Abstract:

We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method that can be used to obtain the approximate posteriors over the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution over the various parameters and the latent function by using variational Bayesian approximations and an importance sampling method, and we then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.
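A toy sketch of the generative side of such a model (it does not reproduce the paper's variational inference or importance sampling): one GP-distributed latent function per class, squashed through a softmax to yield multinomial class probabilities. The kernel choice, lengthscale, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, lengthscale=1.0):
    # squared-exponential Gram matrix over the input points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

n, n_classes = 50, 3
X = rng.uniform(-3.0, 3.0, size=(n, 2))          # multidimensional inputs
K = rbf_kernel(X) + 1e-6 * np.eye(n)             # jitter for stability
L = np.linalg.cholesky(K)

# one GP prior draw per class: columns of `latents` are latent functions
latents = L @ rng.standard_normal((n, n_classes))

def softmax(F):
    E = np.exp(F - F.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

probs = softmax(latents)      # rows are multinomial class probabilities
labels = probs.argmax(axis=1)
print(probs.shape)            # (50, 3); each row sums to 1
```

Inference then amounts to approximating the posterior over `latents` given observed labels, which is where the paper's variational Bayes and importance sampling enter.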

Keywords: multinomial Dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence

Procedia PDF Downloads 444
18873 The Use of Haar Wavelet Mother Signal Tool for Performance Analysis Response of Distillation Column (Application to Moroccan Case Study)

Authors: Mahacine Amrani

Abstract:

This paper reviews some Moroccan industrial applications of wavelets, especially the dynamic identification of a process model using the Haar mother wavelet response. Two recent Moroccan case studies are described using dynamic data originating from a distillation column and an industrial polyethylene process plant. The purpose of the wavelet scheme is to build on-line dynamic models. In both case studies, a comparison is carried out between the Haar mother wavelet response model and a linear difference equation model. Finally, based on a comparison of process performance and the best responses, the paper concludes which model may be useful for creating an estimated on-line internal model control scheme and for its application in model-predictive control (MPC). All calculations were implemented using the AutoSignal software.
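For readers unfamiliar with the Haar mother wavelet, one decomposition level reduces a signal to pairwise scaled averages (the approximation, carrying the process trend) and pairwise scaled differences (the detail, carrying local dynamics); the signal below is illustrative, not plant data.

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar wavelet transform: scaled pairwise
    averages (approximation) and differences (detail).
    Signal length must be even."""
    s = np.asarray(signal, dtype=float)
    pairs = s.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_level(x)
print(a)   # smoothed trend at half the resolution
print(d)   # local fluctuations

# perfect reconstruction: interleave (a+d)/sqrt(2) and (a-d)/sqrt(2)
recon = np.column_stack([(a + d), (a - d)]).ravel() / np.sqrt(2.0)
print(np.allclose(recon, x))
```

Recursing on the approximation coefficients yields the multi-resolution decomposition used for on-line model identification.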

Keywords: process performance, model, wavelets, Haar, Moroccan

Procedia PDF Downloads 317
18872 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand, but increases the computational cost on the other. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better alternative to uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of some randomly sampled photon bundles was recorded to train an artificial neural network (ANN) back-propagation model. The flux calculated using the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed with the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the problem environment can be fully represented to the ANN model; better results can be achieved in this unexplored domain.
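The variance advantage of low-discrepancy sequences is easy to demonstrate on a toy integral (unrelated to the radiative transfer problem itself), using SciPy's `qmc` module for a scrambled Halton sequence:

```python
import math
import numpy as np
from scipy.stats import qmc

# Estimate I = \int_0^1 \int_0^1 exp(-(x^2 + y^2)) dx dy two ways.
f = lambda p: np.exp(-(p[:, 0] ** 2 + p[:, 1] ** 2))
n = 4096

mc = f(np.random.default_rng(0).random((n, 2))).mean()   # standard MC
halton = f(qmc.Halton(d=2, seed=0).random(n)).mean()     # quasi-MC

# exact value: the integrand is separable, I = (sqrt(pi)/2 * erf(1))^2
exact = (math.sqrt(math.pi) / 2.0 * math.erf(1.0)) ** 2
print(abs(mc - exact), abs(halton - exact))  # Halton error is typically far smaller
```

The same swap of the random number stream, applied to the wavelength and direction sampling inside a PMC photon loop, is what turns PMC into the QMC estimator discussed above.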

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 223
18871 High-Pressure Calculations of the Elastic Properties of ZnSₓSe₁₋ₓ Alloy in the Virtual-Crystal Approximation

Authors: N. Lebga, Kh. Bouamama, K. Kassali

Abstract:

We report first-principles calculation results on the structural and elastic properties of the ZnSₓSe₁₋ₓ alloy, for which we employed the virtual-crystal approximation provided with the ABINIT program. The calculations were done using density functional theory within the local density approximation, employing the virtual-crystal approximation. We made a comparative study between the numerical results obtained from ab-initio calculations using ABINIT or Wien2k, within the density functional theory framework with either the local density approximation or the generalized gradient approximation, and the pseudo-potential plane-wave method with Hartwigsen-Goedecker-Hutter scheme potentials. It is found that the lattice parameter, the phase transition pressure, and the elastic constants (and their derivatives with respect to pressure) follow a quadratic law in x. The variation of the elastic constants is also studied numerically, and the phase transformations are discussed in relation to the mechanical stability criteria.
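The quadratic composition dependence reported here is the usual Vegard-type interpolation between the binary endpoints plus a bowing term; a sketch (the endpoint and bowing values below are hypothetical placeholders, not the paper's computed numbers):

```python
import numpy as np

def alloy_property(x, p_zns, p_znse, bowing):
    """Quadratic law in composition x for a ZnS(x)Se(1-x) property:
    P(x) = x * P_ZnS + (1 - x) * P_ZnSe - b * x * (1 - x)."""
    x = np.asarray(x, dtype=float)
    return x * p_zns + (1.0 - x) * p_znse - bowing * x * (1.0 - x)

# lattice parameter (angstrom): hypothetical endpoint values and bowing
a = alloy_property(np.linspace(0.0, 1.0, 5),
                   p_zns=5.41, p_znse=5.67, bowing=0.02)
print(np.round(a, 4))   # interpolates from ZnSe toward ZnS, slightly bowed
```

Fitting this form to the computed points for each property (lattice parameter, transition pressure, elastic constants) yields the bowing parameter that quantifies the deviation from linear Vegard behavior.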

Keywords: density functional theory, elastic properties, ZnS, ZnSe

Procedia PDF Downloads 574
18870 Characterization and Modelling of Aerosol Droplet in Absorption Columns

Authors: Hammad Majeed, Hanna Knuutila, Magne Hillestad, Hallvard F. Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas CO₂ capture processes. SO₃ present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist and fog droplets can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm³ have been identified from post-combustion CO₂ capture (PCCC) plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in Matlab. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as functions of position in the absorber. The Matlab model is based on a weighted-residuals method for boundary value problems known as orthogonal collocation. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations describing the droplet internal profiles for all relevant constituents. Heat transfer across the interface and inside the droplet is also included. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO₂ absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes over time.
Preliminary simulation results for aerosol droplet composition and temperature profiles are also given.
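As a flavor of the numerical machinery, the sketch below applies Chebyshev orthogonal collocation (Trefethen's differentiation matrix) to a planar diffusion-reaction analog of the droplet internal-profile equations, c'' = φ²c with a symmetry condition at the center and a fixed surface concentration. The modulus φ and grid size are illustrative; the paper's actual model couples many species with heat transfer and is implemented in Matlab.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on N+1 points in [-1, 1]
    (Trefethen's classic construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

phi, N = 2.0, 24                     # reaction modulus (illustrative), grid size
D, xc = cheb(N)
r = (xc + 1.0) / 2.0                 # map [-1, 1] -> [0, 1]
D1 = 2.0 * D                         # chain rule for the mapping

# collocate c'' - phi^2 c = 0, then overwrite the boundary rows
A = D1 @ D1 - phi ** 2 * np.eye(N + 1)
b = np.zeros(N + 1)
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 1.0   # surface (r = 1): c = 1
A[N, :] = D1[N, :]                          # center (r = 0): c' = 0

c = np.linalg.solve(A, b)
exact = np.cosh(phi * r) / np.cosh(phi)     # analytic solution for this toy
print(np.max(np.abs(c - exact)))            # spectral accuracy
```

A few dozen collocation points resolve the internal profile to near machine precision, which is why the method keeps the full absorber-plus-droplet model tractable.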

Keywords: absorption columns, aerosol formation, amine emissions, internal droplet profiles, monoethanolamine (MEA), post combustion CO2 capture, simulation

Procedia PDF Downloads 246