Search results for: single layer model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21634

20194 An Approach of Node Model TCnNet: Trellis Coded Nanonetworks on Graphene Composite Substrate

Authors: Diogo Ferreira Lima Filho, José Roberto Amazonas

Abstract:

Nanotechnology opens the door to new paradigms that introduce a variety of novel tools, enabling a plethora of potential applications in the biomedical, industrial, environmental, and military fields. This work proposes an integrated node model by applying the concepts of TCNet to networks of nanodevices in which the nodes are cooperatively interconnected in a low-complexity Mealy Machine (MM) topology. The model integrates in the same electronic system the modules necessary for independent operation in wireless sensor networks (WSNs): rectennas (RF-to-DC power converters), code generators based on a Finite State Machine (FSM) and a trellis decoder, and on-chip transmit/receive, with autonomy in terms of energy sources through the energy harvesting technique. This approach considers the use of a Graphene Composite Substrate (GCS) for the integrated electronic circuits, which meets the following characteristics: mechanical flexibility, miniaturization, and optical transparency, besides being ecological. In addition, graphene consists of a single layer of carbon atoms arranged in a honeycomb crystal lattice, which has attracted the attention of the scientific community due to its unique electrical characteristics.
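
As an illustrative aside (not taken from the authors' implementation), a Mealy machine produces its output from both the current state and the current input, which is what allows a compact code generator to be realized with very little logic. A minimal Python sketch with a hypothetical two-state differential encoder is:

```python
# Minimal Mealy machine sketch: the output depends on (state, input).
# The states, symbols and transition table are hypothetical and only
# illustrate the low-complexity structure mentioned in the abstract.
class MealyMachine:
    def __init__(self, transitions, initial_state):
        # transitions: {(state, input_symbol): (next_state, output_symbol)}
        self.transitions = transitions
        self.state = initial_state

    def step(self, symbol):
        self.state, output = self.transitions[(self.state, symbol)]
        return output

    def run(self, symbols):
        return [self.step(s) for s in symbols]

# Two-state differential encoder: output = current input XOR previous input.
transitions = {
    ("S0", 0): ("S0", 0), ("S0", 1): ("S1", 1),
    ("S1", 0): ("S0", 1), ("S1", 1): ("S1", 0),
}
encoder = MealyMachine(transitions, "S0")
print(encoder.run([0, 1, 1, 0, 1]))  # -> [0, 1, 0, 1, 1]
```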

Keywords: composite substrate, energy harvesting, finite state machine, graphene, nanotechnology, rectennas, wireless sensor networks

Procedia PDF Downloads 96
20193 Modeling of Sand Boil near the Danube River

Authors: Edina Koch, Károly Gombás, Márton Maller

Abstract:

The Little Plain is located along the Danube river, and this area is a “hotbed” of sand boil formation. This is due to the combination of a 100-250 m thick gravel layer beneath the Little Plain with a relatively thin blanket of poor soil of variable thickness covering the gravel. Sand boils have a tradition and history in this area: it is known at which water level individual sand boils started and stopped working, and some of them even have names. The authors present a 2D finite element model of groundwater flow through a selected cross-section of the Danube river at which activation of piping phenomena was observed during the 2013 flood event. Soil parametrization is based on a complex site investigation program conducted along the Danube River in the Little Plain.

Keywords: site characterization, groundwater flow, numerical modeling, sand boil

Procedia PDF Downloads 84
20192 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature virtually allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when radionuclides and their activity concentrations are being identified and high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., efficiency curves for a given detector-sample configuration and its geometry, is not always easy and requires a certain set of reference calibration sources in order to cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of the models of two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector over the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively.
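
For reference, the experimental full-energy peak efficiency against which such simulated models are typically verified follows the standard relation (notation ours, not reproduced from the paper):

\[
\varepsilon_{FEP}(E) = \frac{N_{net}(E)}{A \cdot p_{\gamma}(E) \cdot t_{live}}
\]

where N_net(E) is the net peak area at energy E, A the source activity at the time of measurement, p_γ(E) the gamma emission probability, and t_live the live measurement time.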

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 109
20191 Effect of Loose Bonding and Corrugated Boundary Surface on Propagation of Rayleigh-Type Wave

Authors: Kshitish Ch. Mistri, Abhishek Kumar Singh

Abstract:

The effect of an undulatory boundary surface of a medium, as well as of the degree of bonding between two consecutive media, on the propagation of surface waves cannot be ignored. Therefore, this paper investigates the propagation of a Rayleigh-type wave in a corrugated fibre-reinforced layer overlying an initially stressed orthotropic half-space under gravity. The two media are assumed to be loosely (or imperfectly) bonded. Numerical computation of the obtained frequency equation has been carried out, which aids in analyzing the influence of corrugation, loose bonding, initial stress and gravity on the phase velocity of the Rayleigh-type wave. Moreover, the cases with and without corrugation, loose bonding and initial stress are also discussed in a comparative manner.

Keywords: corrugated boundary surface, fibre-reinforced layer, initial stress, loose bonding, orthotropic half-space, Rayleigh-type wave

Procedia PDF Downloads 269
20190 Design of a Controlled BHJ Solar Cell Using Modified Organic Vapor Spray Deposition Technique

Authors: F. Stephen Joe, V. Sathya Narayanan, V. R. Sanal Kumar

Abstract:

A comprehensive review of the literature on photovoltaic cells has been carried out to explore better options for cost-efficient technologies for future solar cell applications. The literature review reveals that Bulk Heterojunction (BHJ) polymer solar cells offer special opportunities as renewable energy resources. It is evident from previous studies that a device fabricated with a TiOx layer shows better power conversion efficiency than a device without a TiOx layer. In this paper, the authors designed a controlled BHJ solar cell using a modified organic vapor spray deposition technique facilitated with a vertical-moving gun, named the 'Stephen Joe Technique', for obtaining a desirable surface pattern over the substrate and thereby improving its efficiency for industrial applications. We conclude that efficient processing and interface engineering of these solar cells could increase the efficiency to 5-10%.

Keywords: BHJ polymer solar cell, photovoltaic cell, solar cell, Stephen Joe technique

Procedia PDF Downloads 533
20189 Applying Renowned Energy Simulation Engines to Neural Control System of Double Skin Façade

Authors: Zdravko Eškinja, Lovre Miljanić, Ognjen Kuljača

Abstract:

This paper is an overview of simulation tools used to model the specific thermal dynamics that occur while controlling a double skin façade. Research has been conducted on a simplified construction with a single zone where one side is glazed. Heat flow and temperature responses are simulated in three different simulation tools: IDA-ICE, EnergyPlus and HAMBASE. The excitation of the observed system, used in all simulations, was a temperature step of the exterior environment. Air infiltration, insulation and other disturbances are excluded from this research. Although such isolated behaviour is not possible in reality, the experiments are carried out to gain novel information about heat flow transients which are not observable under regular conditions. The results revealed new possibilities for adapting the parameters of the neural network regulator. Alongside the numerical simulations, the same set-up has also been tested in a real-time experiment with a 1:18 scaled model and a thermal chamber. The comparison analysis yields interesting conclusions about simulation accuracy in this particular case.

Keywords: double skin façade, experimental tests, heat control, heat flow, simulated tests, simulation tools

Procedia PDF Downloads 224
20188 An Elbow Biomechanical Model and Its Coefficients Adjustment

Authors: Jie Bai, Yongsheng Gao, Shengxin Wang, Jie Zhao

Abstract:

The establishment of an elbow biomechanical model can provide theoretical guidance for rehabilitation therapy of the upper limb of the human body. A biomechanical model of the elbow joint can be built by connecting a muscle force model with elbow dynamics. However, there are many undetermined coefficients in the model, such as the optimal joint angle and optimal muscle force, which are usually taken from experimental parameters reported by other researchers. Because of individual differences, this introduces a certain deviation in the final result. To this end, the RMS value of the deviation between the actual angle and the calculated angle is considered. The set of coefficients that leads to the minimum RMS value is chosen as the optimal parameters. The direct search method and the conjugacy search method are used to obtain the optimal parameters, thus making the model more accurate and more adaptable.
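
A minimal sketch of the coefficient-adjustment idea described above (hypothetical model and data, not the authors' code) is to minimize the RMS deviation between measured and calculated joint angles with a derivative-free direct search such as Nelder-Mead:

```python
# Sketch: fit undetermined model coefficients (e.g. optimal joint angle,
# optimal muscle force) by minimizing the RMS angle deviation.
# The elbow model below is a placeholder; the real muscle-force and
# elbow-dynamics equations would go in its place.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 2.0, 200)                  # time samples
measured_angle = 0.8 * np.sin(2.0 * np.pi * t)  # hypothetical measured angles

def elbow_model(coeffs, t):
    amplitude, phase = coeffs                   # stand-ins for model coefficients
    return amplitude * np.sin(2.0 * np.pi * t + phase)

def rms_deviation(coeffs):
    predicted = elbow_model(coeffs, t)
    return np.sqrt(np.mean((predicted - measured_angle) ** 2))

# Direct (derivative-free) search for the coefficient set with minimum RMS.
result = minimize(rms_deviation, x0=[1.0, 0.1], method="Nelder-Mead")
print("best coefficients:", result.x, "RMS:", result.fun)
```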

Keywords: elbow biomechanical model, RMS, direct search, conjugacy search

Procedia PDF Downloads 534
20187 Forecasting for Financial Stock Returns Using a Quantile Function Model

Authors: Yuzhi Cai

Abstract:

In this paper, we introduce a newly developed quantile function model that can be used for estimating conditional distributions of financial returns and for obtaining multi-step-ahead out-of-sample predictive distributions of financial returns. Since we forecast the whole conditional distribution, any predictive quantity of interest about future financial returns can be obtained simply as a by-product of the method. We also show an application of the model to the daily closing prices of the Dow Jones Industrial Average (DJIA) series over the period from 2 January 2004 to 8 October 2010. We obtained the predictive distributions up to 15 days ahead for the DJIA returns, which were further compared with the actually observed returns and with those predicted from an AR-GARCH model. The results show that the new model can capture the main features of financial returns and provides a better fitted model together with improved mean forecasts compared with conventional methods. We hope this will help the audience see that this new model has the potential to be very useful in practice.
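
As a hedged illustration of the 'any quantity as a by-product' point (simulated draws standing in for the model's predictive distribution, not the authors' quantile function model), once a predictive distribution for a future return is available its quantiles, interval forecasts and tail probabilities are read off directly:

```python
# Sketch: given draws from a predictive distribution of a future return
# (simulated here as a stand-in), any predictive quantity is a by-product.
import numpy as np

rng = np.random.default_rng(0)
predictive_draws = rng.normal(loc=0.0003, scale=0.012, size=100_000)  # hypothetical

var_95 = np.quantile(predictive_draws, 0.05)               # 5% quantile (95% VaR level)
interval_90 = np.quantile(predictive_draws, [0.05, 0.95])  # 90% interval forecast
prob_loss = np.mean(predictive_draws < 0.0)                # probability of a negative return

print(var_95, interval_90, prob_loss)
```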

Keywords: DJIA, financial returns, predictive distribution, quantile function model

Procedia PDF Downloads 358
20186 Surveying Energy Dissipation in Stepped Spillway Using Finite Element Modeling

Authors: Mehdi Fuladipanah

Abstract:

A stepped spillway includes several steps from the crest to the toe. The steps of a stepped spillway dissipate energy by distributing the energy loss along the length of the spillway and also reduce the outlet velocity. The aim of this study was to simulate the stepped spillway combined with a stilling basin-step using the Fluent model, with the turbulent free-surface flow modelled using the RNG k-ε model. The free surface of the flow was tracked by the VOF model. The flow velocity and depth at the tailwater were obtained from the numerical model, and the dissipated energy was then calculated along the spillway. The results indicated that the stilling basin-step combination may increase energy dissipation in the stepped spillway. Also, the numerical model is suggested as an effective method to predict the circulating and complicated flows in stepped spillways.
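
For orientation, the dissipated energy in such studies is usually evaluated from the specific energy upstream and at the tailwater section (standard open-channel relations, not reproduced from the paper):

\[
E = y + \frac{\alpha V^2}{2g}, \qquad \Delta E\,(\%) = \frac{E_0 - E_1}{E_0} \times 100
\]

where y is the flow depth, V the mean velocity, α the kinetic-energy correction factor, E₀ the specific energy at the upstream (crest) section and E₁ at the downstream (tailwater) section.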

Keywords: stepped spillway, fluent model, VOF model, K-ε model, energy distribution

Procedia PDF Downloads 363
20185 Robotic Assisted vs Traditional Laparoscopic Partial Nephrectomy Peri-Operative Outcomes: A Comparative Single Surgeon Study

Authors: Gerard Bray, Derek Mao, Arya Bahadori, Sachinka Ranasinghe

Abstract:

The EAU currently recommends partial nephrectomy as the preferred management for localised cT1 renal tumours, irrespective of surgical approach. With the advent of robotic assisted partial nephrectomy, there is growing evidence that warm ischaemia time may be reduced compared to the traditional laparoscopic approach. There are still no clear differences between the two approaches with regard to other peri-operative and oncological outcomes. Current limitations in the field include the lack of single-surgeon series comparing the two approaches, as other studies often include multiple operators of different experience levels. To the best of our knowledge, this study is the first single-surgeon series comparing peri-operative outcomes of robotic assisted and laparoscopic PN. The current study aims to reduce intra-operator bias while maintaining an adequate sample size to assess the differences in outcomes between the two approaches. We retrospectively compared patient demographics, peri-operative outcomes, and renal function derangements of all partial nephrectomies undertaken by a single surgeon with experience in both laparoscopic and robotic surgery. Warm ischaemia time, length of stay, and acute renal function deterioration were all significantly reduced with robotic partial nephrectomy compared to laparoscopic partial nephrectomy. This study highlights the benefits of robotic partial nephrectomy. Further prospective studies with larger sample sizes would be valuable additions to the current literature.

Keywords: partial nephrectomy, robotic assisted partial nephrectomy, warm ischaemia time, peri-operative outcomes

Procedia PDF Downloads 133
20184 Correction Factor to Enhance the Non-Standard Hammer Effect Used in Standard Penetration Test

Authors: Khaled R. Khater

Abstract:

The standard weight of the SPT hammer is 0.623 kN. Locally manufactured drilling rigs use hammers whose weight sometimes deviates from the standard. This affects the field-measured blow counts (Nf) and, consequently, most of the correlations previously obtained, as they were derived based on the standard hammer weight. The literature presents an energy correction factor (η2) to be applied to the SPT total input energy. This research investigates the effect of hammer weight variation, as a single parameter, on the field-measured blow counts (Nf). The outcome is a correction factor (ηk), an equation, and a correction chart. They are recommended for adjusting the misleading measured (Nf) back to the standard value, as if the standard hammer had been used. This correction is very important in cases where a non-standard hammer is being used, because the bore logs in any geotechnical report should contain true and representative values of (Nf), let alone the long records of correlations already in hand. The study herein is achieved by using a laboratory physical model to simulate the SPT drop hammer mechanism. It is designed to allow different hammer weights to be used. Also, it is manufactured to avoid and eliminate the sources of energy loss, which produces a transmitted efficiency of up to 100%.
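
As an illustration of the intent of such a correction (the paper's calibrated ηk equation and chart are not reproduced here), a first-order energy-proportionality argument gives, for the same drop height,

\[
\eta_k \approx \frac{W_{hammer}}{W_{standard}} = \frac{W_{hammer}}{0.623\ \text{kN}}, \qquad N_{standard} \approx \eta_k\, N_f
\]

where Nf is the blow count measured with the non-standard hammer; the calibrated factor and chart developed in the study should be used in practice.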

Keywords: correction factors, hammer weight, physical model, standard penetration test

Procedia PDF Downloads 376
20183 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna

Abstract:

Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for biological control of pests, and it is important to determine the optimal conditions to produce the highest amount of conidiospores of Trichoderma harzianum. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized by using Artificial Neural Networks (ANNs). In order to gather data on this process, 30 experiments were carried out, taking into account the number of hours of culture (10 distributed values from 48 to 136 hours) and the culture humidity (70, 75 and 80 percent), obtaining as a response the number of conidiospores per gram of dry mass. The experimental results were used to develop an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers, and every hidden layer with a number of neurons from 1 to 10. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The ANN with the best performance was chosen in order to simulate the process and be able to maximize conidiospore production. The obtained ANN with the highest performance has 2 inputs and 1 output, and three hidden layers with 3, 10 and 10 neurons, respectively. The ANN performance shows an R2 value of 0.9900, and the Root Mean Squared Error is 1.2020. This ANN predicted that 644,175,467 conidiospores per gram of dry mass is the maximum amount, obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum because the R2 value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
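
A minimal sketch of the selected network topology (2 inputs, hidden layers of 3, 10 and 10 neurons, 1 output) is given below using scikit-learn; note that scikit-learn does not provide the Levenberg-Marquardt training used in the study, so a quasi-Newton solver is substituted, and the training data are placeholders:

```python
# Sketch of the selected ANN topology (2 inputs -> 3 -> 10 -> 10 -> 1 output).
# Training data are placeholders for the 30 experiments of
# (culture hours, humidity %) -> conidiospores per gram of dry mass.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

X = np.array([[48, 70], [72, 75], [96, 80], [117, 77], [136, 70]], dtype=float)  # hypothetical
y = np.array([1.0e8, 2.5e8, 4.0e8, 6.4e8, 3.0e8])                                # hypothetical

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(3, 10, 10), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Scan the operating window to locate the predicted production maximum.
hours = np.linspace(48, 136, 200)
humidity = np.linspace(70, 80, 50)
grid = np.array([[h, w] for h in hours for w in humidity])
pred = model.predict(scaler.transform(grid))
print("predicted optimum (hours, humidity):", grid[np.argmax(pred)])
```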

Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network

Procedia PDF Downloads 148
20182 Identification of Risks Associated with Process Automation Systems

Authors: J. K. Visser, H. T. Malan

Abstract:

A need exists to identify the sources of risks associated with the process automation systems within petrochemical companies or similar energy-related industries. These companies use many different process automation technologies in their value chains. A crucial part of the process automation system is the information technology component featuring in the supervisory control layer. The ever-changing technology within the process automation layers and the rate at which it advances pose a risk to safe and predictable automation system performance. The age of the automation equipment also provides challenges to the operations and maintenance managers of the plant due to obsolescence and unavailability of spare parts. The main objective of this research was to determine the risk sources associated with the equipment that is part of the process automation systems. A secondary objective was to establish whether technology managers and technicians were aware of the risks and share the same viewpoint on the importance of the risks associated with automation systems. A conceptual model for risk sources of automation systems was formulated from models and frameworks in the literature. This model comprised six categories of risk which form the basis for identifying specific risks. The model was used to develop a questionnaire that was sent to 172 instrument technicians and technology managers in the company to obtain primary data. Seventy-five completed and useful responses were received. These responses were analyzed statistically to determine the highest risk sources and to determine whether there was a difference in opinion between technology managers and technicians. The most important risks that were revealed in this study are: 1) the lack of skilled technicians, 2) integration capability of third-party system software, 3) reliability of the process automation hardware, 4) excessive costs pertaining to performing maintenance and migrations on process automation systems, and 5) requirements of having third-party communication interfacing compatibility as well as real-time communication networks.

Keywords: distributed control system, identification of risks, information technology, process automation system

Procedia PDF Downloads 129
20181 Long-Baseline Single-epoch RTK Positioning Method Based on BDS-3 and Galileo Penta-Frequency Ionosphere-Reduced Combinations

Authors: Liwei Liu, Shuguo Pan, Wang Gao

Abstract:

In order to take full advantage of the BDS-3 penta-frequency signals in long-baseline RTK positioning, a long-baseline RTK positioning method based on BDS-3 penta-frequency ionosphere-reduced (IR) combinations is proposed. First, the low-noise and weak-ionospheric-delay characteristics of the multi-frequency combined observations of BDS-3 are analyzed. Second, multi-frequency extra-wide-lane (EWL)/wide-lane (WL) combinations with long wavelengths are constructed. Third, the fixed IR EWL combinations are used to constrain the IR WL ambiguities, which in turn constrain the narrow-lane (NL) ambiguities, and multi-epoch filtering is then started. There is no need to consider the influence of ionospheric parameters in the third step. Compared with the estimated ionospheric model, the proposed method reduces the number of parameters by half, so it is suitable for multi-frequency and multi-system real-time RTK. Results using real data show that the stepwise fixing model of the IR EWL/WL/NL combinations can realize long-baseline instantaneous centimeter-level positioning.
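
For context (a generic textbook form, not the specific coefficients selected in the paper), a multi-frequency carrier-phase combination with coefficients c₁…c₅ on frequencies f₁…f₅ has combined frequency, wavelength and first-order ionospheric scale factor

\[
f_c = \sum_{n=1}^{5} c_n f_n, \qquad \lambda_c = \frac{c}{f_c}, \qquad
\beta_c = \frac{f_1^2}{f_c} \sum_{n=1}^{5} \frac{c_n}{f_n}
\]

so the ionosphere-reduced EWL/WL combinations are those with long λc and |βc| close to zero, which is what permits the stepwise EWL, WL and NL ambiguity fixing described above.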

Keywords: penta-frequency, ionospheric-reduced (IR), RTK positioning, long-baseline

Procedia PDF Downloads 150
20180 Molecular Pathogenesis of NASH through the Dysregulation of Metabolic Organ Network in the NASH-HCC Model Mouse Treated with Streptozotocin-High Fat Diet

Authors: Bui Phuong Linh, Yuki Sakakibara, Ryuto Tanaka, Elizabeth H. Pigney, Taishi Hashiguchi

Abstract:

NASH is an increasingly prevalent chronic liver disease that can progress to hepatocellular carcinoma and is now attracting interest worldwide. The STAM™ model is a clinically correlated murine NASH model which shows the same pathological progression as NASH patients and has been widely used for pharmacological and basic research. The multiple parallel hits hypothesis suggests that abnormalities in adipocytokines, intestinal microflora, and endotoxins are intertwined and could contribute to the development of NASH. In fact, NASH patients often exhibit gut dysbiosis and dysfunction in adipose tissue and metabolism. However, analysis of the STAM™ model has only focused on the liver. To clarify whether the STAM™ model can also mimic multiple pathways of NASH progression, we analyzed the organ crosstalk interactions between the liver and the gut and the phenotype of adipose tissue in the STAM™ model. NASH was induced in male mice by a single subcutaneous injection of 200 µg streptozotocin 2 days after birth and feeding with a high-fat diet from 4 weeks of age. The mice were sacrificed at the NASH stage. Colon samples were snap-frozen in liquid nitrogen and stored at -80˚C for tight junction-related protein analysis. Adipose tissue was prepared into paraffin blocks for HE staining. Blood adiponectin was analyzed to confirm changes in the adipocytokine profile. Analysis of tight junction-related proteins in the intestine showed that expression of ZO-1 decreased with the progression of the disease. Increased expression of endotoxin in the blood and decreased expression of adiponectin were also observed. HE staining revealed hypertrophy of adipocytes. The decreased expression of ZO-1 in the intestine of STAM™ mice suggests the occurrence of leaky gut, and abnormalities in adipocytokine secretion were also observed. Together with the liver, the phenotypes in these organs are highly similar to those of human NASH patients and might be involved in the pathogenesis of NASH.

Keywords: Non-alcoholic steatohepatitis, hepatocellular carcinoma, fibrosis, organ crosstalk, leaky gut

Procedia PDF Downloads 151
20179 Modeling Continuous Flow in a Curved Channel Using Smoothed Particle Hydrodynamics

Authors: Indri Mahadiraka Rumamby, R. R. Dwinanti Rika Marthanty, Jessica Sjah

Abstract:

Smoothed particle hydrodynamics (SPH) was originally created to simulate nonaxisymmetric phenomena in astrophysics. However, this method still has several shortcomings, namely the high computational cost required to model values with high resolution and problems with boundary conditions. The difficulty of modeling boundary conditions occurs because the SPH method is affected by particle deficiency due to the integral of the kernel function being truncated by boundary conditions. This research aims to determine whether SPH modeling with a focus on boundary layer interactions and continuous flow can produce quantifiably accurate values at low computational cost. This research combines algorithms and coding in the main meandering-river program, a continuous flow algorithm, and a solid-fluid algorithm, with the aim of obtaining quantitatively accurate results on solid-fluid interactions with continuous flow in a meandering channel using the SPH method. This study uses the Fortran programming language for modeling the SPH (Smoothed Particle Hydrodynamics) numerical method; the model takes the form of a 3D U-shaped meandering open channel, where the channel walls are soil particles, and uses a continuous flow with a limited number of particles.
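
For readers unfamiliar with the method, the core SPH interpolation on which such a model rests (standard formulation, not the authors' specific code) approximates a field quantity at particle i from its neighbours j within the kernel support:

\[
A_i = \sum_j \frac{m_j}{\rho_j}\, A_j\, W\!\left(\lvert \mathbf{r}_i - \mathbf{r}_j \rvert, h\right)
\]

where m_j and ρ_j are the neighbour's mass and density, W the smoothing kernel and h the smoothing length; the truncation of this sum at solid boundaries is precisely the particle-deficiency problem discussed above.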

Keywords: smoothed particle hydrodynamics, computational fluid dynamics, numerical simulation, fluid mechanics

Procedia PDF Downloads 118
20178 Monitoring Three-Dimensional Models of Tree and Forest by Using Digital Close-Range Photogrammetry

Authors: S. Y. Cicekli

Abstract:

In this study, a three-dimensional model of a tree was created by using terrestrial close-range photogrammetry. For this purpose, close-range photos were taken. Photomodeler Pro 5 software was used for camera calibration and to create three-dimensional models of the trees. In the first test, a three-dimensional model of a single tree was created; in the second test, a three-dimensional model of three trees was created. The aim of this study is to create three-dimensional models of trees and to demonstrate the use of close-range photogrammetry in forestry. At the end of the study, three-dimensional models of a single tree and of three trees were created. This study showed the usability of close-range photogrammetry for monitoring three-dimensional models of trees and forests.

Keywords: close-range photogrammetry, forest, tree, three-dimensional model

Procedia PDF Downloads 381
20177 A Mathematical-Based Formulation of EEG Fluctuations

Authors: Razi Khalafi

Abstract:

The brain is the information processing center of the human body. Stimuli in the form of information are transferred to the brain, and the brain then decides how to respond to them. In this research, we propose a new partial differential equation which analyses the EEG signals and establishes a relationship between the incoming stimuli and the brain's response to them. In order to test the proposed model, a set of external stimuli was applied to the model, and the model's outputs were checked against real EEG data. The results show that this model can represent the EEG signal well. The proposed model is useful not only for modeling the EEG signal in the case of external stimuli but also for modeling the brain response in the case of internal stimuli.

Keywords: brain, stimuli, partial differential equation, response, EEG signal

Procedia PDF Downloads 420
20176 Performance and Availability Analysis of 2N Redundancy Models

Authors: Yutae Lee

Abstract:

In this paper, we consider the performance and availability of a redundancy model. The redundancy model is a form of resilience that ensures service availability in the event of component failure. This paper considers a 2N redundancy model, in which there is at most one active service unit and at most one standby service unit. The active unit provides the service, while the standby unit is prepared to take over the active role when the active unit fails. We design our analysis model using Stochastic Reward Nets and then evaluate the performance and availability of the 2N redundancy model using the Stochastic Petri Net Package (SPNP).
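
As a rough idealised reference (a closed form under simplifying assumptions, not the SRN results of the paper), if each unit has steady-state availability a = MTTF/(MTTF + MTTR) and failover is assumed instantaneous and perfect, the 2N pair is unavailable only when both units are down:

\[
a = \frac{MTTF}{MTTF + MTTR}, \qquad A_{2N} \approx 1 - (1 - a)^2
\]

A Stochastic Reward Net analysis such as the one described above can relax these assumptions, e.g. by modelling non-instantaneous switchover and repair behaviour explicitly.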

Keywords: availability, performance, stochastic reward net, 2N redundancy

Procedia PDF Downloads 406
20175 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail. Thermal characteristics are aggregated while other characteristics, which could affect the energy efficiency of a building, are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes’ ability to accurately emulate the average energy demands of the corresponding buildings they are meant to represent. This is done for the buildings’ energy demands as a whole as well as for relevant sub-demands. Both are evaluated in relation to the type and the age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to the use of the archetype method.

Keywords: building stock energy modelling, energy-savings, archetype

Procedia PDF Downloads 147
20174 Numerical Simulation of Plasma Actuator Using OpenFOAM

Authors: H. Yazdani, K. Ghorbanian

Abstract:

This paper deals with the modeling and simulation of a plasma actuator with OpenFOAM. The plasma actuator is one of the newest devices in flow control techniques; it can delay separation by inducing external momentum into the boundary layer of the flow. The effects of the plasma actuator on the external flow are incorporated into Navier-Stokes computations as a body force vector, which is obtained as the product of the net charge density and the electric field. In order to compute this body force vector, the model solves two equations: one for the electric field due to the applied AC voltage at the electrodes and the other for the charge density representing the ionized air. The simulation result is compared to experimental and typical values, which confirms the validity of the modeling.
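
Written out, the body-force treatment described above takes the standard electrohydrodynamic form, adding a source term to the momentum equation:

\[
\mathbf{f}_b = \rho_c\, \mathbf{E}, \qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\,\nabla^2 \mathbf{u} + \mathbf{f}_b
\]

with ρc the net charge density (from the charge-density equation) and E the electric field (from the applied AC voltage), matching the two auxiliary equations the model solves.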

Keywords: active flow control, flow-field, OpenFOAM, plasma actuator

Procedia PDF Downloads 299
20173 One or More Building Information Modeling Managers in France: The Confusion of the Kind

Authors: S. Blanchard, D. Beladjine, K. Beddiar

Abstract:

Since 2015, the arrival of BIM in the building sector in France has turned the professional world upside down. Not only construction practices have been impacted, but also the practices and the people involved, who have undergone important changes. Thus, the new collaborative mode generated by BIM and the digital model has challenged the supremacy of some construction actors, because the process involves working together while taking into account the needs of the other contributors. New BIM tools have emerged, and actors in the act of building must take ownership of them. It is in this context, under the impetus of a European directive and the French government's encouragement, that new missions and job profiles have emerged. Moreover, concurrent engineering requires that each actor can advance at the same time as the others, according to the information that reaches him and the information he has to transmit. However, in the French legal system around public procurement, things are not planned in this direction, so a substantial evolution must take place to adapt to the methodology. The new missions generated by BIM in France require a good mastery of the tools and the process. Also, to meet the objectives of the BIM approach, it is possible to define a typical job profile around BIM, adapted to the various sectors concerned. The multitude of job offers using the same terms with very different objectives, and the complexity of the proposed missions, motivated our approach. In order to reinforce exchanges with professionals and specialists, we carried out a statistical study to address this problem. Five topics are discussed around the business area: BIM in the company, the function (business), the software used, and the BIM missions practiced (39 items). About 1,400 professionals were interviewed. These people work in construction companies (micro businesses, SMEs, and groups), engineering offices, or architectural agencies. 77% of respondents have the status of employees. All participants are graduates in their trade, the majority having level 1. Most people have less than a year of experience in BIM, but some have 10 years. The results of our survey help to understand why it is not possible to define a single type of BIM Manager. Indeed, the specificities of the companies are so numerous and complex, and the missions so varied, that there is not a single model for the function. On the other hand, it was possible to define three main professions around BIM (Manager, Coordinator and Modeler) and three main missions for the BIM Manager (deployment of the method, assistance to project management, and management of a project).

Keywords: BIM manager, BIM modeler, BIM coordinator, project management

Procedia PDF Downloads 156
20172 A Study on Stochastic Integral Associated with Catastrophes

Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan

Abstract:

We analyze stochastic integrals associated with a mutation process. To be specific, we describe the cell population process and derive the differential equations for the joint generating functions of the number of mutants and their integrals, together with their applications. We obtain the first-order moment structure of X(t) and Y(t) for the two-way mutation process and the second-order moments of a one-way mutation process. We also obtain the limiting behaviour of the integrals through the limiting distributions of X(t) and Y(t).

Keywords: stochastic integrals, single–server queue model, catastrophes, busy period

Procedia PDF Downloads 633
20171 The Investigation of LPG Injector Control Circuit on a Motorcycle

Authors: Bin-Wen Lan, Ying-Xin Chen, Hsueh-Cheng Yang

Abstract:

Liquefied petroleum gas (LPG) is a fuel that has a high octane number and a low carbon number. This paper uses an MCS-51 controller to investigate the effect of LPG on exhaust emissions at different engine speeds in a single-cylinder, four-stroke, spark-ignition engine. The results indicate that, with the developed controller, CO, CO2 and NOx exhaust emissions are lower with the use of LPG than with the use of unleaded gasoline. The open-loop LPG injection system was controlled by an MCS-51 single chip. The results show that if an SI engine is operated with LPG fuel rather than gasoline fuel under the same conditions, a significant reduction in exhaust emissions can be achieved. In summary, LPG has positive effects on the main exhaust emissions such as CO, CO2 and NOx.

Keywords: LPG, control circuit, emission, MCS-51

Procedia PDF Downloads 490
20170 A Mathematical Equation to Calculate Stock Price of Different Growth Model

Authors: Weiping Liu

Abstract:

This paper presents an equation to calculate stock prices for different growth models. The equation is mathematically derived by using the discounted cash flow method. It has the advantages of being very easy to use and very accurate. It can still be used even when the first stage is lengthy. The equation is more general because it can be used for all three popular stock price models. It can be programmed into a financial calculator or electronic spreadsheets. In addition, it can be extended to a multistage model. It is more versatile and efficient than the traditional methods.
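
For orientation (the paper's own generalized equation is not reproduced here), the familiar two-stage discounted cash flow form that such an equation unifies is

\[
P_0 = \sum_{t=1}^{n} \frac{D_t}{(1+r)^t} + \frac{1}{(1+r)^n} \cdot \frac{D_{n+1}}{r - g}
\]

where D_t are the dividends over the first (high-growth) stage of n years, r the required return and g the stable growth rate thereafter; the constant-growth (Gordon) model is the special case n = 0.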

Keywords: stock price, multistage model, different growth model, discounted cash flow method

Procedia PDF Downloads 394
20169 Investments in Petroleum Industry Abnormally Normal: A Case Study Based on Petroleum and Natural Gas Companies in India

Authors: Radhika Ramanchi

Abstract:

The oil market in India during 2014-2015, with its large price fluctuations, was very confusing to individual investors. The drop in oil prices supported the stocks of some oil marketing companies (OMCs) such as Bharat Petroleum Corporation, Hindustan Petroleum Corporation (HPCL) and Indian Oil Corporation, whose shares rose 84.74%, 128.63% and 59.16%, respectively. Lower oil prices, a lower current account deficit and a smaller subsidy burden are the reasons for this outperformance. On the other hand, lower crude prices put downward pressure on upstream companies such as Oil and Natural Gas Corp. Ltd (ONGC), Reliance Petroleum (RIL) and Oil India Ltd (OIL). The lack of clarity on a subsidy-sharing mechanism is the reason for the downward trend in these stocks. When oil prices fall, the profits of these companies are affected; they generate less money and may cut their dividends in the long run. In this situation, the objective of this paper is to study investment strategies in oil marketing companies by applying the CAPM and the Security Market Line.
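
The CAPM relation underlying the Security Market Line used in the study is the standard one (notation ours):

\[
E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right)
\]

where R_f is the risk-free rate, E[R_m] the expected market return and β_i the stock's sensitivity to market movements; under this framework, stocks plotting above the SML are undervalued relative to their systematic risk.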

Keywords: petrol industry, price fluctuations, sharp single index model, SML, Markowitz model

Procedia PDF Downloads 213
20168 Simulation and Assessment of Carbon Dioxide Separation by Piperazine Blended Solutions Using E-NRTL and Peng-Robinson Models: Study of Regeneration Heat Duty

Authors: Arash Esmaeili, Zhibang Liu, Yang Xiang, Jimmy Yun, Lei Shao

Abstract:

A high-pressure carbon dioxide (CO₂) absorption from a specific off-gas in a conventional column has been evaluated, in view of environmental concerns, with the Aspen HYSYS simulator, using a wide range of single absorbents and piperazine (PZ) blended solutions to estimate the outlet CO₂ concentration, CO₂ loading, reboiler power supply, and regeneration heat duty, and to choose the most efficient solution in terms of CO₂ removal and required heat duty. The property package, which is compatible with all the solutions applied in this simulation study, estimates the properties based on the electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for the vapor phase and liquid hydrocarbon phase properties. The results of the simulation indicate that piperazine and the mixture of piperazine and monoethanolamine (MEA) demand the highest regeneration heat duties among the studied single and blended amine solutions, respectively. The blended amine solutions with the lowest PZ concentrations (5 wt% and 10 wt%) were considered and compared in order to reduce the cost of the process, among which the blended solution of 10 wt% PZ + 35 wt% MDEA (methyldiethanolamine) was found to be the most appropriate solution in terms of CO₂ content in the outlet gas, rich-CO₂ loading, and regeneration heat duty.

Keywords: absorption, amine solutions, aspen HYSYS, CO₂ loading, piperazine, regeneration heat duty

Procedia PDF Downloads 176
20167 Experimental Study on Single Bay RC Frame Designed Using EC8 under In-Plane Cyclic Loading

Authors: N. H. Hamid, M. S. Syaref, M. I. Adiyanto, M. Mohamed

Abstract:

A one-half scale single-bay, two-storey RC frame together with a foundation beam and a mass concrete block is investigated. The moment-resisting RC frame was designed using EC8, including the provisions for seismic loading and the detailing of its connections. The objective of the experimental work is to determine the seismic behaviour of the RC frame under in-plane lateral cyclic loading using the displacement control method. A double actuator is placed at the centre of the mass concrete block at the top of the frame to represent the seismic load. The percentage drifts start from ±0.01% and go up to ±2.25% in increments of ±0.25% drift. The ultimate lateral load of 158.48 kN was recorded at +2.25% drift in the pushing direction and -126.09 kN in the pulling direction. From the experimental hysteresis loops, parameters such as lateral strength capacity, stiffness, ductility and equivalent viscous damping can be obtained. The RC frame behaves in an elastic manner, followed by inelastic behaviour after reaching the yield limit. The ductility value for this type of frame is 4, which lies between the limits of 3 and 6. Therefore, it is recommended to build this RC frame in moderate seismic regions under Ductility Class Medium (DCM), such as in Sabah, East Malaysia.

Keywords: single bay, moment resisting RC frame, ductility class medium, inelastic behavior, seismic load

Procedia PDF Downloads 376
20166 Empirical Analytical Modelling of Average Bond Stress and Anchorage of Tensile Bars in Reinforced Concrete

Authors: Maruful H. Mazumder, Raymond I. Gilbert

Abstract:

The design specifications for calculating development and lapped splice lengths of reinforcement in concrete are derived from a conventional empirical modelling approach that correlates experimental test data using a single mathematical equation. This paper describes part of a recently completed experimental research program to assess the effects of different structural parameters on the development length requirements of modern high strength steel reinforcing bars, including the case of lapped splices in large-scale reinforced concrete members. The normalized average bond stresses for the different variations of anchorage lengths are assessed according to the general form of a typical empirical analytical model of bond and anchorage. Improved analytical modelling equations are developed in the paper that better correlate the normalized bond strength parameters with the structural parameters of an empirical model of bond and anchorage.

Keywords: bond stress, development length, lapped splice length, reinforced concrete

Procedia PDF Downloads 426
20165 Structure Clustering for Milestoning Applications of Complex Conformational Transitions

Authors: Amani Tahat, Serdal Kirmizialtin

Abstract:

Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS) and Transition Path Sampling are the prime choices for extending the timescale of all-atom Molecular Dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational space into natural subgroups based on similarity, an essential statistical methodology that is used for analyzing the numerous sets of empirical data produced by Molecular Dynamics (MD) simulations. The local transition kernel among these clusters is later used to connect the metastable states using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures might easily confuse the identification of clusters when using the traditional hierarchical clustering methodology. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work we present two cluster analysis methods applied to the cis–trans isomerism of the dinucleotide AA. The choice of nucleic acids over the commonly used proteins for the cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model for studying conformational transitions; ii) the conformational space of nucleic acids is high-dimensional, and a diverse set of internal coordinates is necessary to describe their metastable states, posing a challenge in studying the conformational transitions. Hence, we need improved clustering methods that accurately identify the AA structure in its metastable states in a robust way for a wide range of confused data conditions. The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data. The Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second clustering methodology. The performance of the neural network as well as of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of, and the need for, determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confused empirical data in studying conformational transitions in biomolecules.
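
As a minimal sketch of the first of the two methodologies compared above (single-linkage hierarchical clustering; the SOM analysis and the actual dinucleotide coordinates are not reproduced), clustering of conformational feature vectors with SciPy looks like:

```python
# Sketch: single-linkage hierarchical clustering of conformational snapshots.
# Each row of `features` would hold the internal coordinates (e.g. torsions)
# of one MD snapshot of the AA dinucleotide; random data stand in here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 6))          # hypothetical 6-D internal coordinates

distances = pdist(features, metric="euclidean")
tree = linkage(distances, method="single")    # single-linkage hierarchy
labels = fcluster(tree, t=5, criterion="maxclust")  # cut the tree into 5 clusters

print(np.bincount(labels)[1:])                # population of each cluster
```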

Keywords: milestoning, self organizing map, single linkage, structure clustering

Procedia PDF Downloads 217