Search results for: H₂-optimal model reduction
18316 Text Mining of Twitter Data Using a Latent Dirichlet Allocation Topic Model and Sentiment Analysis
Authors: Sidi Yang, Haiyi Zhang
Abstract:
Twitter is a microblogging platform where millions of users share their attitudes, views, and opinions daily. Using a probabilistic Latent Dirichlet Allocation (LDA) topic model to discern the most popular topics in Twitter data is an effective way to analyze a large set of tweets and extract a set of topics in a computationally efficient manner. Sentiment analysis provides an effective method to reveal the emotions and sentiments found in each tweet and an efficient way to summarize the results in a manner that is clearly understood. The primary goal of this paper is to explore text mining, extracting and analyzing useful information from unstructured text, using two approaches: LDA topic modelling and sentiment analysis applied to Twitter plain-text data in English. These two methods allow people to mine data more effectively and efficiently. The LDA topic model and sentiment analysis can also be applied to provide insights in business and scientific fields.
Keywords: text mining, Twitter, topic model, sentiment analysis
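A minimal sketch of the two-step idea described above, using scikit-learn LDA on a few toy tweets and a tiny illustrative sentiment lexicon (the tweets, lexicon, and parameters are assumptions, not the paper's data or pipeline):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "I love the new phone, the camera is great",
    "terrible service today, very disappointed",
    "the match was amazing, what a great goal",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)            # per-tweet topic proportions

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")

# Toy lexicon-based sentiment: +1 per positive word, -1 per negative word.
POS, NEG = {"love", "great", "amazing"}, {"terrible", "disappointed"}
for t in tweets:
    words = t.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(t, "->", label)
```

A real pipeline would add proper tokenisation, a larger corpus, and a validated lexicon or classifier for the sentiment step.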
Procedia PDF Downloads 179

18315 Health Burden of Disease Assessment for Minimizing Aflatoxin Exposure in Peanuts
Authors: Min-Pei Ling
Abstract:
Aflatoxin is a fungal secondary metabolite with high toxicity capable of contaminating various types of food crops. It has been identified as a Group 1 human carcinogen by the International Agency for Research on Cancer. Chronic aflatoxin exposure has become a worldwide public food safety concern. Peanuts and peanut products are the major sources of aflatoxin exposure. Therefore, some reduction interventions have been developed to minimize contamination through the peanut production chain. The purpose of this study is to estimate the efficacy of interventions in reducing the health impact of hepatocellular carcinoma caused by aflatoxin contamination in peanuts. The estimated total disability-adjusted life-years (DALYs) were calculated using the FDA-iRISK online software. Six aflatoxin reduction strategies were evaluated, including good agricultural practice (GAP), biocontrol, Purdue Improved Crop Storage packaging, basic processing, ozonolysis, and ultraviolet irradiation. The results indicated that basic processing could prevent a public health loss of 4,079.7–21,833 total DALYs per year, which accounted for 39.6% of all decreased total DALYs. GAP and biocontrol were both effective strategies in the farm field, while the other three interventions were limited in reducing total DALYs. In conclusion, this study could help farmers, processing plants, and government policymakers to alleviate aflatoxin contamination issues in the peanut production chain.
Keywords: aflatoxin, health burden, disability-adjusted life-years, peanuts
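The bookkeeping behind an intervention comparison of this kind can be illustrated with a short sketch; all numbers below are assumed placeholders, not the study's FDA-iRISK inputs or results:

```python
# DALYs = YLL + YLD, summed over annual cases; an intervention is credited
# with the DALYs it averts per year relative to the baseline.
def total_dalys(cases_per_year, yll_per_case, yld_per_case):
    """Disability-adjusted life years lost per year."""
    return cases_per_year * (yll_per_case + yld_per_case)

baseline     = total_dalys(cases_per_year=2500, yll_per_case=20.0, yld_per_case=2.0)
with_process = total_dalys(cases_per_year=1500, yll_per_case=20.0, yld_per_case=2.0)

averted = baseline - with_process
print(f"DALYs averted per year: {averted:,.0f} "
      f"({100 * averted / baseline:.1f}% of the baseline burden)")
```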
Procedia PDF Downloads 133

18314 Effect of Corrosion on the Shear Buckling Strength
Authors: Myoung-Jin Lee, Sung-Jin Lee, Young-Kon Park, Jin-Wook Kim, Bo-Kyoung Kim, Song-Hun Chong, Sun-Ii Kim
Abstract:
The ability to resist shear arises mainly from the web panel of steel girders, and as such, the shear buckling strength of these girders has been extensively investigated. For example, Blaser reported that when buckling occurs, the tension field has an effect after the buckling strength of the steel is reached. The findings of these studies have been applied in AASHTO, AISC, and the European Code, which provide guidelines for designs aimed at preventing shear buckling. Steel girders are susceptible to corrosion resulting from exposure to natural elements such as rainfall, humidity, and temperature. This corrosion leads to a reduction in the size of the web panel section, thereby resulting in a decrease in the shear strength. The decrease in the panel section has a significant effect on the maintenance section of the bridge. However, in most conventional designs, the influence of corrosion is overlooked during the calculation of the shear buckling strength, and hence over-design is common. Therefore, in this study, a steel girder with an aspect ratio (A/D) of 1:1 and a 6-mm-thick web panel, 16-mm-thick flange, and 12-mm-thick intermediate reinforcing material was used. The total length was set to that of the default model (3,200 mm). The effect of corrosion on shear buckling was investigated by determining the corroded volume, the shape of the corrosion patterns, and the angular change in the tension field associated with the shear buckling strength. This study provides the basic data that will enable designs that incorporate values closer (than those used in most conventional designs) to the actual shear buckling strength.
Keywords: corrosion, shear buckling strength, steel girder, shear strength
Procedia PDF Downloads 375

18313 Analysis on the Need of Engineering Drawing and Feasibility Study on 3D Model Based Engineering Implementation
Authors: Parthasarathy J., Ramshankar C. S.
Abstract:
Engineering drawings today play an important role in every part of an industry and, by and large, are influential over every phase of the product development process. Traditionally, drawings are used for communication in industry because they are the clearest way to represent the product manufacturing information. Until recently, manufacturing activities were driven by engineering data captured in 2D paper documents or digital representations of those documents. The need for engineering drawings is undeniable. Still, engineering drawings are disadvantageous in that they require re-entry of data throughout the manufacturing life cycle; this document-based approach is prone to errors and requires costly re-entry of data at every stage. So there is a requirement to eliminate engineering drawings throughout the product development process and to implement 3D Model Based Engineering (3D MBE or 3D MBD). Adopting MBD appears to be the next logical step to continue reducing time-to-market and improve product quality. Ideally, by fully applying the MBD concept, the product definition will no longer rely on engineering drawings throughout the product lifecycle. This project addresses the need for engineering drawings and their influence in various parts of an industry, the need to implement 3D Model Based Engineering with its advantages, and the technical barriers that must be overcome in order to implement it. This project also addresses the requirements of neutral formats and their realisation in order to implement the digital product definition principles in a light format. In order to prove the concepts of 3D Model Based Engineering, the screw jack body part is also demonstrated. At ZF Windpower Coimbatore Limited, 3D Model Based Definition is implemented for the torque arm (machining and casting), steel tube, pinion shaft, cover, and energy tube.
Keywords: engineering drawing, model based engineering (MBE), MBD, CAD
Procedia PDF Downloads 435

18312 A Bi-Objective Model to Address Simultaneous Formulation of Project Scheduling and Material Ordering
Authors: Babak H. Tabrizi, Seyed Farid Ghaderi
Abstract:
Concurrent planning of project scheduling and material ordering has been increasingly addressed within recent decades as an approach to improving project execution costs. Therefore, we have taken the problem into consideration in this paper, aiming to maximize schedule quality robustness in addition to minimizing the relevant costs. In this regard, a bi-objective mathematical model is developed to formulate the problem. Moreover, it is possible to utilize the all-unit discount for materials purchasing. The problem is then solved by the ε-constraint method, and the Pareto front is obtained for a variety of robustness values. Finally, the applicability and efficiency of the proposed model are tested on different numerical instances.
Keywords: ε-constraint method, material ordering, project management, project scheduling
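The ε-constraint idea can be sketched on a toy bi-objective linear program (not the paper's scheduling/ordering formulation): keep one objective, move the other into a constraint bounded by ε, and sweep ε to trace an approximate Pareto front. All coefficients below are assumed for illustration.

```python
import numpy as np
from scipy.optimize import linprog

c1 = np.array([4.0, 3.0])        # objective 1: cost (minimised)
c2 = np.array([-1.0, -2.0])      # objective 2: negative robustness (minimised)
A_ub = np.array([[1.0, 1.0]]); b_ub = np.array([10.0])   # shared resource limit
bounds = [(0, 8), (0, 8)]

pareto = []
for eps in np.linspace(-24.0, 0.0, 7):          # sweep the bound on objective 2
    res = linprog(c1,
                  A_ub=np.vstack([A_ub, c2]),   # add "f2 <= eps" as a row
                  b_ub=np.append(b_ub, eps),
                  bounds=bounds, method="highs")
    if res.success:                             # infeasible eps values are skipped
        pareto.append((res.fun, c2 @ res.x))

for f1, f2 in pareto:
    print(f"cost = {f1:6.2f}   -robustness = {f2:6.2f}")
```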
Procedia PDF Downloads 295

18311 Estimation of Soil Moisture at High Resolution through Integration of Optical and Microwave Remote Sensing and Applications in Drought Analyses
Authors: Donglian Sun, Yu Li, Paul Houser, Xiwu Zhan
Abstract:
California experienced severe drought conditions in recent years. In this study, the drought conditions in California are analyzed using soil moisture anomalies derived from integrated optical and microwave satellite observations along with auxiliary land surface data. Based on the U.S. Drought Monitor (USDM) classifications, three typical drought conditions were selected for the analysis: extreme drought conditions in 2007 and 2013, severe drought conditions in 2004 and 2009, and normal conditions in 2005 and 2006. Drought is defined as a negative soil moisture anomaly. To estimate soil moisture at high spatial resolutions, three approaches are explored in this study: the universal triangle model that estimates soil moisture from the Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST); the basic model that estimates soil moisture under different conditions with auxiliary data like precipitation, soil texture, topography, and surface types; and the refined model that uses accumulated precipitation and its lagging effects. It is found that the basic model shows better agreement with the USDM classifications than the universal triangle model, while the refined model, using precipitation accumulated from the previous summer to the current time, demonstrated the closest agreement with the USDM patterns.
Keywords: soil moisture, high resolution, regional drought, analysis and monitoring
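A minimal sketch of the universal-triangle step only, with assumed normalisation ranges and placeholder regression coefficients (not values fitted in the study): soil moisture is expressed as a polynomial in normalised NDVI and normalised LST.

```python
import numpy as np

def normalise(x, x_min, x_max):
    return (x - x_min) / (x_max - x_min)

def soil_moisture(ndvi, lst, ndvi_rng=(0.05, 0.85), lst_rng=(280.0, 320.0),
                  coeff=None):
    """Second-order polynomial SM = sum_ij a_ij * NDVI*^i * LST*^j."""
    n = normalise(ndvi, *ndvi_rng)
    t = normalise(lst, *lst_rng)
    a = coeff if coeff is not None else np.array([[0.35, -0.30, 0.05],
                                                  [0.10, -0.05, 0.00],
                                                  [0.02,  0.00, 0.00]])
    powers = np.array([[n**i * t**j for j in range(3)] for i in range(3)])
    return float((a * powers).sum())

print(f"SM estimate: {soil_moisture(ndvi=0.45, lst=305.0):.3f} m3/m3")
```

In practice the coefficient matrix would be regressed against in-situ or microwave soil moisture over the NDVI-LST feature space.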
Procedia PDF Downloads 136

18310 Coverage Probability Analysis of WiMAX Network under Additive White Gaussian Noise and Predicted Empirical Path Loss Model
Authors: Chaudhuri Manoj Kumar Swain, Susmita Das
Abstract:
This paper explores a detailed procedure of predicting a path loss (PL) model and its application in estimating the coverage probability in a WiMAX network. For this, a hybrid approach is followed in predicting an empirical PL model of a 2.65 GHz WiMAX network deployed in a suburban environment. Data collection, statistical analysis, and regression analysis are the phases of operation incorporated in this approach, and the importance of each of these phases is discussed. The procedure of collecting data such as the received signal strength indicator (RSSI) through an experimental setup is demonstrated. From the collected data set, empirical PL and RSSI models are predicted with regression techniques. Furthermore, with the aid of the predicted PL model, essential parameters such as the PL exponent as well as the coverage probability of the network are evaluated. This research work may assist significantly in the deployment and optimisation of any cellular network.
Keywords: WiMAX, RSSI, path loss, coverage probability, regression analysis
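A sketch of the regression and coverage steps on synthetic RSSI data (not the measured 2.65 GHz data set): fit a log-distance model PL(d) = PL(d0) + 10·n·log10(d/d0) by linear regression, then estimate the coverage probability at a target distance for an assumed transmit power and receiver threshold under log-normal shadowing.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
d = np.linspace(50, 1000, 60)                       # distances in metres
true_n, pl_d0, sigma = 3.2, 60.0, 6.0               # assumed ground truth
pl_meas = pl_d0 + 10 * true_n * np.log10(d / 50.0) + rng.normal(0, sigma, d.size)

# Linear regression of PL on 10*log10(d/d0): the slope is the PL exponent n.
x = 10 * np.log10(d / 50.0)
n_hat, pl0_hat = np.polyfit(x, pl_meas, 1)
resid_sigma = np.std(pl_meas - (pl0_hat + n_hat * x))

def coverage_probability(distance, tx_power_dbm=43.0, threshold_dbm=-90.0):
    """P[received power > threshold] with Gaussian shadowing (Q-function)."""
    pl = pl0_hat + 10 * n_hat * np.log10(distance / 50.0)
    p_rx = tx_power_dbm - pl
    return 0.5 * erfc((threshold_dbm - p_rx) / (resid_sigma * sqrt(2)))

print(f"fitted exponent n = {n_hat:.2f}, shadowing sigma = {resid_sigma:.1f} dB")
print(f"coverage probability at 800 m: {coverage_probability(800):.2f}")
```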
Procedia PDF Downloads 177

18309 Causes of Variation Orders in the Egyptian Construction Industry: Time and Cost Impacts
Authors: A. Samer Ezeldin, Jwanda M. El Sarag
Abstract:
Variation orders are of great importance in any construction project. Variation orders are defined as any change in the scope of works of a project, whether an addition, omission, or even modification. This paper investigates the variation orders that occur during construction projects in Egypt. The literature review presents a comparison of the causes of variation orders in Egypt, Tanzania, Nigeria, Malaysia, and the United Kingdom. A classification of variation orders into owner-related factors, consultant-related factors, and other factors is set out in the literature review. These classified events that lead to variation orders were introduced in a survey of 19 events to observe their frequency of occurrence and their time and cost impacts. The survey data were obtained from 87 participants, including clients, consultants, and contractors, and a database of 42 scenarios was created. A model is then developed to assist project managers in predicting the frequency of variations, budgeting for any additional costs, and minimizing any delays that can take place. Two experts with more than 25 years of experience were given the model to verify that it was working effectively. The model was then validated on a residential compound that was completed in July 2016 to prove that it actually produces acceptable results.
Keywords: construction, cost impact, Egypt, time impact, variation orders
Procedia PDF Downloads 183

18308 Analysis of a Coupled Hydro-Sedimentological Numerical Model for the Western Tombolo of Giens
Authors: Yves Lacroix, Van Van Than, Didier Léandri, Pierre Liardet
Abstract:
The western Tombolo of the Giens peninsula in southern France, known as Almanarre beach, is subject to coastal erosion. We are trying to use computer simulation in order to propose solutions to stop this erosion. Our aim was first to determine the main factors for this erosion and to successfully apply a coupled hydro-sedimentological numerical model based on observations and measurements that have been performed on the site for decades. We have gathered all available information and data about waves, winds, currents, tides, bathymetry, the coastline, and sediments concerning the site. These have been divided into two sets: one devoted to calibrating a numerical model using the Mike 21 software, the other to serve as a reference in order to numerically compare the present situation to what it could be if we implemented different types of underwater constructions. This paper presents the first part of the study: selecting and merging different sources into a coherent database, identifying the main erosion factors, and calibrating the coupled software model against the selected reference period. Our results show that the numerical model can be calibrated with good fitting coefficients. They also show that the winter south-western storm events, combined with low-pressure weather conditions, constitute a major factor of erosion, mainly due to wave impact in the northern part of the Almanarre beach. The combined impact of currents and wind is shown to be negligible.
Keywords: Almanarre beach, coastal erosion, hydro-sedimentological, numerical model
Procedia PDF Downloads 376

18307 A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models
Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti
Abstract:
This work is the first building block in a rather wide research activity, in collaboration with the Euro-Mediterranean Center on Climate Change, aimed at introducing scalable approaches in Ocean Circulation Models. We discuss the design and implementation of a parallel algorithm for solving the Variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model, previously proposed by the authors, which uses a Domain Decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped onto GPU architectures.
Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm
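For orientation, the core 3DVar problem can be sketched with small dense matrices and a standard minimiser; the DD-DA domain decomposition and the GPU/CUDA porting are not reproduced here, and all matrices are synthetic assumptions.

```python
# Minimise J(x) = 1/2 (x-xb)' B^-1 (x-xb) + 1/2 (Hx-y)' R^-1 (Hx-y)
import numpy as np
from scipy.optimize import minimize

n, m = 8, 4                                  # state and observation sizes
rng = np.random.default_rng(0)
xb = rng.normal(size=n)                      # background state
B = 0.5 * np.eye(n)                          # background error covariance
H = rng.normal(size=(m, n))                  # observation operator
R = 0.2 * np.eye(m)                          # observation error covariance
y = H @ rng.normal(size=n)                   # observations

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost_and_grad(x):
    db, do = x - xb, H @ x - y
    J = 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do
    g = Binv @ db + H.T @ Rinv @ do
    return J, g

xa = minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B").x
print("analysis increment norm:", np.linalg.norm(xa - xb))
```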
Procedia PDF Downloads 412

18306 Giftedness Cloud Model: A Psychological and Ecological Vision of Giftedness Concept
Authors: Rimeyah H. S. Almutairi, Alaa Eldin A. Ayoub
Abstract:
The aim of this study was to identify empirical and theoretical studies that explored giftedness theories and identification, in order to assess and synthesize the mechanisms, outcomes, and impacts of gifted identification models. We thus sought to provide an evidence-informed answer to how current giftedness theories work and how effective they are, and to develop a model that incorporates the advantages of existing models and avoids their disadvantages as much as possible. We conducted a systematic literature review (SLR). The disciplined analysis resulted in a final sample consisting of 30 appropriate studies. The results indicated that: (a) there is no uniform and consistent definition of giftedness; (b) researchers use several inconsistent criteria to identify the gifted; and (c) the detection of talent is largely limited to early ages, with obvious neglect of adults. This study contributes to the development of the Giftedness Cloud Model (GCM), which is defined as a model that attempts to interpret giftedness within an interactive psychological and ecological framework. The GCM aims to help the talented reach the core of giftedness and manifest their talent in creative productivity or invention. Besides that, the GCM suggests classifying giftedness into four levels: mastery, excellence, creative productivity, and manifestation. In addition, the GCM presents an idea to distinguish between talent and giftedness.
Keywords: giftedness cloud model, talent, systematic literature review, giftedness concept
Procedia PDF Downloads 167

18305 Finite Element Simulation of RC Exterior Beam-Column Joints Using Damage Plasticity Model
Authors: A. M. Halahla, M. H. Baluch, M. K. Rahman, A. H. Al-Gadhib, M. N. Akhtar
Abstract:
In the present study, 3D simulations of a typical exterior reinforced concrete (RC) beam-column joint (BCJ) strengthened with carbon fiber-reinforced plastic (CFRP) sheets are carried out. Numerical investigations are performed using nonlinear finite element (FE) analysis in the commercial FE software ABAQUS, incorporating the concrete damage plasticity (CDP) model. For material behaviour, the concrete response in compression and tension softening, a linear plastic model with isotropic hardening for the reinforcing steel, and a linear elastic lamina material model for the CFRP sheets were used. The numerical models developed in the present study are validated against the results obtained from experiments under monotonic loading applied with a hydraulic jack in displacement-control mode. The experimental program includes casting deficient BCJs and loading them to failure for both un-strengthened and strengthened specimens. The failure mode and deformation response of CFRP-strengthened and un-strengthened joints and the propagation of damage in the components of the BCJ are discussed. Finite element simulations are compared with the experimental results and are noted to yield reasonable agreement. The damage plasticity model was able to capture the ultimate load and the mode of failure of the beam-column joint with good accuracy.
Keywords: reinforced concrete, exterior beam-column joints, concrete damage plasticity model, computational simulation, 3-D finite element model
Procedia PDF Downloads 383

18304 An Interlock Model of Friction and Superlubricity
Authors: Azadeh Malekan, Shahin Rouhani
Abstract:
Superlubricity is a phenomenon where two surfaces in contact show negligible friction; this may be because the asperities of the two surfaces do not interlock. Two rough surfaces, when pressed against each other, can get into a formation where the summits of asperities of one surface lock into the valleys of the other surface. The amount of interlock depends on the geometry of the two surfaces. We suggest the friction force may then be proportional to the amount of interlock; this explains superlubricity as the situation where there is little interlock. Then the friction force will be directly proportional to the normal force, as it is related to the work necessary to lift the upper surface in order to clear the interlock. To investigate this model, we simulate the contact of two surfaces. In order to validate our model, we first investigate Amontons' law. Assuming that asperities retain their deformations on the time scale over which the top asperity moves across the lattice spacing, Amontons' law is observed. Structural superlubricity is examined under the hypothesis that the surfaces are very rigid and there is no deformation of the asperities; this may happen at small normal forces. When two identical surfaces come into contact and the top surface is rotated, we observe a peak in the friction force near the orientation angle at which the two surfaces can interlock.
Keywords: friction, Amontons' law, superlubricity, contact model
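A toy numerical sketch of the interlock idea (the surface statistics, clearance, and interlock measure are all assumptions, not the paper's simulation): generate two rigid rough 1D profiles, slide the top one laterally, and take the interlock as the fraction of sites that are nearly in contact; in this picture friction is proportional to that interlock times the normal force, and near-zero interlock corresponds to the superlubric state.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 512
bottom = rng.normal(0.0, 1.0, n)          # asperity heights, bottom surface
top = rng.normal(0.0, 1.0, n)             # asperity depths, underside of top surface

def interlock(shift, clearance=0.25):
    """Fraction of sites nearly in contact when the top surface is shifted."""
    t = np.roll(top, shift)
    separation = (bottom + t).max()        # rigid top rests on the highest contact
    gap = separation - (bottom + t)        # >= 0 everywhere
    return float(np.mean(gap < clearance))

values = [interlock(s) for s in range(32)]
print("max interlock:", max(values), " min interlock:", min(values))
# Friction ~ mu_eff * N with mu_eff proportional to the interlock measure.
```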
Procedia PDF Downloads 147

18303 Management of Pressure Ulcer with a Locally Constructed Negative Pressure Device (NPD) in Traumatic Paraplegia Patients: A Randomized Controlled Clinical Trial
Authors: Mukesh K. Dwivedi, Rajeshwar N. Srivastava, Amit K. Bhagat, Saloni Raj
Abstract:
Introduction: Management of pressure ulcers (PU) is an ongoing clinical challenge, particularly in traumatic paraplegia patients in developing countries, where socio-economic conditions often dictate treatment modalities. When negative pressure wound therapy (NPWT) was introduced, a series of devices (V.A.C., KCI, San Antonio, TX) were manufactured. These devices for NPWT are costly and hard for patients in developing countries like India to afford. Considering this limitation, this study was planned as an RCT to compare NPWT delivered by an indigenized, locally constructed NPD with conventional gauze dressing for the treatment of PU. Material and Methods: This RCT (CTRI/2014/09/0050) was conducted in the Department of Orthopaedic Surgery at King George's Medical University (KGMU), India. Thirty-four (34) subjects with traumatic paraplegia having PU of stage 3 or 4 were enrolled and randomized into two treatment groups (NPWT group and conventional dressing group). The outcome measures of this study were the surface area and depth of the PU, exudates, microorganisms, and matrix metalloproteinase-8 (MMP-8) during 0 to 9 weeks of follow-up. Levels of MMP-8 were analyzed in the PU tissues at weeks 0, 3, 6, and 9 by enzyme-linked immunosorbent assay (ELISA). Results: A significantly reduced PU length in the NPWT group was observed at week 6 (p=0.04), which reduced further at week 9 (p=0.001) compared to the conventionally treated group. Similarly, a significant reduction in PU width and depth was observed in the NPWT group at week 9 (p<0.05). The exudate became significantly (p=0.001) lower in the NPWT group compared with the conventionally treated group from the 6th to the 9th week. Clearance and conversion of slough into red granulation tissue were significantly higher in the NPWT group (p=0.001). At week 9, the wound culture was negative in all the subjects of the NPWT group, while it was positive in 10 (41.6%) subjects of the conventional group. A significantly lower level of MMP-8 was observed in subjects of the NPWT group at week 6 (p=0.006), with continued further reduction at week 9 (p<0.0001) compared to the conventional group. Conclusion: NPWT with a locally constructed NPD is a better wound care procedure for the management of PU. Our device gave results similar to commercially available devices. A reduction in the level of MMP-8 and an increased rate of healing were achieved by negative pressure wound therapy (NPWT) compared to conventional dressing.
Keywords: NPWT, NPD, MMP8, ELISA
Procedia PDF Downloads 253

18302 Core-Shell Nanofibers for Prevention of Postsurgical Adhesion
Authors: Jyh-Ping Chen, Chia-Lin Sheu
Abstract:
In this study, we propose to use electrospinning to fabricate porous nanofibrous membranes as postsurgical anti-adhesion barriers and to improve the properties of current postsurgical anti-adhesion products. We propose to combine FDA-approved biomaterials with anti-adhesion properties, namely polycaprolactone (PCL), polyethylene glycol (PEG), and hyaluronic acid (HA), with silver nanoparticles (Ag) and ibuprofen (IBU) to produce anti-adhesion barrier nanofibrous membranes. For this purpose, PEG/PCL/Ag/HA/IBU core-shell nanofibers were prepared. The shell layer contains PEG + PCL to provide mechanical support, and Ag was added to the outer PEG-PCL shell layer during electrospinning to endow the nanofibrous membrane with antibacterial properties. The core contains HA and IBU to exert anti-adhesion and anti-inflammation effects, respectively. The nanofibrous structure of the membranes can reduce cell penetration while allowing nutrient and waste transport to prevent postsurgical adhesion. Nanofibers with different core/shell thickness ratios were prepared. The nanofibrous membranes were first characterized for their physico-chemical properties in detail, followed by in vitro cell culture studies of cell attachment and proliferation. The HA released from the core region showed extended release up to 21 days for prolonged anti-adhesion effects. The attachment of adhesion-forming fibroblasts to the nanofibrous membrane is reduced, as shown by DNA assays and confocal microscopic observation of the expression of the adhesion protein vinculin. The Ag released from the shell showed burst release to prevent E. coli and S. aureus infection immediately and to prevent bacterial resistance to Ag. Minimal cytotoxicity was observed from Ag and IBU when fibroblasts were cultured with the extraction medium of the nanofibrous membranes. The peritendinous anti-adhesion model in rabbits and the peritoneal anti-adhesion model in rats were used to test the efficacy of the anti-adhesion barriers as determined by gross observation, histology, and biomechanical tests. Among all membranes, the PEG/PCL/Ag/HA/IBU core-shell nanofibers showed the best reduction in cell attachment and proliferation when tested with fibroblasts in vitro. The PEG/PCL/Ag/HA/IBU nanofibrous membranes also showed significant improvement in preventing both peritendinous and peritoneal adhesions when compared with other groups and a commercial adhesion barrier film.
Keywords: anti-adhesion, electrospinning, hyaluronic acid, ibuprofen, nanofibers
Procedia PDF Downloads 181

18301 Model-Based Global Maximum Power Point Tracking at Photovoltaic String under Partial Shading Conditions Using Multi-Input Interleaved Boost DC-DC Converter
Authors: Seyed Hossein Hosseini, Seyed Majid Hashemzadeh
Abstract:
Solar energy is one of the remarkable renewable energy sources, with particular characteristics such as being unlimited, non-polluting, and freely accessible. Generally, solar energy can be used in thermal and photovoltaic (PV) forms. The cost of installation of a PV system is very high. Additionally, due to its dependence on environmental conditions such as solar radiation and ambient temperature, the electrical power generation of this system is unpredictable, and without power electronics devices there is no guarantee of maximum power delivery at its output. Maximum power point tracking (MPPT) should be used to achieve the maximum power of a PV string. MPPT is one of the essential parts of the PV system; without it, it would be impossible to reach the maximum PV string power, and high losses are caused in the PV system. One of the noticeable challenges in the MPPT problem is partial shading conditions (PSC). Under PSC, the output photocurrent of the PV module under the shadow is less than the PV string current. The difference between the mentioned currents passes through the module's internal parallel resistance and creates a large negative voltage across the shaded modules. This significant negative voltage damages the PV module under the shadow. This condition is called the hot-spot phenomenon. An anti-paralleled diode is inserted across the PV module to prevent this phenomenon; this diode is known as the bypass diode. Due to the behaviour of the bypass diodes under PSC, the P-V curve of the PV string has several peaks. The peak of the P-V curve that yields the maximum available power is the global peak. Model-based global MPPT (GMPPT) methods can estimate the optimal point with higher speed than other GMPPT approaches. Centralized, modular, and interleaved DC-DC converter topologies are the significant structures that can be used for GMPPT at a PV string. There are some problems in the centralized structure, such as current mismatch losses in the PV string, loss of power from the shaded modules because they are bypassed by the bypass diodes under PSC, and the need for a series connection of many PV modules to reach the desired voltage level. In the modular structure, each PV module is connected to a DC-DC converter. In this structure, as the power demanded from the PV string increases, the number of DC-DC converters used in the PV system increases. As a result, the cost of the modular structure is very high. We can implement the model-based GMPPT through the multi-input interleaved boost DC-DC converter to increase the power extraction from the PV string and reduce hot-spot and current mismatch errors in a PV string under different environmental conditions and variable load circumstances. The interleaved boost DC-DC converter has many advantages over the other mentioned structures, such as high reliability and efficiency, better regulation of the DC voltage at the DC link, overcoming notable problems such as module current mismatch and the hot-spot phenomenon, and reduced voltage stress on the power switches.
Keywords: solar energy, photovoltaic systems, interleaved boost converter, maximum power point tracking, model-based method, partial shading conditions
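The multi-peak P-V curve and global peak described above can be illustrated with a simplified single-diode string model (no series/shunt resistance, ideal bypass diodes clamping shaded modules at about -0.7 V); all module parameters and irradiance levels are assumed values, not the paper's model.

```python
import numpy as np

VT = 26e-3 * 1.3 * 60          # thermal voltage * ideality * cells per module
I0 = 1e-9                      # diode saturation current (A)
IRRADIANCE = [1.0, 1.0, 0.4]   # per-module irradiance (1.0 = full sun)
IPH_STC = 8.0                  # photocurrent at full sun (A)

def module_voltage(i, g):
    """Module voltage at string current i for relative irradiance g."""
    iph = IPH_STC * g
    if i < iph:                              # module can supply the current
        return VT * np.log((iph - i) / I0 + 1.0)
    return -0.7                              # bypass diode conducts

currents = np.linspace(0.0, IPH_STC, 2000)
voltages = np.array([sum(module_voltage(i, g) for g in IRRADIANCE)
                     for i in currents])
power = currents * voltages                  # P-V (and P-I) curve with two peaks

k = int(np.argmax(power))                    # scan for the global MPP
print(f"global MPP: {power[k]:.1f} W at {voltages[k]:.1f} V, {currents[k]:.2f} A")
```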
Procedia PDF Downloads 130

18300 Simultaneous versus Sequential Model in Foreign Entry
Authors: Patricia Heredia, Isabel Saz, Marta Fernández
Abstract:
This article proposes that the decision regarding exporting and the choice of export channel are nested and non-independent decisions. We assume that firms make two sequential decisions before arriving at their final choice: the decision to access foreign markets and the decision about the type of channel. This hierarchical perspective of the choices involved in the process is appealing for two reasons. First, it supports the idea that people have a limited analytical capacity; managers often break down a complex decision into a hierarchical process because this makes it more manageable. Secondly, it recognizes that important differences exist between entry modes. In light of the above, the objective of this study is to test different entry mode choice processes: independent decisions versus nested and non-independent decisions. To do this, the methodology estimates and compares the following two models: (i) a simultaneous single-stage model with three entry mode choices (using a multinomial logit model); (ii) a two-stage model with the export decision preceding the channel decision, using a sequential logit model. The study uses resource-based factors in determining these decision processes concerning internationalization and carries out an empirical analysis using a DOC Rioja sample of 177 firms. Using the Akaike and Schwarz information criteria, the empirical evidence supports the existence of a nested structure, where the decision about exporting precedes the export mode decision. The implications and contributions of the findings are discussed.
Keywords: sequential logit model, two-stage choice process, export mode, wine industry
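A minimal sketch of the two-stage (sequential) structure on synthetic firm data with scikit-learn logistic regressions, not the paper's DOC Rioja estimates: stage 1 models the decision to export; stage 2 models the channel choice and is estimated only on the exporting firms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                 # firm resources / capabilities (assumed)
exports = (X @ [1.0, 0.5, 0.0] + rng.logistic(size=n) > 0).astype(int)
channel = (X @ [0.2, 1.0, -0.8] + rng.logistic(size=n) > 0).astype(int)  # 1 = direct

stage1 = LogisticRegression().fit(X, exports)                      # export yes/no
stage2 = LogisticRegression().fit(X[exports == 1], channel[exports == 1])

# Unconditional probability of "export through the direct channel"
p_export = stage1.predict_proba(X)[:, 1]
p_direct_given_export = stage2.predict_proba(X)[:, 1]
print("mean P(export & direct channel):",
      float(np.mean(p_export * p_direct_given_export)))
```

The single-stage alternative would instead fit one multinomial logit over the three final outcomes; comparing the two fits by information criteria mirrors the test described above.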
Procedia PDF Downloads 30

18299 Hydraulic Analysis of Irrigation Approach Channel Using HEC-RAS Model
Authors: Muluegziabher Semagne Mekonnen
Abstract:
This study was intended to determine the irrigation water requirements and to evaluate the canal hydraulics under steady-state conditions to improve the scheme performance of the Meki-Ziway irrigation project. The methodology used was the CROPWAT 8.0 model to estimate the irrigation water requirements of five major crops irrigated in the study area. The results showed that for the whole existing and potential irrigation development areas of 2000 ha and 2599 ha, crop water requirements were 3,339,200 and 4,339,090.4 m³, respectively. Hydraulic simulation models are fundamental tools for understanding the hydraulic flow characteristics of irrigation systems. In this study, a hydraulic analysis of the irrigation canals using the HEC-RAS model was conducted for the Meki-Ziway irrigation scheme. The HEC-RAS model was tested in terms of error estimation and used to determine the canal capacity potential.
Keywords: HEC-RAS, irrigation, hydraulics, canal reach, capacity
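Two back-of-the-envelope checks of the kind this analysis rests on, using assumed canal geometry and crop figures rather than the Meki-Ziway design data: Manning's equation for the capacity of a trapezoidal canal, and the gross irrigation volume implied by a crop water requirement (HEC-RAS and CROPWAT perform far more detailed versions of these computations).

```python
from math import sqrt

def manning_capacity(b, y, z, n, S):
    """Discharge (m3/s) of a trapezoidal canal: bottom width b (m), flow depth
    y (m), side slope z (H:V), Manning roughness n, bed slope S."""
    A = (b + z * y) * y                        # flow area
    P = b + 2 * y * sqrt(1 + z * z)            # wetted perimeter
    R = A / P                                  # hydraulic radius
    return (1.0 / n) * A * R ** (2.0 / 3.0) * sqrt(S)

Q = manning_capacity(b=1.5, y=1.0, z=1.5, n=0.025, S=0.0005)
print(f"canal capacity ~ {Q:.2f} m3/s")

# Gross seasonal volume for 2000 ha at an assumed 450 mm net requirement
# and 60% overall efficiency.
area_m2, net_mm, eff = 2000 * 1e4, 450.0, 0.60
volume_m3 = area_m2 * (net_mm / 1000.0) / eff
print(f"gross irrigation volume ~ {volume_m3:,.0f} m3")
```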
Procedia PDF Downloads 60

18298 Optimal Hedging of a Portfolio of European Options in an Extended Binomial Model under Proportional Transaction Costs
Authors: Norm Josephy, Lucy Kimball, Victoria Steblovskaya
Abstract:
Hedging of a portfolio of European options under proportional transaction costs is considered. Our discrete-time financial market model extends the binomial market model with transaction costs to the case where the underlying stock price ratios are distributed over a bounded interval rather than over a two-point set. An optimal hedging strategy is chosen from a set of admissible non-self-financing hedging strategies. Our approach to optimal hedging of a portfolio of options is based on a theoretical foundation that includes the determination of a no-arbitrage option price interval as well as properties of the non-self-financing strategies and their residuals. A computational algorithm for optimizing an investor-relevant criterion over the set of admissible non-self-financing hedging strategies is developed. Applicability of our approach is demonstrated using both simulated data and real market data.
Keywords: extended binomial model, non-self-financing hedging, optimization, proportional transaction costs
Procedia PDF Downloads 252

18297 Proposing a Failure Criterion for Cohesionless Media Considering Cyclic Fabric Anisotropy
Authors: Ali Noorzad, Ehsan Badakhshan, Shima Zameni
Abstract:
The present paper is focused on a generalized failure criterion for geomaterials with cross-anisotropy. The cyclic behavior of granular material primarily depends on the nature and arrangement of the constituent particles and on particle size and shape, which affect fabric anisotropy. To account for the influence of loading directions on strength variations, an anisotropic variable expressed in terms of the invariants of the stress and fabric tensors is introduced into the failure criterion. In an extension of the original CANAsand constitutive model, two concepts, namely the critical state and the compact state, play paramount roles, as all of the moduli and coefficients are related to these states. The applicability of the present model is evaluated through comparisons between the predicted and the measured results. All simulations have demonstrated that the proposed constitutive model is capable of modeling the cyclic behavior of sand with inherent anisotropy.
Keywords: fabric, cohesionless media, cyclic loading, critical state, compact state, CANAsand constitutive model
Procedia PDF Downloads 219

18296 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block nv32, Shenvsi Oilfield, China
Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan
Abstract:
The significant effect of CO₂ on global climate and the environment has gained increasing concern worldwide. Enhanced oil recovery (EOR) associated with sequestration of CO₂, particularly into depleted oil reservoirs, is considered a viable approach under financial limitations, since it improves oil recovery from the existing reservoir and strengthens the link between global-scale CO₂ capture and geological sequestration. Consequently, practical measures are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model to estimate the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in the block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs from twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct a 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in the block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity and permeability, as well as lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone, with proportions of 38.09%, 32.42%, and 29.49%, respectively. Well log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios. These estimated values of porosity, permeability, lithofacies, and net-to-gross were up-scaled and distributed laterally using Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate 3D reservoir geological models. The reservoir geological models show lateral heterogeneities of the reservoir properties and lithofacies, and the best reservoir rocks exist in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368
Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32
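For orientation, the theoretical storage capacity computed from such a model follows the standard volumetric screening relation M = A·h·φ·(1−Sw)·ρ(CO₂)·E; the sketch below uses assumed placeholder inputs, not the block Nv32 model values.

```python
def co2_storage_capacity(area_km2, net_thickness_m, porosity,
                         water_saturation, rho_co2=650.0, efficiency=0.04):
    """Theoretical CO2 mass storage capacity in megatonnes.

    rho_co2: CO2 density at reservoir conditions (kg/m3, assumed)
    efficiency: storage efficiency factor E (fraction of pore volume, assumed)
    """
    area_m2 = area_km2 * 1e6
    pore_volume = area_m2 * net_thickness_m * porosity * (1.0 - water_saturation)
    return pore_volume * rho_co2 * efficiency / 1e9   # kg -> Mt

mt = co2_storage_capacity(area_km2=25.0, net_thickness_m=40.0,
                          porosity=0.18, water_saturation=0.45)
print(f"theoretical storage capacity ~ {mt:.1f} Mt CO2")
```

In the workflow described above, this calculation is carried out cell by cell on the SGS/SIS property realizations, so the capacity estimate inherits the geological uncertainty of the model.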
Procedia PDF Downloads 179

18295 An Overview of Domain Models of Urban Quantitative Analysis
Authors: Mohan Li
Abstract:
Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research, and this proportion will greatly increase in the next few decades. Frequently, such analysis work cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, building rational models, feeding them reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning. During the whole work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data and how to manage and process it have become abilities that more and more planners and urban researchers need to possess. This paper summarizes and makes predictions about the emergence of technologies and technological iterations that may affect urban research in the future, help discover urban problems, and support targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamics domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. These seven models make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.
Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design
Procedia PDF Downloads 177

18294 Spatial Analysis of the Socio-Environmental Vulnerability in Medium-Sized Cities: Case Study of Municipality of Caraguatatuba SP-Brazil
Authors: Katia C. Bortoletto, Maria Isabel C. de Freitas, Rodrigo B. N. de Oliveira
Abstract:
Environmental vulnerability studies are essential for prioritizing actions for disaster risk reduction. The aim of this study is to analyze the socio-environmental vulnerability obtained through a census survey, followed by both a statistical analysis (PCA/SPSS/IBM) and a spatial analysis by GIS (ArcGIS/ESRI), taking as a case study the Municipality of Caraguatatuba-SP, Brazil. In the analysis of the municipal development plan, emphasis was given to the Special Zone of Social Interest (ZEIS), the Urban Expansion Zone (ZEU), and the Environmental Protection Zone (ZPA). For the mapping of the social and environmental vulnerabilities of the study area, the exposure of people (criticality) and of the place (support capacity) to disaster risk was obtained from the 2010 Census of the Brazilian Institute of Geography and Statistics (IBGE). Considering criticality, the variables of greatest influence were related to literate persons responsible for the household and literate persons aged 5 years or more; persons aged 60 years or more; and the income of the person responsible for the household. In the support capacity analysis, the predominant influences were good household infrastructure in districts with low population density and the presence of neighborhoods with little urban infrastructure and inadequate housing. The results of the comparative analysis show that the areas in the high and very high vulnerability classes cover the ZEIS and ZPA classes, whose zoning includes areas occupied by a low-income population, the presence of children and young people, irregular occupations, and land suitable for urbanization but underutilized. The presence of urban expansion zones (ZEU) in areas of high to very high socio-environmental vulnerability reflects the inadequate use of urban land in relation to the spatial distribution of the population and the territorial infrastructure, which favors the increase of disaster risk. It can be concluded that the study allowed the convergence between the vulnerability analysis and the areas classified in the urban zoning to be observed. The occupation of areas unsuitable for housing due to their risk characteristics was confirmed, leading to the conclusion that the applied methodologies are agile instruments to support disaster risk reduction actions.
Keywords: socio-environmental vulnerability, urban zoning, disaster risk reduction, methodologies
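A minimal sketch of the statistical step only, on synthetic census-like indicators rather than the IBGE 2010 variables: standardise the indicators, run PCA, and use the leading component scores as a relative vulnerability index that can then be joined to census sectors in a GIS.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# columns (assumed): % literate household heads, % elderly, income, % adequate infrastructure
indicators = rng.normal(size=(200, 4))

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(indicators))
vulnerability = (scores[:, 0] - scores[:, 0].min()) / np.ptp(scores[:, 0])  # scaled 0..1

print("sectors in the highest-vulnerability quintile:",
      int(np.sum(vulnerability > np.quantile(vulnerability, 0.8))))
```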
Procedia PDF Downloads 298

18293 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory
Authors: Chiung-Hui Chen
Abstract:
The Internet of Things (IoT) was designed for widespread convenience. With smart tags and sensing networks, a large quantity of dynamic information is immediately available in the IoT. Through internal communication and interaction, meaningful objects provide real-time services for users. Therefore, providing services with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed an environment model to record the time sequences and locations of different behaviors and adopted the probability module of the hierarchical Hidden Markov Model for the inference. The statistical analysis was conducted to achieve the following objectives: First, define user behaviors and predict the user behavior routes with the environment model to analyze user purposes. Second, construct the hierarchical Hidden Markov Model according to the logic framework, and establish the sequential intensity among behaviors to characterise the use and activity fabric of the intelligent environment. Third, establish the intensity of the relation between objects and the probability of their being used; this indicator can describe the possible limitations of the mechanism. As the process is recorded in the information of the system created in this study, these data can be reused to adjust the procedure of intelligent design services.
Keywords: behavior, big data, hierarchical hidden Markov model, intelligent object
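As an illustration of the probabilistic machinery, the sketch below runs the forward pass of a flat HMM in NumPy with assumed transition and emission matrices (hidden states as activities, observations as sensed object categories); the hierarchical HMM used in the study adds nested state layers on top of this recursion.

```python
import numpy as np

A = np.array([[0.7, 0.2, 0.1],     # activity transition probabilities (assumed)
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.6, 0.3, 0.1],     # P(observed object | activity) (assumed)
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])
pi = np.array([0.5, 0.3, 0.2])     # initial activity distribution
obs = [0, 1, 1, 2, 0]              # sequence of sensed object categories

alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]  # forward recursion
print("P(observation sequence) =", alpha.sum())
print("most likely current activity:", int(np.argmax(alpha)))
```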
Procedia PDF Downloads 233

18292 Leverage Effect for Volatility with Generalized Laplace Error
Authors: Farrukh Javed, Krzysztof Podgórski
Abstract:
We propose a new model that accounts for the asymmetric response of volatility to positive ('good news') and negative ('bad news') shocks in economic time series, the so-called leverage effect. In the past, asymmetric powers of errors in conditionally heteroskedastic models have been used to capture this effect. Our model uses the gamma-difference representation of the generalized Laplace distributions, which efficiently models the asymmetry. It has one additional natural parameter, the shape, which is used instead of the power in the asymmetric power models to capture the strength of a long-lasting effect of shocks. Some fundamental properties of the model are provided, including the formula for covariances and an explicit form for the conditional distribution of the 'bad' and 'good' news processes given the past, a property that is important for the statistical fitting of the model. Relevant features of volatility models are illustrated using S&P 500 historical data.
Keywords: heavy tails, volatility clustering, generalized asymmetric Laplace distribution, leverage effect, conditional heteroskedasticity, asymmetric power volatility, GARCH models
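A toy simulation of the leverage effect with a GJR-GARCH-style recursion and Gaussian innovations is sketched below; the paper's model instead uses generalized Laplace (gamma-difference) errors with a shape parameter, which is not reproduced here, and all coefficients are assumed.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000
omega, alpha, gamma, beta = 5e-6, 0.05, 0.10, 0.88   # assumed coefficients

r = np.zeros(T)
sigma2 = np.full(T, omega / (1 - alpha - gamma / 2 - beta))   # unconditional variance
for t in range(1, T):
    shock = r[t - 1]
    sigma2[t] = (omega + alpha * shock**2
                 + gamma * shock**2 * (shock < 0)   # extra response to bad news
                 + beta * sigma2[t - 1])
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Leverage effect: variance tends to be higher after negative returns.
after_neg = sigma2[1:][r[:-1] < 0].mean()
after_pos = sigma2[1:][r[:-1] > 0].mean()
print(f"mean variance after bad news {after_neg:.2e} vs good news {after_pos:.2e}")
```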
Procedia PDF Downloads 386

18291 Analysis and Optimized Design of a Packaged Liquid Chiller
Authors: Saeed Farivar, Mohsen Kahrom
Abstract:
The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the model outputs include system performance variables such as power consumption, coefficient of performance (COP), and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for optimizing design and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model takes into account the presence of all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve, and connecting pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow, and thermodynamic processes in each of the mentioned components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller is used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition is built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance, optimization, and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient design and analysis of compression chillers can take advantage of the presented study and its results.
Keywords: optimization, packaged liquid chiller, performance, simulation
Procedia PDF Downloads 278

18290 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model
Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok
Abstract:
The brain’s functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting state connectivity patterns hence provides opportunities for the identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections will be useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation. However, the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built based on these relationships to link up all the ROIs. The result is a high level of agreement between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size, an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module for other mathematical models.
Keywords: functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity
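A minimal sketch of the hub-regression idea on synthetic time series (not the 92-subject data, and not the paper's hub selection): regress every ROI on a few hub ROIs, rebuild a factor-style model covariance from the fitted loadings plus diagonal residual variance, and compare model and empirical correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_rois, n_hubs = 300, 60, 5
hubs = rng.normal(size=(T, n_hubs))
loadings_true = rng.normal(size=(n_hubs, n_rois))
rois = hubs @ loadings_true + 0.5 * rng.normal(size=(T, n_rois))

# Multivariate regression of all ROIs on the hubs (least squares).
beta, *_ = np.linalg.lstsq(hubs, rois, rcond=None)
resid = rois - hubs @ beta

# Model covariance: B' Cov(hubs) B + diag(residual variances)
model_cov = beta.T @ np.cov(hubs, rowvar=False) @ beta + np.diag(resid.var(axis=0))
model_corr = model_cov / np.sqrt(np.outer(np.diag(model_cov), np.diag(model_cov)))
emp_corr = np.corrcoef(rois, rowvar=False)

mask = ~np.eye(n_rois, dtype=bool)
print("model vs empirical off-diagonal correlation match:",
      round(float(np.corrcoef(model_corr[mask], emp_corr[mask])[0, 1]), 3))
```

The dimension reduction in the abstract corresponds to describing the n_rois x n_rois structure through the much smaller hub loadings and residual variances.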
Procedia PDF Downloads 151

18289 From the Sharing Economy to Social Manufacturing: Analyzing Collaborative Service Networks in the Manufacturing Domain
Authors: Babak Mohajeri
Abstract:
In recent years, the conventional business model of ownership has shifted towards accessibility in a variety of markets. Two trends can be observed in the evolution of this rental-like business model. The first is the technological development that enables the emergence of new business models, which are increasingly agile and flexible. For example, Spotify, an online music streaming company, provides consumers access to millions of music tracks, conveniently through a smartphone, tablet, or computer. Similarly, Car2Go, the car sharing company, provides its members with flexible access to nearby shared cars. The second trend is the increasing communication and connection via social networks. This trend enables a shift to peer-to-peer accessibility-based business models. Conventionally, companies provide their customers with access to the company's own products or services. In the peer-to-peer model, nonetheless, companies facilitate access and connections across their customers so they can use property, skills, competencies, or services owned by other customers. This is the so-called sharing economy business model. The aim of this study is to investigate a new and emerging type of sharing economy model in which the roles of customers and service providers may dramatically change. This new model is called Collaborative Service Networks. We propose a mechanism for the Collaborative Service Networks business model. Uber and Airbnb, two successful growing companies, have been selected for our case studies, and their business models are analyzed. Finally, we study the emergence of collaborative service networks in the manufacturing domain. Our findings point to a new manufacturing paradigm called social manufacturing.
Keywords: sharing economy, collaborative service networks, social manufacturing, manufacturing development
Procedia PDF Downloads 317

18288 Volatility Model with Markov Regime Switching to Forecast Baht/USD
Authors: Nop Sopipan
Abstract:
In this paper, we forecast the volatility of the Baht/USD exchange rate using Markov Regime Switching GARCH (MRS-GARCH) models. These models allow volatility to have different dynamics according to unobserved regime variables. The main purpose of this paper is to find out whether MRS-GARCH models are an improvement on GARCH-type models in terms of modeling and forecasting Baht/USD volatility. The MRS-GARCH model performs best for Baht/USD volatility in the short term, but the GARCH model performs best over the long term.
Keywords: volatility, Markov Regime Switching, forecasting, Baht/USD
Procedia PDF Downloads 302

18287 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees
Authors: M. Eskandarighadi, C. R. McGann
Abstract:
It is observed from past experiences of earthquakes that local site conditions can significantly affect the strong ground motion characteristics experienced at the site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and extends infinitely in the horizontal direction; therefore, tying the side boundaries together is one way to model this behavior, as wave passage is assumed to be only vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, and 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of the mentioned factors to better understand local site effects on strong ground motions. Two-dimensional wave propagation, and the fact that the soil profile on the two sides of the model may not be identical, clarify the importance of a boundary condition on each side that can minimize unwanted reflections from the edges of the model and input appropriate loading conditions. Ideally, the model size should be sufficiently large to minimize wave reflection; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions that take into account the free-field motion that would exist far from the model domain and apply this to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and the details and limitations of the developed free-field boundary modeling approach are discussed.
Keywords: boundary condition, free-field, OpenSees, site response analysis, wave propagation
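For context, the simpler tied-boundary baseline mentioned above can be sketched in openseespy: a 2D soil column of quad elements whose left and right nodes at each elevation are tied with equalDOF, so only vertically propagating waves are represented. The free-field boundary developed in the paper instead couples the main domain to separate outside soil columns, which is not shown here; all material values below are assumed (kN-m-Mg units).

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)

nz, dx, dz = 10, 1.0, 1.0                               # 10 elements, 1 m squares
ops.nDMaterial('ElasticIsotropic', 1, 6.0e4, 0.3, 1.8)  # E (kPa), nu, rho (Mg/m3)

for k in range(nz + 1):
    ops.node(2 * k + 1, 0.0, k * dz)                    # left side nodes
    ops.node(2 * k + 2, dx,  k * dz)                    # right side nodes

ops.fix(1, 1, 1)                                        # fixed base
ops.fix(2, 1, 1)

for k in range(1, nz + 1):
    ops.equalDOF(2 * k + 1, 2 * k + 2, 1, 2)            # tie same-elevation sides

for k in range(nz):
    n1, n2 = 2 * k + 1, 2 * k + 2                       # counter-clockwise nodes
    n3, n4 = 2 * (k + 1) + 2, 2 * (k + 1) + 1
    ops.element('quad', k + 1, n1, n2, n3, n4, 1.0, 'PlaneStrain', 1)

print('model built:', len(ops.getNodeTags()), 'nodes,',
      len(ops.getEleTags()), 'elements')
```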
Procedia PDF Downloads 158