Search results for: parallel series combinations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4351


3961 Modelling Structural Breaks in Stock Price Time Series Using Stochastic Differential Equations

Authors: Daniil Karzanov

Abstract:

This paper studies the effect of quarterly earnings reports on the stock price. The profitability of the stock is modeled by geometric Brownian diffusion and the Constant Elasticity of Variance model. We fit several variations of stochastic differential equations to the pre- and post-report periods using Maximum Likelihood Estimation and a grid search over parameters. By examining the change in the model parameters after a report's publication, the study reveals sufficient evidence that the reports constitute a structural breakpoint, meaning that none of the fitted forecast models remain applicable and all should be refitted shortly after publication.
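As a rough illustration of the estimation step described above (a minimal sketch, not the paper's actual implementation): for geometric Brownian motion the log-returns are i.i.d. Gaussian, so maximum likelihood estimation of drift and volatility reduces to sample moments, and refitting the two windows around a simulated break exposes the parameter shift. All numbers (drift 0.05, volatilities 0.2 and 0.4, 252 trading days) are hypothetical.

```python
import numpy as np

def fit_gbm(prices, dt=1.0):
    """MLE of GBM drift and volatility from a price path.

    Log-returns of GBM are i.i.d. N((mu - sigma^2/2)*dt, sigma^2*dt),
    so the MLE reduces to the sample mean/variance of log-returns.
    """
    r = np.diff(np.log(prices))
    sigma = np.sqrt(r.var() / dt)
    mu = r.mean() / dt + 0.5 * sigma**2
    return mu, sigma

rng = np.random.default_rng(0)
# Simulate a path with a structural break: volatility doubles mid-way.
n, dt = 500, 1 / 252
w1 = np.exp((0.05 - 0.5 * 0.2**2) * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(n))
w2 = np.exp((0.05 - 0.5 * 0.4**2) * dt + 0.4 * np.sqrt(dt) * rng.standard_normal(n))
path = 100 * np.cumprod(np.concatenate([w1, w2]))

mu_pre, sig_pre = fit_gbm(path[: n + 1], dt)
mu_post, sig_post = fit_gbm(path[n:], dt)
print(sig_pre, sig_post)  # post-break volatility estimate is roughly double
```

Comparing the two windowed fits is the simplest form of the break test: if the refitted parameters differ by more than their sampling error, the pre-report model is no longer usable for forecasting.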

Keywords: stock market, earnings reports, financial time series, structural breaks, stochastic differential equations

Procedia PDF Downloads 192
3960 Development of Antimicrobial Properties Nutraceuticals: Gummy Candies with Addition of Bovine Colostrum, Essential Oils and Probiotics

Authors: E. Bartkiene, M. Ruzauskas, V. Lele, P. Zavistanaviciute, J. Bernatoniene, V. Jakstas, L. Ivanauskas, D. Zadeike, D. Klupsaite, P. Viskelis, J. Bendoraitiene, V. Navikaite-Snipaitiene, G. Juodeikiene

Abstract:

In this study, antimicrobial nutraceuticals (gummy candies, GC) were developed from bovine colostrum (BC), essential oils (EOs), probiotic lactic acid bacteria (PLAB), and their combinations. A heteropolysaccharide (agar) was used for antimicrobial GC preparation. The antimicrobial properties of the EOs (Eugenia caryophyllata, Thymus vulgaris, Citrus reticulata L., Citrus paradisi L.), BC, L. paracasei LUHS244, L. plantarum LUHS135, and their combinations were evaluated against pathogenic bacterial strains (Streptococcus mutans, Enterococcus faecalis, Staphylococcus aureus, Salmonella enterica, Escherichia coli, Proteus mirabilis, and Pseudomonas aeruginosa). The strongest antimicrobial properties were established for the EOs of Eugenia caryophyllata and Thymus vulgaris. The optimal ingredient composition for antimicrobial GC preparation was established: BC fermented with L. paracasei LUHS244 in combination with Thymus vulgaris or Eugenia caryophyllata. These ingredients showed high inhibition of all tested pathogenic strains (except Pseudomonas aeruginosa). The antimicrobial GC formula consisted of thyme EO (up to 0.2%) and fermented BC (up to 3%); for taste masking, mandarin or grapefruit EOs (up to 0.2%) were used. The developed GC showed high overall acceptability and antimicrobial properties; thus, antimicrobial GC could be a preferred form of nutraceuticals. This study was fulfilled with the support of the LSMU-KTU joint project.

Keywords: antimicrobial activity, bovine colostrum, essential oil, gummy candy, probiotic

Procedia PDF Downloads 162
3959 Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future

Authors: Gabriel Wainer

Abstract:

Modeling and Simulation (M&S) methods have been used to better analyze the behavior of complex physical systems, and it is now common to use simulation as a part of the scientific and technological discovery process. M&S advanced thanks to improvements in computer technology, which, in many cases, resulted in the development of simulation software using ad-hoc techniques. Formal M&S appeared in order to improve the development of very complex simulation systems. Some of these techniques proved successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition and enhancing application development, reducing costs and favoring reuse. The DEVS formalism is one of these techniques, which proved successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework. The independence of M&S tasks made it possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce DEVS origins and general ideas, and compare it with some of these techniques. We will then show the current status of DEVS M&S, and we will discuss a technological perspective to solve current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show some examples of the current use of DEVS, including applications in different fields. We will finally present current open topics in the area, which include advanced methods for centralized, parallel or distributed simulation, the need for real-time modeling techniques, and our view of these fields.

Keywords: modeling and simulation, discrete-event simulation, hybrid systems modeling, parallel and distributed simulation

Procedia PDF Downloads 311
3958 Environmental Effects on Coconut Coir Fiber Epoxy Composites Having TiO₂ as Filler

Authors: Srikanth Korla, Mahesh Sharnangat

Abstract:

Composite materials are widely used in aerospace, naval, defence and other branches of engineering applications. The study of natural fibers is another emerging research area, as they are available in abundance and eco-friendly in nature. With India being one of the major producers of coir, there is always scope to study the possibilities of exploring coir as reinforcement, in different combinations with the other elements of a composite. In the present investigation, an effort is made to utilize the properties possessed by natural fibers by combining them with a polymer/epoxy resin. Coconut coir is used as the natural reinforcement fiber in epoxy resin, with varying weight percentages of fiber and filler material. Titanium dioxide powder (TiO₂) is used as the filler material; weight percentages of 0%, 2% and 4% are considered for experimentation. Environmental effects on the performance of the composite plate are also studied and presented in this work: a moisture absorption test for the composite specimens is conducted using different solvents, including kerosene, mineral water and saline water, and the absorption capacity is evaluated. Analysis is carried out on different combinations of coir as fiber and TiO₂ as filler material, and the composite material best suited in terms of strength and environmental resistance is identified. The most significant combination of the composite material has the following composition: 2% TiO₂ powder, 15% coir fibre and 83% epoxy, under the mechanical and environmental conditions considered in this work.

Keywords: composite materials, moisture test, filler material, natural fibre composites

Procedia PDF Downloads 191
3957 Solution of Some Boundary Value Problems of the Generalized Theory of Thermo-Piezoelectricity

Authors: Manana Chumburidze

Abstract:

We have considered a non-classical model of dynamical problems for a conjugated system of differential equations arising in thermo-piezoelectricity, as formulated by Toupin and Mindlin. The basic concepts and the general theory of solvability for isotropic homogeneous elastic media are considered. The problems are treated using the Laplace integral transform, the potential method, and singular integral equations. Approximate solutions of mixed boundary value problems for a finite domain, bounded by a closed surface, are constructed. They are solved explicitly by using the generalized Fourier series method.

Keywords: thermo-piezoelectricity, boundary value problems, Fourier's series, isotropic homogeneous elastic media

Procedia PDF Downloads 452
3956 Mirrors and Lenses: Multiple Views on Recognition in Holocaust Literature

Authors: Kirsten A. Bartels

Abstract:

There are a number of similarities between survivor literature and Holocaust fiction for children and young adults. This paper explores three facets of the parallels of recognition found between Livia Bitton-Jackson's memoir of her experience during the Holocaust as an inmate in Auschwitz, I Have Lived a Thousand Years (1999), and Morris Gleitzman's series of Holocaust fiction. While Bitton-Jackson reflects on her past and Gleitzman designs a fictive character, both are judicious with what they are willing to impart, only providing information about their appearance or themselves when it impacts others or serves a necessary purpose in the story. A second similarity lies in a critical aspect of many works of Holocaust literature: the idea of being 'representatively Jewish'. The authors come to this idea from different angles, perhaps best explained as the difference between showing and telling, for Bitton-Jackson provides personal details, while Gleitzman arguably constructed Felix with this idea in mind. Interwoven through their journeys is a shift in perspectives on being recognized, from wanting to be seen as individuals to being seen as Jews. With this, being Jewish takes on a different meaning; both youths struggle with being labeled as something they do not truly understand, and may not have truly identified with, as the label becomes a death warrant. With survivor literature viewed as the most credible and worthwhile type of Holocaust literature, and Holocaust fiction often seen as the least (with children's and young-adult fiction ranked lowest), the similarities in approaches to telling these stories may be overlooked or undervalued. This paper serves as an exploration of some of the parallel messages shared between the two.

Keywords: Holocaust fiction, Holocaust literature, representatively Jewish, survivor literature

Procedia PDF Downloads 148
3955 A Quasi Z-Source Based Full Bridge Isolated DC-DC Converter as a Power Module for PV System Connected to HVDC Grid

Authors: Xinke Huang, Huan Wang, Lidong Guo, Changbin Ju, Runbiao Liu, Guoen Cao, Yibo Wang, Honghua Xu

Abstract:

Grid-connected photovoltaic (PV) power systems are developing in the direction of large scale and clustering. Large-scale PV generation systems connected to an HVDC grid have many advantages compared to their AC-grid counterparts, and DC connection is the tendency. The DC/DC converter, as the most important device in such a system, has recently become a research hot spot. This paper proposes a Quasi Z-Source (QZS) based Boost Full Bridge Isolated DC/DC Converter (BFBIC) topology as a basic power module, combined through the input-parallel output-series (IPOS) method to improve power capacity and raise the output voltage to match the HVDC grid. The topology has the advantages of both traditional voltage-source and current-source converters: it permits the H-bridge to be shorted through or open-circuited. It adopts duty-cycle control and achieves input current and output voltage balancing through an input-current-sharing control strategy. A ±10 kV/200 kW system model is built in MATLAB/SIMULINK to verify the proposed topology and control strategy.
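The voltage relationships involved can be sketched back-of-the-envelope style (assuming ideal, lossless operation; the module voltage and module count below are hypothetical, not taken from the paper): the quasi Z-source network boosts the DC-link voltage through the permitted shoot-through state, and IPOS stacking sums the module outputs while sharing the input current.

```python
def qzs_boost_factor(d_shoot):
    """Ideal quasi-Z-source boost factor B = 1 / (1 - 2*D_sh).

    Valid for shoot-through duty ratio D_sh < 0.5; the shoot-through
    state (the permitted H-bridge short) is what boosts the DC link.
    """
    assert 0 <= d_shoot < 0.5
    return 1.0 / (1.0 - 2.0 * d_shoot)

def ipos_stack(module_vout, n_modules):
    """Input-parallel output-series: module outputs add in series,
    while the input current is shared among the parallel inputs."""
    return module_vout * n_modules

# Hypothetical numbers: 1 kV module outputs stacked to reach a 20 kV
# pole-to-pole (i.e. +/-10 kV) HVDC link.
print(qzs_boost_factor(0.25))       # 2.0: DC link doubled
print(ipos_stack(1000.0, 20))       # 20000.0 V total output
```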

Keywords: PV generation system, cascaded DC/DC converter, HVDC, quasi Z-source converter

Procedia PDF Downloads 380
3954 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov's scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format in pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel.
In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is generated specifically for that node, and sends it to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and messages do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is best suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, to calculate the solution at every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient performance of the code in parallel computing.
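The portable integer "pre-file" idea can be sketched roughly as follows (an illustrative stand-in, not the actual DSEM file format): fixed-endianness binary packing keeps the files portable between machines, and writing one file per compute node matches the GPFS strategy described above.

```python
import os
import struct
import tempfile

def write_prefile(path, ints):
    """Pack integer startup data as a little-endian binary stream.

    A rough sketch of a 'pre-file': a 64-bit count header followed by
    32-bit integers, so the layout is identical on every machine."""
    with open(path, "wb") as f:
        f.write(struct.pack("<q", len(ints)))          # count header
        f.write(struct.pack(f"<{len(ints)}i", *ints))  # integer payload

def read_prefile(path):
    """Read back the stream written by write_prefile."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<q", f.read(8))
        return list(struct.unpack(f"<{n}i", f.read(4 * n)))

# One file per compute node, as in the GPFS strategy described above;
# the connectivity integers here are hypothetical.
conn = [4, 7, 1, 9, 3]
p = os.path.join(tempfile.mkdtemp(), "node_000.pre")
write_prefile(p, conn)
assert read_prefile(p) == conn
```

In the real code, one rank per node would read such a file and scatter its contents with non-blocking point-to-point sends, keeping communication local to the node.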

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 124
3953 Leadership and Whether It Stems from Innate Abilities or from Situation

Authors: Salwa Abdelbaki

Abstract:

This research investigated how leaders develop, asking whether people become leaders due to their innate abilities or gain leadership characteristics through interactions, based on the requirements of a situation. If the first is true, then a leader should be successful in any situation; otherwise, a leader may succeed only in a specific situation. A series of experiments was carried out on three groups including males and females. First, a group of 148 students with different specializations had to select a leader. Another group of 51 students had to recall their previous experiences and their knowledge of each other to identify who were leaders in different situations. A series of analytic tools was then applied to the identified leaders and to the whole groups to find out how the leaders had developed. A group of 40 young children was also studied to find young leaders among them and to analyze their characteristics.

Keywords: leadership, innate characteristics, situation, leadership theories

Procedia PDF Downloads 273
3952 Epistemic Uncertainty Analysis of Queue with Vacations

Authors: Baya Takhedmit, Karim Abbas, Sofiane Ouazine

Abstract:

Vacation queues are often employed to model many real situations, such as computer systems, communication networks, manufacturing and production systems, and transportation systems. These queueing models are usually solved at fixed parameter values. However, the parameter values themselves are determined from a finite number of observations and hence have uncertainty associated with them (epistemic uncertainty). In this paper, we consider the M/G/1/N queue with server vacations and exhaustive discipline, where we assume that the vacation parameter values are uncertain. We use the Taylor series expansion approach to estimate the expectation and variance of the model output due to epistemic uncertainties in the model input parameters.
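The Taylor series (delta-method) propagation described above can be sketched as follows; the M/M/1 mean queue length is used here only as a simple stand-in performance measure, not the paper's actual M/G/1/N vacation model, and the load and uncertainty values are hypothetical.

```python
def delta_method(f, mu, sigma2, h=1e-5):
    """Second-order Taylor (delta-method) moments of f(theta) when
    theta has mean mu and variance sigma2:
        E[f]   ~ f(mu) + f''(mu) * sigma2 / 2
        Var[f] ~ f'(mu)**2 * sigma2
    Derivatives are taken numerically by central differences."""
    d1 = (f(mu + h) - f(mu - h)) / (2 * h)
    d2 = (f(mu + h) - 2 * f(mu) + f(mu - h)) / h**2
    return f(mu) + 0.5 * d2 * sigma2, d1**2 * sigma2

# Illustrative stand-in performance measure: M/M/1 mean queue length
# L(rho) = rho / (1 - rho), with epistemic uncertainty on the load rho.
L = lambda rho: rho / (1 - rho)
mean, var = delta_method(L, mu=0.8, sigma2=0.01**2)
print(mean, var)  # mean sits slightly above L(0.8) = 4 due to convexity
```

The second-order mean correction is what makes the approach informative: for convex performance measures, ignoring parameter uncertainty systematically underestimates the expected output.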

Keywords: epistemic uncertainty, M/G/1/N queue with vacations, non-parametric sensitivity analysis, Taylor series expansion

Procedia PDF Downloads 421
3951 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in our environment frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of a GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 132
3950 Geochemical Characteristics of Aromatic Hydrocarbons in the Crude Oils from the Chepaizi Area, Junggar Basin, China

Authors: Luofu Liu, Fei Xiao Jr., Fei Xiao

Abstract:

Using gas chromatography-mass spectrometry (GC-MS), the composition and distribution characteristics of aromatic hydrocarbons in the Chepaizi area of the Junggar Basin were analyzed in detail. On that basis, the biological input and maturity of the crude oils and the sedimentary environment of the corresponding source rocks were determined, and the origin types of the crude oils were classified. The results show that there are three types of crude oils in the study area: Type I, Type II and Type III. The crude oils from the 1st member of the Neogene Shawan Formation are the Type I oils; the crude oils from the 2nd member of the Neogene Shawan Formation are the Type II oils; the crude oils from the Cretaceous Qingshuihe and Jurassic Badaowan Formations are the Type III oils. The Type I oils show a single mode at late retention times in the chromatogram of total aromatic hydrocarbons. Their content of the triaromatic steroid series is high, and their content of dibenzofuran is low. Maturity parameters related to alkyl naphthalene, methylphenanthrene and alkyl dibenzothiophene all indicate low maturity for the Type I oils. The Type II oils also show a single mode, but at early retention times in the chromatogram of total aromatic hydrocarbons. Their content of the naphthalene and phenanthrene series is high, and their content of dibenzofuran is medium. Their content of polycyclic aromatic hydrocarbons representing terrestrial organic matter is high. The aromatic maturity parameters indicate high maturity for the Type II oils. The Type III oils show a bimodal distribution in the chromatogram of total aromatic hydrocarbons. Their contents of the naphthalene, phenanthrene and dibenzofuran series are high. The aromatic maturity parameters indicate medium maturity for the Type III oils.
The correlation of triaromatic steroid series fingerprints shows that the Type I and Type III oils have a similar source and are both from the Permian Wuerhe source rocks. Because of strong biodegradation and mixing from another source, the Type I oils differ greatly from the Type III oils in their aromatic hydrocarbon characteristics. The Type II oils have the typical characteristics of terrestrial organic matter input under an oxidative environment and are coal-derived oils mainly generated by the mature Jurassic coal-measure source rocks. However, an overprinting effect from the low-maturity Cretaceous source rocks changed the original distribution characteristics of the aromatic hydrocarbons to some degree.

Keywords: oil source, geochemistry, aromatic hydrocarbons, crude oils, Chepaizi area, Junggar Basin

Procedia PDF Downloads 346
3949 Flood Predicting in Karkheh River Basin Using Stochastic ARIMA Model

Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh

Abstract:

Floods have huge environmental and economic impacts; therefore, flood prediction receives a great deal of attention. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using an ARIMA model. For this purpose, we used the Box-Jenkins approach, which consists of a four-stage method: model identification, parameter estimation, diagnostic checking and forecasting (prediction). The main tools used in the ARIMA modelling were the SAS and SPSS software packages. Model identification was done by visual inspection of the ACF and PACF. The SAS software computed the model parameters using the ML, CLS and ULS methods. Diagnostic checking tests, the AIC criterion, and the RACF and RPACF graphs were used for verification of the selected model. In this study, the best ARIMA model for the Annual Maximum Discharge (AMD) time series was ARIMA(4,1,1), with an AIC value of 88.87. The RACF and RPACF showed independence of the residuals. Forecasting AMD for 10 future years demonstrated the ability of the model to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R²).
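The parameter-estimation and AIC-selection steps of the Box-Jenkins cycle can be sketched in miniature (a conditional least-squares AR fit as a simplified stand-in for full ARIMA estimation; the simulated AR(2) series and its coefficients are hypothetical, not the Karkheh data):

```python
import numpy as np

def fit_ar(x, p):
    """Conditional least-squares fit of an AR(p) model, a simplified
    stand-in for the CLS step of Box-Jenkins ARIMA estimation."""
    # Column k holds the lag-(k+1) values aligned with the target x[p:].
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(x) - p), X])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = len(y)
    aic = n * np.log(resid.var()) + 2 * (p + 1)  # Gaussian AIC, up to a constant
    return beta, aic

rng = np.random.default_rng(1)
# Simulate an AR(2) series and let AIC choose among candidate orders,
# mirroring the paper's use of AIC to select among ARIMA models.
x = np.zeros(600)
for t in range(2, 600):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
aics = {p: fit_ar(x, p)[1] for p in (1, 2, 3, 4)}
best = min(aics, key=aics.get)
print(best)  # AIC is typically minimized near the true order p = 2
```

The same loop generalizes to the full identification/estimation/checking cycle: each candidate order is fitted, its residuals are checked for independence, and the AIC-minimizing model is kept for forecasting.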

Keywords: time series modelling, stochastic processes, ARIMA model, Karkheh river

Procedia PDF Downloads 280
3948 Chaotic Analysis of Acid Rains with Time Series of pH Degree, Nitrate and Sulphate Concentration on Wet Samples

Authors: Aysegul Sener, Gonca Tuncel Memis, Mirac Kamislioglu

Abstract:

Chaos theory has been one of the new paradigms of science since the last century. After Edward Lorenz identified chaos in weather systems, the popularity of the theory increased. Chaos is observed in many natural systems, and studies continue to detect chaos in other natural systems. Acid rain is one of the environmental problems that have negative effects on the environment, and acid rain values are monitored continuously. In this study, we aim to analyze the chaotic behavior of acid rains in Turkey with chaos-detection approaches. The pH values of rain water and the sulfate and nitrate concentrations of wet rain water samples at the rain-collecting stations located in different regions of Turkey were provided by the Turkish State Meteorology Service. Lyapunov exponents, reconstruction of the phase space, and power spectra are used in this study to determine and predict the chaotic behavior of acid rains. As a result of the analysis, it is found that the acid rain time series have positive Lyapunov exponents and broad power spectra, so chaotic behavior is observed in the acid rain time series.
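The positive-Lyapunov-exponent criterion used in the study can be illustrated on a textbook system (the logistic map, not the acid-rain data): a positive average log-stretching rate signals chaos, a negative one signals a stable regime.

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100_000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|.
    A positive exponent is the signature of chaos, the same criterion
    the study applies to the acid-rain series."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return s / n

print(lyapunov_logistic(4.0))   # positive, close to ln 2: chaotic regime
print(lyapunov_logistic(2.5))   # negative: stable fixed point
```

For measured series such as pH or sulfate concentrations, the exponent is instead estimated after phase-space reconstruction (delay embedding), but the interpretation of its sign is the same.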

Keywords: acid rains, chaos, chaotic analysis, Lyapunov exponents

Procedia PDF Downloads 134
3947 Multi-Scale Modelling of Thermal Wrinkling of Thin Membranes

Authors: Salim Belouettar, Kodjo Attipou

Abstract:

The thermal wrinkling behavior of thin membranes is investigated. Fourier double-scale series are used to deduce the macroscopic membrane wrinkling equations. The obtained equations account for both global and local wrinkling modes. Numerical examples are conducted to assess the validity of the approach developed. Compared to the full finite element model, the present model needs only a few degrees of freedom to recover accurately the bifurcation curves and wrinkling paths. Different parameters, such as the membrane's aspect ratio, the wave number, and pre-stress, are discussed from a numerical point of view, and the properties of the wrinkles (critical load, wavelength, size and location) are presented.

Keywords: wrinkling, thermal stresses, Fourier series, thin membranes

Procedia PDF Downloads 377
3946 Primary Analysis of a Randomized Controlled Trial of Topical Analgesia Post Haemorrhoidectomy

Authors: James Jin, Weisi Xia, Runzhe Gao, Alain Vandal, Darren Svirkis, Andrew Hill

Abstract:

Background: Post-haemorrhoidectomy pain is a major concern for patients and clinicians, and minimizing post-operative pain is of high clinical interest. Combinations of topical creams targeting three hypothesised post-haemorrhoidectomy pain mechanisms were developed, and their effectiveness was evaluated. Specifically, a multi-centred, double-blinded randomized clinical trial (RCT) was conducted in adults undergoing excisional haemorrhoidectomy, and the primary analysis was performed on the collected data to evaluate the effectiveness of the topical cream combinations. Methods: 192 patients were randomly allocated to 4 arms (48 patients per arm), with each arm receiving a cream of 10% metronidazole (M), M plus 2% diltiazem (MD), M plus 4% lidocaine (ML), or MDL, respectively. Patients were instructed to apply the topical treatments three times a day for 7 days and to record outcomes for 14 days after the operation. The primary outcome was VAS pain on day 4. Covariates and models were selected in the blind review stage. Multiple imputation was applied for missing data. Linear and generalized linear mixed-effects models (LMER, GLMER) together with natural splines were applied. Sandwich estimators and Wald statistics were used. P-values < 0.05 were considered significant. Conclusions: The addition of topical lidocaine or diltiazem to metronidazole did not add any benefit. However, ML had significantly better pain and recovery scores than the combination MDL. Multimodal topical analgesia with ML after haemorrhoidectomy could be considered for further evaluation. Further trials considering only 3 arms (M, ML, MD) may be worth exploring.

Keywords: RCT, primary analysis, multiple imputation, pain scores, haemorrhoidectomy, analgesia, LMER

Procedia PDF Downloads 99
3945 Thermal Fatigue Behavior of 400 Series Ferritic Stainless Steels

Authors: Seok Hong Min, Tae Kwon Ha

Abstract:

In this study, the thermal fatigue properties of 400 series ferritic stainless steels have been evaluated in the temperature ranges of 200–800 °C and 200–900 °C. Systematic methods were established for controlling temperatures within the predetermined range and for measuring the load applied to specimens as a function of temperature during thermal cycles. Thermal fatigue tests were conducted under a fully constrained condition, in which both ends of the specimens were completely fixed. It has been revealed that the load relaxation behavior at the temperatures of the thermal cycle is closely related to the thermal fatigue properties. The thermal fatigue resistance of 430J1L stainless steel is found to be superior to that of the other steels.

Keywords: ferritic stainless steel, automotive exhaust, thermal fatigue, microstructure, load relaxation

Procedia PDF Downloads 329
3944 Filtration Efficacy of Reusable Full-Face Snorkel Masks for Personal Protective Equipment

Authors: Adrian Kong, William Chang, Rolando Valdes, Alec Rodriguez, Roberto Miki

Abstract:

The Pneumask consists of a custom snorkel-specific adapter that attaches the snorkel port of a full-face snorkel mask to a 3D-printed filter. This mask was designed for use as personal protective equipment (PPE) during the COVID-19 pandemic, when there was a widespread shortage of PPE for medical personnel. Various clinical validation tests have been conducted, including the sealing capability of the mask, filter performance, CO₂ buildup, and clinical usability. However, the filtration efficiencies of the Pneumask with multiple filter types had not been determined. Using an experimental system, we evaluated the filtration efficiency across various masks and filters during inhalation. Eighteen combinations of respirator models (5 P100 FFRs, 4 Dolfino masks) and filters (2091, 7093, 7093CN, BB50T) were evaluated for their exposure to airborne particles sized 0.3–10.0 microns using an electronic airborne particle counter. All respirator model combinations provided similar performance levels for 1.0-, 3.0-, 5.0- and 10.0-micron particles, with the greatest differences in the 0.3- and 0.5-micron range. All models provided the expected performance against all particle sizes, with Class P100 respirators providing the highest performance levels across all particle size ranges. In conclusion, the modified snorkel mask has the potential to protect providers who care for patients with COVID-19 from increased airborne particle exposure.

Keywords: COVID-19, PPE, mask, filtration, efficiency

Procedia PDF Downloads 154
3943 Parametric Models of Facade Designs of High-Rise Residential Buildings

Authors: Yuchen Sharon Sung, Yingjui Tseng

Abstract:

High-rise residential buildings have become the most mainstream housing pattern in the world's metropolises under the current trend of urbanization. The facades of high-rise buildings are essential elements of the urban landscape, and their skins are important media between the interior and exterior of high-rise buildings. The skin not only connects users and environments but also plays an important functional and aesthetic role. This research studies the skins of high-rise residential buildings using the methodology of shape grammar, to find the rules that determine the combinations of facade patterns, and analyzes the patterns' parameters using the software Grasshopper. We chose a number of facades of high-rise residential buildings as sources from which to discover the underlying rules and concepts of the generation of facade skins. This research also provides the rules that influence the composition of facade skins. The items of the facade skins, such as windows, balconies, walls, sun visors and metal grilles, are treated as elements in the system of facade skins. The compositions of these elements are categorized and described by logical rules, and the types of high-rise building facade skins are modelled in Grasshopper. A variety of analyzed patterns can then be applied to other facade skins through this parametric mechanism. Using the patterns established in the models, researchers can analyze each single item in more detailed tests, and architects can apply each of these items to construct facades for other buildings through various combinations and permutations. The goal of these models is to develop a mechanism that generates prototypes in order to facilitate the generation of various facade skins.
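A toy version of the rule-based generation described above might look as follows (the element set matches the abstract, but the composition rules here are invented for illustration and are not the paper's actual shape grammar): elements are combined per facade bay, and only the combinations that satisfy the rules survive.

```python
from itertools import product

# Facade-skin elements from the abstract; the rules below are made up
# purely to illustrate rule-filtered generation, Grasshopper-style.
ELEMENTS = ["window", "balcony", "wall", "sun_visor", "metal_grille"]

def valid(row):
    """Hypothetical composition rules: every row of bays must contain at
    least one window, and no two adjacent bays may both be balconies."""
    if "window" not in row:
        return False
    return all(not (a == b == "balcony") for a, b in zip(row, row[1:]))

def generate_rows(n_bays):
    """Enumerate every rule-conforming row of n_bays elements."""
    return [row for row in product(ELEMENTS, repeat=n_bays) if valid(row)]

rows = generate_rows(3)
print(len(rows))  # the number of 3-bay patterns the rules admit
```

In a parametric tool, each surviving tuple would drive geometry (window frames, balcony slabs, grille panels) rather than just a list, but the generate-then-filter structure is the same.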

Keywords: facade skin, grasshopper, high-rise residential building, shape grammar

Procedia PDF Downloads 498
3942 Development of Time Series Forecasting Model for Dengue Cases in Nakhon Si Thammarat, Southern Thailand

Authors: Manit Pollar

Abstract:

Identifying dengue epidemic periods early would be helpful for taking the necessary actions to prevent dengue outbreaks. An accurate prediction of dengue epidemic seasons gives local authorities sufficient time to make the necessary decisions and act to safeguard the situation. This study aimed to develop a forecasting model for the number of dengue incidences in Nakhon Si Thammarat Province, Southern Thailand, using time series analysis. We developed Seasonal Autoregressive Integrated Moving Average (SARIMA) models on monthly data collected between 2003 and 2011 and validated the models using data collected between January and September 2012. The results revealed that the SARIMA(1,1,0)(1,2,1)12 model closely described the trends and seasons of dengue incidence and captured the observed dengue fever cases in Nakhon Si Thammarat between 2003 and 2011. The study showed that the one-step approach for predicting dengue incidences provided significantly more accurate predictions than the twelve-step approach. The model, even if based purely on statistical data analysis, can provide a useful basis for the allocation of resources for disease prevention.
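The differencing implied by the SARIMA(1,1,0)(1,2,1)12 notation — one regular difference and two seasonal differences at period 12 — can be illustrated in a few lines. A full fit would normally be delegated to a statistics package, so this is only a sketch of the transformation step, run on a toy series:

```python
def difference(series, lag=1):
    """Apply one difference at the given lag: y[t] - y[t-lag]."""
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

def sarima_transform(series, d=1, D=2, s=12):
    """Regular and seasonal differencing for SARIMA(p,1,q)(P,2,Q)12.

    After this transform the series is (ideally) stationary, and the
    AR/MA terms of the model are fitted to the residual structure.
    """
    for _ in range(d):
        series = difference(series, lag=1)
    for _ in range(D):
        series = difference(series, lag=s)
    return series

# Toy data: 4 years of monthly counts with a perfectly repeating season.
monthly = [10, 12, 20, 35, 60, 80, 90, 70, 50, 30, 15, 11] * 4
stationary = sarima_transform(monthly)
```

On the toy series the repeating seasonal pattern is removed entirely; on real dengue counts the residual series is what the autoregressive and moving-average terms describe.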

Keywords: SARIMA, time series model, dengue cases, Thailand

Procedia PDF Downloads 344
3941 Association Between Short-term NOx Exposure and Asthma Exacerbations in East London: A Time Series Regression Model

Authors: Hajar Hajmohammadi, Paul Pfeffer, Anna De Simoni, Jim Cole, Chris Griffiths, Sally Hull, Benjamin Heydecker

Abstract:

Background: There is strong interest in the relationship between short-term air pollution exposure and human health. Most studies in this field focus on serious health effects such as death or hospital admission, but air pollution exposure affects many people with less severe impacts, such as exacerbations of respiratory conditions. A lack of quantitative analysis and inconsistent findings suggest improved methodology is needed to understand these effects more fully. Method: We developed a time series regression model to quantify the relationship between daily NOₓ concentration and asthma exacerbations requiring oral steroids in primary care settings. Explanatory variables include daily NOₓ concentration measurements extracted from 8 available background and roadside monitoring stations in east London, and daily ambient temperature from London City Airport, located in east London. Lags of NOₓ concentration up to 21 days (3 weeks) were used in the model. The dependent variable was the daily number of oral steroid courses prescribed for GP-registered patients with asthma in east London. A mixed distribution model was then fitted to the significant lags of the regression model. Result: Results of the time series modelling showed a significant relationship between NOₓ concentrations on each day and the number of oral steroid courses prescribed in the following three weeks. In addition, the model using only roadside stations performs better than the model with a mixture of roadside and background stations.
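The lag structure of such a model can be screened with a simple cross-correlation between the exposure series and the outcome series at each candidate lag. This sketch is not the paper's mixed-distribution model, only an illustration of how lags 0-21 might be examined, with synthetic data in place of the NOₓ and prescribing series:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def lag_correlations(exposure, outcome, max_lag=21):
    """Correlate exposure on day t with the outcome on day t+lag."""
    corr = {}
    for lag in range(max_lag + 1):
        x = exposure[:len(exposure) - lag] if lag else exposure
        corr[lag] = pearson(x, outcome[lag:])
    return corr

# Synthetic demo: the outcome simply repeats the exposure 3 days later,
# so the correlation should peak at lag 3.
nox = [1, 4, 2, 8, 5, 7, 3, 6, 9, 2, 5, 1, 7, 4, 8, 3, 2, 6, 9, 5, 4, 7, 1, 3, 8]
cases = [0, 0, 0] + nox[:-3]
corr = lag_correlations(nox, cases, max_lag=5)
```

In the study itself the significant lags feed a regression rather than a raw correlation, but the screening idea is the same: only lags with real association should carry weight.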

Keywords: air pollution, time series modeling, public health, road transport

Procedia PDF Downloads 131
3940 Short Life Cycle Time Series Forecasting

Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar

Abstract:

The life cycle of products is becoming shorter and shorter due to increased competition in the market, shorter product development times and increased product diversity. Short life cycles are normal in the retail industry, the style business, entertainment media, and the telecom and semiconductor industries. Accurate demand forecasting for short-life-cycle products is of special interest to many researchers and organizations. Due to the short life cycle of these products, the amount of historical data available for forecasting is minimal, or even absent when new or modified products are launched. The companies dealing with such products want to increase the accuracy of demand forecasting so that they can utilize the full potential of the market while avoiding oversupply. The challenge is therefore to develop a forecasting model that forecasts accurately while handling large variations in data and considering the complex relationships between various parameters of the data. Many statistical models have been proposed in the literature for forecasting time series data. Traditional time series forecasting models do not work well for short life cycles due to the lack of historical data, and artificial neural network (ANN) models are time-consuming to train for forecasting. We have studied the existing models used for forecasting and their limitations. This work proposes an effective and powerful approach for short-life-cycle time series forecasting. The proposed approach takes into consideration different scenarios related to data availability for short-life-cycle products. We then suggest a methodology that combines statistical analysis with structured judgement; the defined approach can be applied across domains. We then describe a method of creating a profile from analogous products, which can be used for forecasting new products with the historical data of those analogous products.
We have designed an application which combines data, analytics and domain knowledge using point-and-click technology. The forecasting results generated are compared using the MAPE, MSE and RMSE error scores. Conclusion: based on the results, it is observed that no one approach is sufficient for short-life-cycle forecasting, and two or more approaches need to be combined to achieve the desired accuracy.
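The three error scores used to compare the forecasts are standard and easy to state precisely. A minimal sketch with illustrative numbers:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent (actuals must be nonzero)."""
    n = len(actual)
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n

def mse(actual, forecast):
    """Mean squared error."""
    n = len(actual)
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n

def rmse(actual, forecast):
    """Root mean squared error, in the units of the data."""
    return mse(actual, forecast) ** 0.5

# Illustrative demand series for a short-life-cycle product.
actual = [100, 120, 80, 90]
forecast = [110, 115, 70, 95]
```

MAPE is scale-free and easy to communicate, while RMSE penalizes large misses more heavily; reporting all three guards against a model that looks good on only one of them.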

Keywords: forecast, short life cycle product, structured judgement, time series

Procedia PDF Downloads 343
3939 Influence of Water Reservoir Parameters on the Climate and Coastal Areas

Authors: Lia Matchavariani

Abstract:

Water reservoir construction on rivers flowing into the sea complicates coast protection: the seashore starts to degrade, causing coastal erosion and disaster against the backdrop of current climate change. A water reservoir affects the climate and coastal areas through its contact surface with the atmosphere and through the area irrigated with its water or humidified by infiltrated waters. The Black Sea coastline is characterized by the highest ecological vulnerability. The type and intensity of a water reservoir’s impact are determined by its morphometry, type of regulation, level regime, and the geomorphological and geological characteristics of the adjoining area. Studies showed that the impact of a water reservoir on the climate, in terms of its comfort parameters, is positive if it is located in a zone of insufficient humidity and, conversely, negative if the reservoir is located in a zone of abundant humidity. There are many natural and anthropogenic factors determining the peculiarities of a water reservoir’s impact on the climate, which can be assessed with maximum accuracy by the so-called “long series” method, which operates on long series of meteorological elements (temperature, wind, precipitation, etc.) formed from stationary observation data. Such a series consists of two periods, each of statistically sufficient duration: the first covers the observations made before the formation of the water reservoir, and the second covers the observations made during its operation. If no such data are available, or the series is statistically too short, an “analog” method is used. The analog water reservoir is selected based on the similarity of environmental conditions: it must be located within the zone of the designed water reservoir, under similar environmental conditions, and must have a sufficient number of observations from its coastal zone.

Keywords: coast-constituent sediment, eustasy, meteorological parameters, seashore degradation, water reservoirs impact

Procedia PDF Downloads 37
3938 A Comparative Study of Motion Events Encoding in English and Italian

Authors: Alfonsina Buoniconto

Abstract:

The aim of this study is to investigate the degree of cross-linguistic and intra-linguistic variation in the encoding of motion events (MEs) in English and Italian, these being typologically different languages both showing signs of disobedience to their respective types. As a matter of fact, the traditional typological classification of MEs encoding distributes languages into two macro-types, based on the preferred locus for the expression of Path, the main ME component (other components being Figure, Ground and Manner) characterized by conceptual and structural prominence. According to this model, Satellite-framed (SF) languages typically express Path information in verb-dependent items called satellites (e.g. preverbs and verb particles) with main verbs encoding Manner of motion; whereas Verb-framed languages (VF) tend to include Path information within the verbal locus, leaving Manner to adjuncts. Although this dichotomy is valid altogether, languages do not always behave according to their typical classification patterns. English, for example, is usually ascribed to the SF type due to the rich inventory of postverbal particles and phrasal verbs used to express spatial relations (i.e. the cat climbed down the tree); nevertheless, it is not uncommon to find constructions such as the fog descended slowly, which is typical of the VF type. Conversely, Italian is usually described as being VF (cf. Paolo uscì di corsa ‘Paolo went out running’), yet SF constructions like corse via in lacrime ‘She ran away in tears’ are also frequent. This paper will try to demonstrate that such a typological overlapping is due to the fact that the semantic units making up MEs are distributed within several loci of the sentence –not only verbs and satellites– thus determining a number of different constructions stemming from convergent factors. 
Indeed, the linguistic expression of motion events depends not only on the typological nature of languages in the traditional sense, but also on a series of morphological, lexical, and syntactic resources, as well as on inferential, discursive, usage-related, and cultural factors that make semantic information more or less accessible, frequent, and easy to process. Hence, rather than describe English and Italian in dichotomic terms, this study focuses on the investigation of cross-linguistic and intra-linguistic variation in the use of all the strategies made available by each linguistic system to express motion. Evidence for these assumptions is provided by parallel corpora analysis. The sample texts are taken from two contemporary Italian novels and their respective English translations. The 400 motion occurrences selected (200 in English and 200 in Italian) were scanned according to the MODEG (an acronym for Motion Decoding Grid) methodology, which grants data comparability through the indexation and retrieval of combined morphosyntactic and semantic information at different levels of detail.

Keywords: construction typology, motion event encoding, parallel corpora, satellite-framed vs. verb-framed type

Procedia PDF Downloads 250
3937 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method

Authors: Rui Wu

Abstract:

In the volatile modern manufacturing environment, new orders occur randomly at any time, making pre-emptive (offline) scheduling methods infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, each containing parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve the static Flexible Job Shop problem. However, the time efficiency of these methods is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of jobs, machines, work centres, and the flexible job shop are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select proper composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The simulation results show that the proposed framework achieves reasonable performance and time efficiency.
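The policy in the paper is learned by a double-layer deep Q-network, but the object it selects — a composite dispatching rule — is simple to illustrate: each waiting operation is scored by a weighted mix of classic rules (here SPT, shortest processing time, and a slack-based EDD term), and the weights are what the learned policy would choose per work centre. The job names and numbers below are hypothetical:

```python
def composite_score(job, now, w_spt=0.5, w_edd=0.5):
    """Lower is better: weighted mix of processing time (SPT) and slack (EDD).

    In the learned system, the weights would be chosen by the policy for
    each work centre at each decision point; here they are fixed inputs.
    """
    slack = job["due"] - now - job["proc"]
    return w_spt * job["proc"] + w_edd * slack

def select_job(buffer, now, w_spt=0.5, w_edd=0.5):
    """Pick the waiting operation with the best (lowest) composite score."""
    return min(buffer, key=lambda j: composite_score(j, now, w_spt, w_edd))

# Hypothetical waiting buffer at one work centre.
buffer = [
    {"id": "J1", "proc": 5, "due": 30},
    {"id": "J2", "proc": 2, "due": 12},
    {"id": "J3", "proc": 8, "due": 15},
]
```

Shifting the weights changes which job wins — a pure-SPT weighting and a pure-slack weighting pick different jobs from this buffer — which is exactly the degree of freedom the reinforcement learner exploits at each decision point.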

Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning

Procedia PDF Downloads 93
3936 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, like the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation errors follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should allow improving the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
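The worst case d = ta is easy to reproduce: the true Hd is the maximum over a window of duration d starting at any instant, while aggregated data only expose sums over fixed consecutive blocks, so a burst straddling a block boundary gets split. A toy example with hypothetical hourly depths:

```python
def sliding_max(rain, d):
    """True maximum depth for duration d: window may start at any instant."""
    return max(sum(rain[i:i + d]) for i in range(len(rain) - d + 1))

def aggregated_max(rain, ta):
    """Maximum depth seen when data come pre-summed over fixed ta blocks."""
    return max(sum(rain[i:i + ta]) for i in range(0, len(rain) - ta + 1, ta))

# Hypothetical hourly depths (mm) with a burst straddling a block boundary.
rain = [0, 0, 0, 0, 0, 12, 18, 0, 0, 0, 0, 0]
d = ta = 2  # hours
true_hd = sliding_max(rain, d)        # the burst seen as one window
coarse_hd = aggregated_max(rain, ta)  # the fixed blocks split the burst
underestimate = 100.0 * (true_hd - coarse_hd) / true_hd
```

Here the fixed blocks split the two-hour burst and the maximum depth is underestimated by 40%, inside the up-to-50% bound cited for d = ta.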

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 180
3935 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, until now the use of fECG in clinical practice has been quite limited due to the difficulty of measuring it. The recovery of fECG from signals acquired non-invasively, using electrodes placed on the maternal abdomen, is a challenging task because abdominal signals are a mixture of several components, of which the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals which maximize this fQI (quality index optimization, QIO). It aims to improve on the most commonly adopted methods for fECG extraction, which are usually based on estimating and cancelling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; mECG extraction and maternal QRS detection; mECG component approximation and cancellation by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG from abdominal signals, estimated by principal component analysis (PCA), and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal recordings with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013.
The QIO-based and ICA-based methods were compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so that the comparison between the two methods can only be qualitative. On the annotated database ADdb, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, overcoming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series; this quality index was higher for the QIO-based method than for the ICA-based one in 35 of the 55 NIdb records. The QIO-based method gave very high performances with both databases. These results support applying the algorithm in a fully unsupervised way in wearable devices for self-monitoring of fetal health.
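The core idea — score candidate signals by a pseudo-periodicity quality index and search for the channel combination that maximizes it — can be sketched with a toy QI based on the autocorrelation at the expected beat lag. This is a deliberately simplified stand-in for the paper's fQI and weighted-PCA pipeline, and the two channels are synthetic:

```python
import math

def autocorr(x, lag):
    """Normalized autocorrelation of x at a given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def quality_index(x, period):
    """Toy pseudo-periodicity score: autocorrelation at the expected beat lag."""
    return autocorr(x, period)

def best_combination(ch1, ch2, period, steps=50):
    """Grid-search w in [0, 1] maximizing the QI of w*ch1 + (1-w)*ch2."""
    best_w, best_q = 0.0, -2.0
    for k in range(steps + 1):
        w = k / steps
        mix = [w * a + (1 - w) * b for a, b in zip(ch1, ch2)]
        q = quality_index(mix, period)
        if q > best_q:
            best_w, best_q = w, q
    return best_w, best_q

# Synthetic channels: ch1 carries a clean period-10 rhythm (the "fetal"
# component), ch2 is deterministic aperiodic interference.
n = 200
ch1 = [math.sin(2 * math.pi * t / 10) for t in range(n)]
ch2 = [0.8 * math.sin(0.37 * t) for t in range(n)]
w, q = best_combination(ch1, ch2, period=10)
```

With a clean periodic rhythm in one channel and aperiodic interference in the other, the grid search pushes the weight toward the periodic channel, which is the behaviour the fQI maximization relies on.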

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 269
3934 Hybrid Wind Solar Gas Reliability Optimization Using Harmony Search under Performance and Budget Constraints

Authors: Meziane Rachid, Boufala Seddik, Hamzi Amar, Amara Mohamed

Abstract:

Today’s energy industry seeks maximum benefit with maximum reliability. In order to achieve this goal, design engineers depend on reliability optimization techniques. This work uses the harmony search (HS) meta-heuristic optimization method to solve the design optimization problem for wind-solar-gas power systems. We consider the case where redundant electrical components are chosen to achieve a desirable level of reliability. The electrical power components of the system are characterized by their cost, capacity and reliability. Reliability is considered in this work as the ability to satisfy the consumer demand, which is represented as a piecewise cumulative load curve; this definition of the reliability index is widely used for power systems. The proposed meta-heuristic seeks the optimal design of series-parallel power systems in which multiple choices of wind generators, transformers and lines are allowed from a list of products available on the market. Our approach has the advantage of allowing electrical power components with different parameters to be allocated in electrical power systems. To allow fast reliability estimation, a universal moment generating function (UMGF) method is applied. A computer program has been developed to implement the UMGF method and the HS algorithm. An illustrative example is presented.
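Harmony search itself is compact enough to sketch. The version below is a minimal continuous-variable HS minimizing a stand-in objective; the paper's actual problem is discrete (choosing redundant components from a catalogue) with the UMGF supplying the reliability evaluation, so everything here — bounds, objective, parameter values — is illustrative:

```python
import random

def harmony_search(objective, bounds, memory_size=10, iters=500,
                   hmcr=0.9, par=0.3, bw=0.05, seed=1):
    """Minimal harmony search over a box-constrained continuous domain.

    hmcr: probability of drawing each variable from the harmony memory;
    par:  probability of pitch-adjusting a memorized value by +-bw.
    """
    rng = random.Random(seed)
    rand_vec = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    # Harmony memory, kept sorted so memory[-1] is the worst harmony.
    memory = sorted((rand_vec() for _ in range(memory_size)), key=objective)
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = rng.choice(memory)[j]          # reuse a memorized value
                if rng.random() < par:             # small local adjustment
                    v = min(hi, max(lo, v + rng.uniform(-bw, bw)))
            else:
                v = rng.uniform(lo, hi)            # fresh random value
            new.append(v)
        if objective(new) < objective(memory[-1]): # replace the worst
            memory[-1] = new
            memory.sort(key=objective)
    return memory[0]

# Toy stand-in for the design problem: a smooth cost with optimum (1, 2).
cost = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
best = harmony_search(cost, bounds=[(0, 3), (0, 3)])
```

In the redundancy-allocation setting, the decision vector would instead index component choices per subsystem, and the objective would be total cost plus a penalty whenever the UMGF-computed reliability falls below the demand-satisfaction target.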

Keywords: reliability optimization, harmony search optimization (HSA), universal generating function (UMGF)

Procedia PDF Downloads 566
3933 Impact of Mixed Prey Population on Predation Potential and Food Preference of a Predaceous Ladybird, Coccinella septempunctata

Authors: Ahmad Pervez

Abstract:

We investigated the predation potential and food preference of different life stages of the predaceous ladybird Coccinella septempunctata L. (Coleoptera: Coccinellidae) using a nutritive food (mustard aphid, Lipaphis erysimi) and a toxic food (cabbage aphid, Brevicoryne brassicae). We offered the monotypic prey L. erysimi, then B. brassicae, to all life stages and found that the second, third and fourth instars and adult females of C. septempunctata daily consumed a greater number of the former prey. However, the first instars and adult males consumed both prey equally. Under choice conditions, each larva, adult male and female separately consumed a mixed aphid diet in three proportions (i.e. low:high, equal:equal and high:low densities of L. erysimi:B. brassicae). We hypothesized that the life stages of C. septempunctata would prefer L. erysimi regardless of its proportion. The laboratory experiment supported this hypothesis only at the adult level, showing high values of the β and C preference indices. However, it rejected the hypothesis at the larval level, as larvae preferred B. brassicae in certain combinations and showed no preference in a few combinations. We infer that mixing a nutritive diet into a toxic diet may compensate for the probable nutritive deficiency and/or reduce the toxicity of the toxic diet, especially for the larvae of C. septempunctata. Consumption of a high proportion of B. brassicae mixed with fewer L. erysimi suggests that a mixed diet could be better for the development of the immature stages of C. septempunctata.

Keywords: Coccinella septempunctata, predatory potential, prey preference, Lipaphis erysimi, Brevicoryne brassicae

Procedia PDF Downloads 184
3932 Petra: Simplified, Scalable Verification Using an Object-Oriented, Compositional Process Calculus

Authors: Aran Hakki, Corina Cirstea, Julian Rathke

Abstract:

Formal methods are yet to be utilized in mainstream software development due to issues of scaling and implementation costs. This work is about developing a scalable, simplified, pragmatic, formal software development method with strong correctness properties and guarantees that are easy to prove. The method aims to be easy to learn, use and apply without extensive training and experience in formal methods. Petra is proposed as an object-oriented process calculus with composable data types and sequential/parallel processes. Petra has a simple denotational semantics, which includes a definition of Correct by Construction. The aim is for Petra to be a standard that can be implemented to execute on various mainstream programming platforms such as Java. Work towards an implementation of Petra as a Java EDSL (Embedded Domain Specific Language) is also discussed.

Keywords: compositionality, formal method, software verification, Java, denotational semantics, rewriting systems, rewriting semantics, parallel processing, object-oriented programming, OOP, programming language, correct by construction

Procedia PDF Downloads 128