Search results for: computational complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3385

835 Object-Based Flow Physics for Aerodynamic Modelling in Real-Time Environments

Authors: William J. Crowther, Conor Marsh

Abstract:

Object-based flow simulation allows fast computation of arbitrarily complex aerodynamic models made up of simple objects with limited flow interactions. The proposed approach is universally applicable to objects made from arbitrarily scaled ellipsoid primitives at arbitrary aerodynamic attitude and angular rate. The use of a component-based aerodynamic modelling approach increases efficiency by allowing selective inclusion of different physics models at run-time and allows extensibility through the development of new models. Insight into the numerical stability of the model under first-order fixed-time-step integration schemes is provided by a stability analysis of the drag component. The compute cost of model components and functions is evaluated and compared against numerical benchmarks. Static model outputs are verified against theoretical expectations, and dynamic behaviour is verified using falling-plate data from the literature. The model is applied to a range of case studies to demonstrate its extensibility, ease of use, and low computational cost. Dynamically complex multi-body systems can be implemented in a transparent and efficient manner, and we successfully demonstrate large scenes with hundreds of objects interacting with diverse flow fields.
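The kind of fixed-time-step stability analysis mentioned for the drag component can be illustrated with a minimal sketch. A quadratic drag law, drag coefficient, and step sizes are illustrative assumptions here, not values from the paper:

```python
# Hedged sketch: explicit (first-order) Euler integration of a quadratic
# drag law dv/dt = -k*v*|v|. The constant k and the time steps below are
# illustrative, not the paper's data.
def euler_drag(v0, k, dt, steps):
    v = v0
    for _ in range(steps):
        v += dt * (-k * v * abs(v))   # explicit Euler update
    return v

# Linearising about v gives an effective decay rate 2*k*|v|, so the
# update v_{n+1} = v_n * (1 - dt*k*|v_n|) stays monotone only while
# dt * k * |v0| <= 1; larger steps overshoot and can diverge.
```

With, say, v0 = 10 and k = 0.1, a step of dt = 0.1 decays smoothly towards zero, while dt = 3 violates the bound and the iteration blows up, which is the behaviour a stability analysis of the drag term has to rule out.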

Keywords: aerodynamics, real-time simulation, low-order model, flight dynamics

Procedia PDF Downloads 70
834 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important for energy-efficient burning of the biomethane at the outlet. A few methods are widely used to reduce the water content of biogas, e.g. chiller/heat-exchanger-based cooling, adsorption processes such as PSA, or combinations of such approaches. A quite different method of biogas dehumidification is proposed and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in summer, or even to 0°C in winter, water vapor condenses. The water at the bottom of the gas shell can be collected and drained away. In addition, another upward shell layer is created on the outer side, after the condensate drainage point, to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in the two channels are conjugated when the thermal resistance between the layers is low. The MATLAB programming language is used for multiphysics model development, numerical calculations, and result visualization. An experimental installation on a biogas plant's vertical wall, with two additional layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results. Gas flow at the inlet/outlet, temperatures between the layers, and humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating parameters of the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant's wall allows the thicknesses of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling a defined system configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
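As a rough illustration of the counterflow arrangement described above, the classical effectiveness-NTU relation for a counterflow heat exchanger can be sketched as follows. The inputs are illustrative assumptions; the paper's MATLAB model additionally resolves condensation, which this textbook relation ignores:

```python
# Hedged sketch: effectiveness-NTU relation for an ideal counterflow
# heat exchanger. cr = Cmin/Cmax is the capacity-rate ratio; NTU values
# here are illustrative, not taken from the paper's geometry.
import math

def counterflow_effectiveness(ntu, cr):
    """Thermal effectiveness of a counterflow exchanger, 0 <= cr <= 1."""
    if cr == 1.0:
        return ntu / (1.0 + ntu)      # balanced-flow limiting case
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)
```

Effectiveness grows monotonically with NTU and approaches 1 for a long exchanger, which is why increasing the gas-shell contact length improves pre-cooling of the outgoing biogas.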

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 509
833 Experimental and Computational Analysis of Glass Fiber Reinforced Plastic Beams with Piezoelectric Fibers

Authors: Selin Kunc, Srinivas Koushik Gundimeda, John A. Gallagher, Roselita Fragoudakis

Abstract:

This study investigates the behavior of Glass Fiber Reinforced Plastic (GFRP) laminated beams additionally reinforced with piezoelectric fibers. The electromechanical behavior of piezoelectric materials coupled with high-strength/low-weight GFRP laminated beams can have significant applications in a wide range of industries. Energy scavenging through mechanical vibrations is the focus of this study, and possible applications can be seen in the automotive industry. The study examines the behavior of such composite laminates using Classical Lamination Theory (CLT) under three-point bending conditions. Fiber orientation is optimized for the stiffness and deflection that yield maximum energy output. Finite element models built in ABAQUS/CAE are verified through experimental testing. The stacking sequences examined are [0°]s, [0°/45°]s, and [45°/-45°]s. Results show the superiority of the stacking sequence [0°/45°]s, providing higher strength at a lower weight and maximum energy output. Furthermore, laminated GFRP beams additionally reinforced with piezoelectric fibers can be used under bending not only to replace metallic components while providing similar strength at a lower weight but also to provide an energy output.
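The CLT calculation behind such stacking-sequence comparisons can be sketched as below. The glass/epoxy constants and ply thickness are generic illustrative values, not the study's data; the sketch assembles the standard A, B, D stiffness matrices from transformed ply stiffnesses:

```python
# Hedged sketch of Classical Lamination Theory (CLT): A/B/D matrices
# for a laminate. Material constants below are generic glass/epoxy
# values, purely for illustration.
import math

def qbar(E1, E2, G12, nu12, theta_deg):
    """Transformed reduced stiffness matrix of one ply at angle theta."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    Q11, Q22, Q12, Q66 = E1 / d, E2 / d, nu12 * E2 / d, G12
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    Qb11 = Q11*c**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*s**4
    Qb22 = Q11*s**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*c**4
    Qb12 = (Q11 + Q22 - 4*Q66)*s**2*c**2 + Q12*(s**4 + c**4)
    Qb16 = (Q11 - Q12 - 2*Q66)*s*c**3 + (Q12 - Q22 + 2*Q66)*s**3*c
    Qb26 = (Q11 - Q12 - 2*Q66)*s**3*c + (Q12 - Q22 + 2*Q66)*s*c**3
    Qb66 = (Q11 + Q22 - 2*Q12 - 2*Q66)*s**2*c**2 + Q66*(s**4 + c**4)
    return [[Qb11, Qb12, Qb16], [Qb12, Qb22, Qb26], [Qb16, Qb26, Qb66]]

def abd(angles, t_ply, props):
    """A, B, D matrices for plies listed bottom to top."""
    n = len(angles)
    z = [(-n / 2 + k) * t_ply for k in range(n + 1)]   # interface heights
    A = [[0.0] * 3 for _ in range(3)]
    B = [[0.0] * 3 for _ in range(3)]
    D = [[0.0] * 3 for _ in range(3)]
    for k, ang in enumerate(angles):
        Qb = qbar(*props, ang)
        for i in range(3):
            for j in range(3):
                A[i][j] += Qb[i][j] * (z[k + 1] - z[k])
                B[i][j] += Qb[i][j] * (z[k + 1] ** 2 - z[k] ** 2) / 2
                D[i][j] += Qb[i][j] * (z[k + 1] ** 3 - z[k] ** 3) / 3
    return A, B, D
```

For a symmetric layup such as [0°/45°]s, the coupling matrix B vanishes, which is what makes the bending response of these laminates tractable under three-point loading.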

Keywords: classical lamination theory (CLT), energy scavenging, glass fiber reinforced plastics (GFRP), piezoelectric fibers

Procedia PDF Downloads 282
832 Modified Model for UV-Laser Corneal Ablation

Authors: Salah Hassab Elnaby, Omnia Hamdy, Aziza Ahmed Hassan, Salwa Abdelkawi, Ibrahim Abdelhalim

Abstract:

Laser corneal reshaping has been proposed as a successful treatment for many refraction disorders. However, some physical and chemical aspects of the laser's interaction with corneal tissue are still not fully explained. Therefore, different computational and mathematical models have been implemented to predict the depth of the ablated channel and to calculate the ablation threshold and the local temperature rise. In the current paper, we present a modified model that aims to answer some of the open questions about the ablation threshold, the ablation rate, and the physical and chemical mechanisms of the process. The proposed model consists of three parts. The first part deals with possible photochemical reactions between the incident photons and various components of the cornea (collagen, water, etc.). Such photochemical reactions may end in photo-ablation or merely in the electronic excitation of molecules. A chemical reaction is then responsible for the ablation threshold. Finally, another chemical reaction produces fragments that can be cleared away. The model takes all processes into account simultaneously, with different probabilities. Moreover, the effect of applying different laser wavelengths studied previously, namely the common excimer laser (193 nm) and the solid-state lasers (213 nm and 266 nm), has been investigated. Despite the success and ubiquity of the ArF laser, the presented results reveal that a carefully designed 213-nm laser gives the same results with fewer operational drawbacks. Moreover, the use of a mode-locked laser could also decrease the risk of heat generation and diffusion.

Keywords: UV lasers, mathematical model, corneal ablation, photochemical ablation

Procedia PDF Downloads 54
831 Effects of Inlet Distorted Flows on the Performance of an Axial Compressor

Authors: Asad Islam, Khalid Parvez

Abstract:

Compressor fans in modern aircraft engines are of considerable importance, as they provide the majority of the thrust required by the aircraft. Their challenging operating environment frequently subjects them to non-uniform inflow conditions. These conditions can arise from flight operating requirements such as take-off and landing, wake interference from the aircraft fuselage, or cross-wind conditions, so the highly maneuverable flight regimes of fighter aircraft affect the overall performance of the engine. The flow in the compressor of an aircraft engine is highly sensitive to the adverse pressure gradients caused by different flow orientations of the aircraft, and it is therefore prone to unstable operation. This paper presents a study of the response of an axial compressor to inlet flow orientations ranging from 0 to 15 degrees. For this purpose, NASA Rotor 37 was selected, and a CFD mesh was developed. The compressor characteristic map was generated for the design conditions of a pressure ratio of 2.106 with the rotor operating at a rotational velocity of 17188.7 rpm, using the ANSYS-CFX® CFD environment. A grid study was performed to assess the effect of the mesh on the computational solution. The mesh giving the best results, when validated against the available NASA experimental results, was then used for the distortion analysis. The flow in the inlet nozzle was given angle orientations ranging from 0 to 15 degrees. The CFD results are analyzed and discussed with respect to stall margin and the flow separations caused by the induced distortions.

Keywords: axial compressor, distortions, angle, CFD, ANSYS-CFX®, BladeGen®

Procedia PDF Downloads 428
830 Intelligent Agent-Based Model for the 5G mmWave O2I Technology Adoption

Authors: Robert Joseph M. Licup

Abstract:

Deploying the fifth-generation (5G) mobile system over mmWave frequencies is the new solution to the requirement for higher bandwidth readily available to all users. The usage patterns of mobile users have moved towards work-from-home and online-class set-ups because of the pandemic. Previous mobile technologies can no longer meet the high speed and bandwidth requirements, given the drastic shift of transactions to the home. 5G cellular networks exploit the underutilized millimeter-wave (mmWave) frequencies, which support multi-gigabit-per-second (Gbps) transmission. However, owing to its short wavelengths, mmWave transmission faces technical challenges such as high path loss, directivity, blockage sensitivity, and narrow beamwidth. Different tools, technologies, and scenarios are explored to effectively support network design, accurate channel modeling, implementation, and deployment. A big challenge remains, however, in how consumers will adopt this solution and maximize the benefits offered by 5G technology. This research proposes to study the intricacies of technology diffusion, individual attitudes, and behaviors, and how technology adoption can be attained. An agent-based simulation model shaped by actual applications, the technology solution, and related literature was used to arrive at a computational model. The research examines the different attributes, factors, and intricacies that can affect each identified agent's move towards technology adoption.

Keywords: agent-based model, AnyLogic, 5G O2I, 5G mmWave solutions, technology adoption

Procedia PDF Downloads 78
829 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams

Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina

Abstract:

It is estimated that each year one death every 10 seconds worldwide (about 2 million deaths) is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae, such as pulmonary fibrosis, that compromise patients' quality of life. Evaluations of these sequelae are usually performed subjectively by radiology specialists, and subjective evaluation is prone to inter- and intra-observer variability. X-ray examination is the diagnostic imaging method most commonly used for monitoring patients diagnosed with TB, and the least costly for the institution. The application of computational algorithms is therefore of utmost importance for a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is to use computer algorithms to quantify pulmonary impairment before and after treatment of patients with pulmonary TB. The X-ray images of 10 patients with a TB diagnosis confirmed by sputum smear examination were studied. First, the total lung area was segmented (posteroanterior and lateral views); the segmentation was then targeted at the region compromised by pulmonary sequelae. Through morphological operators and the application of a noise-reduction tool, it was possible to determine the compromised lung volume. The largest pre- to post-treatment difference found was 85.85%, and the smallest was 54.08%.
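The final quantification step, once lung and lesion regions have been segmented, reduces to computing the fraction of lung pixels that fall inside the compromised region. A minimal sketch with toy binary masks follows; the study's actual pipeline uses morphological operators and noise reduction that are not reproduced here:

```python
# Hedged sketch: percentage of segmented lung area compromised by
# fibrosis. Masks are toy 0/1 grids standing in for segmented X-rays.
def compromised_fraction(lung_mask, lesion_mask):
    """Both masks are 2D lists of 0/1; lesion_mask is assumed to lie
    inside lung_mask. Returns the compromised percentage."""
    lung = sum(sum(row) for row in lung_mask)
    lesion = sum(
        sum(l and f for l, f in zip(lung_row, lesion_row))
        for lung_row, lesion_row in zip(lung_mask, lesion_mask)
    )
    return 100.0 * lesion / lung if lung else 0.0
```

Comparing this percentage between pre- and post-treatment exams yields the kind of objective difference (e.g. the 54-86% range reported above) that subjective reading cannot provide.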

Keywords: algorithm, radiology, tuberculosis, x-rays exam

Procedia PDF Downloads 388
828 In vitro Study of Inflammatory Gene Expression Suppression of Strawberry and Blackberry Extracts

Authors: Franco Van De Velde, Debora Esposito, Maria E. Pirovani, Mary A. Lila

Abstract:

The physiology of various inflammatory diseases is a complex process mediated by inflammatory and immune cells such as macrophages and monocytes. Chronic inflammation, as observed in many cardiovascular and autoimmune disorders, occurs when the low-grade inflammatory response fails to resolve with time. Because of the complexity of chronic inflammatory disease, major efforts have focused on identifying novel anti-inflammatory agents and dietary regimes that prevent the pro-inflammatory process at the early stage of gene expression of key pro-inflammatory mediators and cytokines. The ability of extracts of three blackberry cultivars ('Jumbo', 'Black Satin' and 'Dirksen') and one strawberry cultivar ('Camarosa') to inhibit four well-known genetic biomarkers of inflammation, namely inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (Cox-2), interleukin-1β (IL-1β) and interleukin-6 (IL-6), in an in vitro lipopolysaccharide-stimulated murine RAW 264.7 macrophage model was investigated. Moreover, the effect of these extracts on intracellular reactive oxygen species (ROS) and nitric oxide (NO) production was assessed. Assays were conducted at a crude extract concentration of 50 µg/mL, an amount easily achievable in the gastrointestinal tract after berry consumption. The mRNA expression levels of Cox-2 and IL-6 were reduced consistently (by more than 30%) by extracts of 'Jumbo' and 'Black Satin' blackberries. Strawberry extracts showed a high reduction in mRNA expression of IL-6 (more than 65%) and a moderate reduction in mRNA expression of Cox-2 (more than 35%). This behavior mirrors the intracellular ROS production of the LPS-stimulated RAW 264.7 macrophages after treatment with the blackberry 'Black Satin' and 'Jumbo' and strawberry 'Camarosa' extracts, suggesting that phytochemicals from these fruits may play a role in health maintenance by reducing oxidative stress. On the other hand, effective inhibition of the gene expression of IL-1β and iNOS was not observed with any of the blackberry and strawberry extracts. However, suppression of NO production in the activated macrophages of 5-25% was observed with the 'Jumbo' and 'Black Satin' blackberry extracts and the 'Camarosa' strawberry extract, suggesting an NO-suppressing property of the phytochemicals of these fruits. All these results suggest the potential beneficial effects of the studied berries as functional foods with antioxidant and anti-inflammatory roles. Moreover, the underlying role of phytochemicals from these fruits in protection against the inflammatory process deserves further exploration.

Keywords: cyclooxygenase-2, functional foods, interleukin-6, reactive oxygen species

Procedia PDF Downloads 213
827 Computer Simulation to Investigate Magnetic and Wave-Absorbing Properties of Iron Nanoparticles

Authors: Chuan-Wen Liu, Min-Hsien Liu, Chung-Chieh Tai, Bing-Cheng Kuo, Cheng-Lung Chen, Huazhen Shen

Abstract:

A recent surge in research on magnetic radar-absorbing materials (RAMs) has presented researchers with new opportunities and challenges. This study was performed to gain a better understanding of the wave-absorbing phenomenon of magnetic RAMs. First, we hypothesized that the absorbing phenomenon depends on the particle shape. Using the Materials Studio program and the micro-dot magnetic dipoles (MDMD) method, we obtained results for magnetic RAMs that support this hypothesis. The total MDMD energy of disk-like iron particles was greater than that of spherical iron particles. In addition, particulate aggregation decreases the wave absorbance, according to both experimental and computational data. Combining molecular dynamics simulation results with the theory of magnetization of magnetic dots, we investigated the magnetic properties of iron materials with different particle shapes and degrees of aggregation under external magnetic fields. The MDMD energy of the materials under magnetic fields of various strengths was simulated. Our results suggest that disk-like iron particles exhibit stronger magnetization than spherical iron particles, which can be correlated with the magnetic wave-absorbing property of the iron material. In conclusion, this study may be important for explaining the wave-absorbing characteristics of magnetic RAMs.

Keywords: wave-absorbing property, magnetic material, micro-dot magnetic dipole, particulate aggregation

Procedia PDF Downloads 465
826 Accelerated Molecular Simulation: A Convolution Approach

Authors: Jannes Quer, Amir Niknejad, Marcus Weber

Abstract:

Computational drug design is often based on molecular dynamics simulations of molecular systems. Molecular dynamics can be used to simulate, e.g., the binding and unbinding of a small drug-like molecule with respect to the active site of an enzyme or a receptor. However, the time-scale of the overall binding event is many orders of magnitude longer than the time-scale of the simulation, so there is a need to speed up molecular simulations. To do so, the molecular dynamics trajectories have to be "steered" out of the local minimizers of the potential energy surface – the so-called metastabilities – of the molecular system. Increasing the kinetic energy (temperature) is one way to accelerate the simulated processes, but with temperature the entropy of the molecular system increases too, and this kind of steering is not directed enough to drive the molecule out of the minimum towards the saddle point. In this article, we present a new mathematical idea for changing a potential energy surface in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to recover the unsteered transition behaviour from a steered simulation, we propose the use of extrapolation methods. Finally, we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers towards saddle points.
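The effect of the convolution idea named in the title can be sketched in one dimension: smoothing a double-well potential with a Gaussian kernel lowers the barrier between the two metastable minima while keeping their locations roughly fixed. The grid, well shape, and kernel width below are illustrative choices, not the paper's construction:

```python
# Hedged sketch: Gaussian convolution of a 1-D double-well potential.
# Smoothing lowers the barrier separating the two metastable states.
import math

def gaussian_kernel(sigma, dx, half_width):
    ks = [math.exp(-(i * dx) ** 2 / (2 * sigma ** 2))
          for i in range(-half_width, half_width + 1)]
    s = sum(ks)
    return [k / s for k in ks]          # normalised discrete kernel

def convolve(values, kernel):
    h = len(kernel) // 2
    n = len(values)
    out = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - h, 0), n - 1)   # clamp at the edges
            acc += k * values[idx]
        out.append(acc)
    return out

dx = 0.01
xs = [-2.0 + i * dx for i in range(401)]
v = [(x ** 2 - 1.0) ** 2 for x in xs]             # double well: barrier V(0) = 1
v_smooth = convolve(v, gaussian_kernel(0.3, dx, 100))
```

On the smoothed surface, trajectories escape the (now shallower) wells faster; extrapolation methods of the kind proposed above are then needed to recover the transition behaviour of the original, unsmoothed potential.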

Keywords: extrapolation, Eyring-Kramers, metastability, multilevel sampling

Procedia PDF Downloads 300
825 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis of the historical development of negative emotive intensifiers in the Hungarian language was carried out using NLP methods. Intensifiers are linguistic elements that modify or reinforce a gradable property of the lexical unit they apply to. Intensifiers therefore appear with other lexical items such as adverbs, adjectives, and verbs, and infrequently with nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements of the language. From a linguistic point of view, a particularly interesting special group is the so-called negative emotive intensifiers, which, on their own, without context, have semantic content associated with negative emotion but which, in particular cases, may function as intensifiers (e.g. borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers have scarcely been examined in the literature using large historical corpora and NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional linguistic annotation (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame.
Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements to grammatical ones and slowly undergo a delexicalization process (their negative content diminishes over time). Moreover, the analysis points out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
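The frequency-counting step of such a pipeline can be sketched as follows. The lexicon entries and the example sentence are illustrative; the study's actual pipeline uses its own compiled lexicon and the 'magyarlanc' toolkit for lemmatisation, which this sketch omits:

```python
# Hedged sketch: counting occurrences of a small lexicon of negative
# emotive intensifiers in raw corpus text. Lexicon entries below are
# illustrative examples, not the study's full lexicon.
import re
from collections import Counter

LEXICON = {"borzasztóan", "rettenetesen", "szörnyen"}  # assumed entries

def intensifier_counts(text):
    # \w is Unicode-aware in Python 3, so accented Hungarian letters match.
    tokens = re.findall(r"\w+", text.lower())
    return Counter(t for t in tokens if t in LEXICON)

counts = intensifier_counts(
    "Ez borzasztóan jó, sőt rettenetesen jó volt. Borzasztóan!"
)
```

Aggregating such counts per decade across the 1772-2010 corpus is what makes frequency trends, and hence the grammaticalization trajectory, visible.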

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 201
824 Using Manipulating Urban Layouts to Enhance Ventilation and Thermal Comfort in Street Canyons

Authors: Su Ying-Ming

Abstract:

The high density of high-rise buildings in urban areas gradually worsens the urban heat island effect. This study focuses on the relationship between urban layout and ventilation comfort in street canyons. It takes the Songjiang Nanjing Rd. area of Taipei, Taiwan as an example, evaluating the wind environment comfort index by field measurement and Computational Fluid Dynamics (CFD) to improve both the quality and quantity of the environment. Different factors, including street block size, building width, street width ratio, and wind direction, were used to assess the ventilation potential. The environmental wind field was measured with environmental testing equipment (Testo 480). Block size, building width, street width ratio, and wind direction were evaluated under the condition of constant floor area, with CFD simulation used to adjust the research methods for optimizing the regional wind environment. The results show that building width influences the efficiency of outdoor ventilation and that a larger street width improves ventilation efficiency. The study also found that block width is closely related to the H/D and PR values. Furthermore, a significant relationship was found between alterations of street block geometry and outdoor comfort.

Keywords: urban ventilation path, ventilation efficiency indices, CFD, building layout

Procedia PDF Downloads 367
823 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data into the analysis of a current clinical trial; however, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm and for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and negative binomial models. We conducted an extensive simulation study to assess the performance of these Bayesian approaches, and we illustrate them on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms differ, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters differ, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
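The core of the power-prior idea can be sketched for the simplest case. With Poisson counts and a conjugate Gamma initial prior, the historical counts enter the posterior discounted by a weight delta in [0, 1]; the hyperparameters below are illustrative, and the modified power prior of the abstract additionally treats delta as random, an extension omitted here:

```python
# Hedged sketch: fixed-delta power prior for a Poisson rate with a
# conjugate Gamma(a0, b0) initial prior. Historical counts y_hist are
# discounted by delta; delta = 0 ignores them, delta = 1 pools fully.
def poisson_power_prior_posterior(y_current, y_hist, delta, a0=0.5, b0=0.001):
    """Returns (shape, rate) of the Gamma posterior for the Poisson mean."""
    shape = a0 + delta * sum(y_hist) + sum(y_current)
    rate = b0 + delta * len(y_hist) + len(y_current)
    return shape, rate
```

Varying delta between 0 and 1 trades off borrowing strength against robustness to prior-data conflict, which is exactly the tension behind the TIE-rate comparison reported above.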

Keywords: count data, meta-analytic prior, negative binomial, Poisson

Procedia PDF Downloads 91
822 Biomechanical Study of a Type II Superior Labral Anterior to Posterior Lesion in the Glenohumeral Joint Using Finite Element Analysis

Authors: Javier A. Maldonado E., Duvert A. Puentes T., Diego F. Villegas B.

Abstract:

A SLAP (Superior Labral Anterior to Posterior) lesion involves the labrum, causing pain and mobility problems in the glenohumeral joint. This injury is common in athletes practicing sports that require throwing, or in those who receive traumatic impacts to the shoulder area. This paper determines the biomechanical behavior of the soft tissues of the glenohumeral joint in the presence of a type II SLAP lesion. This pathology is characterized by a tear in the superior labrum, which is simulated in a 3D model of the shoulder joint. A 3D model of the glenohumeral joint was obtained using the free software Slice. A finite element analysis was then performed using general-purpose software to simulate a compression test with external rotation. First, the model was validated against a previous study, assuming a healthy shoulder joint. Once the initial model was validated, the labral lesion was built using CAD software and the same test was run again. The results obtained were the stress and strain distributions of the synovial capsule and the injured labrum. ANOVA performed on the healthy and injured glenohumeral joints found significant differences between them. This study will help orthopedic surgeons understand the biomechanics involved in this type of lesion, as well as the other surrounding structures affected by loading the injured joint.

Keywords: biomechanics, computational model, finite elements, glenohumeral joint, superior labral anterior to posterior lesion

Procedia PDF Downloads 183
821 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions

Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal

Abstract:

We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters, adapted to the conditions existing in Morocco and in other developing countries. Although several simplifications are made, the quality of the model results is preserved. A further important advantage of the model is that it allows the uncertainty of the emission rates to be calculated with respect to each input parameter. In the dispersion part of the model, an improved line-source model has been developed, implemented, and tested against a reference solution. It improves on the accuracy of previous line-source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of the line-source sections; these errors are canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, combining a discretized source with the analytical line-source formulas markedly reduces the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
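The discretized-source side of such a scheme can be sketched as follows: a finite line source is split into N ground-level point sources, each contributing a Gaussian-plume term at the receptor. The dispersion parameters sigma_y and sigma_z are taken as simple power laws purely for illustration and are not the model's parameterisation:

```python
# Hedged sketch: discretised line source as a sum of ground-level
# Gaussian point-source plumes. Wind blows along +x; the line source
# runs along y in [0, length]. Dispersion power laws are illustrative.
import math

def line_source_concentration(q_per_m, length, n, x_r, y_r, u):
    """Ground-level concentration at receptor (x_r, y_r, 0).
    q_per_m: emission rate per metre; u: wind speed."""
    seg = length / n
    total = 0.0
    for i in range(n):
        y_s = (i + 0.5) * seg                 # segment midpoint
        if x_r <= 0:
            continue                          # receptor must be downwind
        sigma_y = 0.08 * x_r ** 0.9           # assumed sigma power laws
        sigma_z = 0.06 * x_r ** 0.9
        total += (q_per_m * seg / (math.pi * u * sigma_y * sigma_z)
                  * math.exp(-(y_r - y_s) ** 2 / (2 * sigma_y ** 2)))
    return total
```

Refining the segmentation converges quickly for receptors away from the line ends, which is consistent with the observation above that the largest errors sit at section ends and cancel between adjacent sections of a road network.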

Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport

Procedia PDF Downloads 417
820 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community Resilience has gained a significant attention due to the recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter involved in the resilience assessment is the 'downtime'. Downtime is the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires a lot of inputs and resources that are not always obtainable. The uncertainties in the downtime estimation are usually handled using probabilistic methods, which necessitates acquiring large historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk down surveys, which collect data in a linguistic or a numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building’s vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g. year of construction, structural system, site seismicity, etc.). Then, a fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient to estimate the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3). 
In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to already-defined component repair times available in the literature. DT2 and DT3 are estimated using the REDi™ Guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy; functional recovery; and full recovery. Future work is aimed at improving the current methodology to move from the downtime to the resilience of buildings. This will provide a simple tool that can be used by the authorities for decision making.
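The hierarchical fuzzy step described above can be sketched in a few lines of Python. The membership shapes, rule consequents and repair-time ranges below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising a->b, falling b->c."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def estimate_dt1(damageability, t=np.linspace(0.0, 365.0, 1000)):
    """Mamdani-style fuzzy estimate of repair downtime DT1 in days from
    a damageability score in [0, 1]. Membership shapes and repair-time
    sets are illustrative placeholders, not the paper's calibration."""
    # Rule strengths: slight / moderate / severe damage.
    w_slight = tri(damageability, -0.4, 0.0, 0.5)
    w_moderate = tri(damageability, 0.2, 0.5, 0.8)
    w_severe = tri(damageability, 0.5, 1.0, 1.4)
    # Clip each fuzzy repair-time consequent by its rule strength.
    mu = np.maximum.reduce([
        np.minimum(w_slight, tri(t, -30.0, 0.0, 60.0)),      # short
        np.minimum(w_moderate, tri(t, 30.0, 120.0, 210.0)),  # medium
        np.minimum(w_severe, tri(t, 150.0, 365.0, 580.0)),   # long
    ])
    # Centroid defuzzification of the aggregated output set.
    return float(np.sum(mu * t) / np.sum(mu))

dt_low = estimate_dt1(0.1)   # lightly damaged building
dt_high = estimate_dt1(0.9)  # severely damaged building
```

A higher damageability score activates the longer repair-time sets more strongly, so the defuzzified downtime grows monotonically with damage.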

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 136
819 Numerical Investigation of the Bio-fouling Roughness Effect on Tidal Turbine

Authors: O. Afshar

Abstract:

Unlike other renewable energy sources, tidal current energy is an extremely reliable, predictable and continuous energy source, as the current pattern and speed can be predicted throughout the year. A key concern associated with tidal turbines is their long-term reliability when operating in the hostile marine environment. Bio-fouling changes the physical shape and roughness of turbine components, hence altering the overall turbine performance. This paper employs the Computational Fluid Dynamics (CFD) method to quantify the effects of this problem based on the obtained flow field information. The simulation is carried out on a NACA 63-618 aerofoil. The Reynolds Averaged Navier-Stokes (RANS) equations with the Shear Stress Transport (SST) turbulence model are used to simulate the flow around the model. Different levels of fouling are studied on the 2D aerofoil surface with quantified fouling height and density. In terms of lift and drag coefficients, the numerical results show good agreement with the experiment, which was carried out in a wind tunnel. The numerical results indicate that an increase in fouling thickness causes an increase in the drag coefficient and a reduction in the lift coefficient. Moreover, the pressure gradient gradually becomes adverse as the height of fouling increases. In addition, the turbulent kinetic energy contours reveal that it increases with fouling height and extends into the wake due to flow separation.
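The lift and drag coefficients compared above are obtained by normalising the section forces by the dynamic pressure and reference area. A minimal sketch with invented force values (not the paper's CFD results) illustrates the reported trend of fouling reducing lift and increasing drag:

```python
def aero_coefficients(lift, drag, rho, v, chord, span=1.0):
    """Lift and drag coefficients from section forces (N per unit span),
    normalised by dynamic pressure and reference area chord*span."""
    q = 0.5 * rho * v**2          # dynamic pressure, Pa
    area = chord * span           # reference area, m^2
    return lift / (q * area), drag / (q * area)

# Illustrative seawater values for a clean vs a fouled section at the
# same inflow condition (forces invented, not the paper's results).
cl_clean, cd_clean = aero_coefficients(1200.0, 40.0, rho=1025.0, v=2.0, chord=1.0)
cl_fouled, cd_fouled = aero_coefficients(950.0, 85.0, rho=1025.0, v=2.0, chord=1.0)
```

The resulting drop in lift-to-drag ratio is what degrades the power capture of the fouled blade section.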

Keywords: tidal energy, lift coefficient, drag coefficient, roughness

Procedia PDF Downloads 362
818 Social Networks in Business: The Complex Concept of Wasta and the Impact of Islam on the Perception of This Practice

Authors: Sa'ad Ali

Abstract:

This study explores wasta as an example of a social network and how it impacts business practice in the Arab Middle East, drawing links with social network impact in different regions of the world. In doing so, particular attention will be paid to the socio-economic and cultural influences on business practice. In exploring relationships in business, concepts such as social network analysis, social capital and group identity are used to explore the different forms of social networks and how they influence business decisions and practices in the regions and countries where they prevail. The use of social networks to achieve objectives is known as guanxi in China, wasta in the Arab Middle East and blat in ex-Soviet countries. Wasta can be defined as favouritism based on tribal and family affiliation and is a widespread practice that has a substantial impact on political, social and business interactions in the Arab Middle East. Within the business context, it is used in several ways, such as to secure a job or promotion or to cut through bureaucracy in government interactions. The little research available is fragmented, and most studies reveal a negative attitude towards its usage in business. Paradoxically, while wasta is widely practised, people from the Arab Middle East often deny its influence. Moreover, despite the regular exhibition of a negative opinion on the practice of wasta, it can also be a source of great pride. This paper addresses this paradox by conducting a positional literature review, exploring the current literature on wasta and identifying how the identified paradox can be explained. The findings highlight how wasta, to a large extent, has been treated as an umbrella concept, whilst it is a highly complex practice which has evolved from intermediary wasta to intercessory wasta and therefore from bonding social capital relationships to more bridging social capital relationships. 
In addition, the research found that Islam, as the predominant religion in the region and the main source of ethical guidance for the majority of people from the region, plays a substantial role in this paradox. Specifically, it is submitted that wasta can be viewed positively in Islam when it is practised to aid others without breaking Islamic ethical guidelines, whilst it can be viewed negatively when it is used in contradiction with the teachings of Islam. As such, the unique contribution to knowledge of this study is that it ties together the fragmented literature on wasta, highlighting and helping us understand its complexity. In addition, it sheds light on the role of Islam in wasta practices, aiding our understanding of the paradoxical nature of the practice.

Keywords: Islamic ethics, social capital, social networks, Wasta

Procedia PDF Downloads 120
817 Exploring the Vocabulary and Grammar Advantage of US American over British English Speakers at Age 2;0

Authors: Janine Just, Kerstin Meints

Abstract:

The research aims to compare vocabulary size and grammatical development between US American English- and British English-speaking children at age 2;0. As there is evidence that precocious children with large vocabularies develop grammar skills earlier than their typically developing peers, it was investigated if this also holds true across varieties of English. Thus, if US American children start to produce words earlier than their British counterparts, this could mean that US children are also at an advantage in the early developmental stages of acquiring grammar. This research employs a British English adaptation of the MacArthur-Bates CDI Words and Sentences (Lincoln Toddler CDI) to compare vocabulary and grammar scores with the updated US Toddler CDI norms. At first, the Lincoln TCDI was assessed for its concurrent validity with the Preschool Language Scale (PLS-5 UK). This showed high correlations for the vocabulary and grammar subscales between the tests. In addition, the frequency of the Toddler CDI’s words was also compared using American and British English corpora of adult spoken and written language. A paired-samples t-test found a significant difference in word frequency between the British and the American CDI, demonstrating that the TCDI’s words were indeed of higher frequency in British English. We then compared language and grammar scores between US (N = 135) and British children (N = 96). A two-way between-groups ANOVA examined if the two samples differed in terms of SES (i.e. maternal education) by investigating the impact of SES and country on vocabulary and sentence complexity. The two samples did not differ in terms of maternal education, as the interaction effects between SES and country were not significant. In most cases, scores were not significantly different between US and British children, for example, for overall word production and most grammatical subscales (i.e. 
use of words, over-regularizations, complex sentences, word combinations). However, in-depth analysis showed that US children were significantly better than British children at using some noun categories (i.e. people, objects, places) and several categories marking early grammatical development (i.e. pronouns, prepositions, quantifiers, helping words), although the effect sizes were small. Significant differences for grammar were also found for irregular word forms and progressive tense suffixes, where US children were more advanced, but again with small effect sizes. In sum, while differences exist in vocabulary and grammar ability, favouring US children, effect sizes were small. It can be concluded that most British children are ‘catching up’ with their US American peers at age 2;0. Implications of this research will be discussed.
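The paired-samples t-test used for the corpus frequency comparison reduces to a one-sample test on the per-word differences. A small sketch with invented per-million frequencies (not the study's corpus counts):

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic for word-frequency pairs (one pair
    per CDI word: frequency in the British vs the American corpus).
    Returns (t, degrees of freedom)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Toy per-million frequencies for six CDI words (invented values):
british = [410.0, 122.0, 98.0, 310.0, 57.0, 201.0]
american = [380.0, 115.0, 90.0, 300.0, 50.0, 188.0]
t_stat, df = paired_t(british, american)
```

A positive t statistic here corresponds to the reported direction of the effect: the TCDI words being of higher frequency in the British corpus.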

Keywords: first language acquisition, grammar, parent report instrument, vocabulary

Procedia PDF Downloads 252
816 Implications of Meteorological Parameters in Decision Making for Public Protective Actions during a Nuclear Emergency

Authors: M. Hussain, K. Mahboob, S. Z. Ilyas, S. Shaheen

Abstract:

Plume dispersion modeling is a computational procedure to establish a relationship between emissions, meteorology, atmospheric concentrations, deposition and other factors. The emission characteristics (stack height, stack diameter, release velocity, heat content, chemical and physical properties of the gases/particles released, etc.), terrain (surface roughness, local topography, nearby buildings) and meteorology (wind speed, stability, mixing height, etc.) are required for modeling the plume dispersion and estimating ground and air concentrations. During the early phase of the Fukushima accident, plume dispersion modeling was performed and decisions were taken on the implementation of protective measures. Differences in the estimated results and in the decisions made by different countries for taking protective actions created concern in the local and international communities regarding the exact identification of the safe zone. The current study highlights the importance of accurate weather data availability, a scientific approach to decision making for urgent protective actions, and a compatible, harmonized approach to plume dispersion modeling during a nuclear emergency. As a case study, the influence of meteorological data on plume dispersion modeling and the decision-making process has been analyzed.
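As an illustration of the kind of calculation involved, a textbook Gaussian plume formula with ground reflection can be sketched as follows. The source term and dispersion widths are invented; a real assessment would derive sy and sz from downwind distance and atmospheric stability class:

```python
import math

def gaussian_plume(q, u, y, z, h, sy, sz):
    """Ground-reflected Gaussian plume concentration at a receptor with
    crosswind offset y and height z, for source strength q (g/s), wind
    speed u (m/s) and effective release height h (m). The dispersion
    widths sy, sz (m) would normally come from downwind distance and
    stability class; fixed values are assumed here."""
    lateral = math.exp(-y**2 / (2.0 * sy**2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sz**2))
                + math.exp(-(z + h) ** 2 / (2.0 * sz**2)))  # reflection
    return q / (2.0 * math.pi * u * sy * sz) * lateral * vertical

# Centreline ground-level concentration for a 50 m release height
# (all numbers illustrative).
c = gaussian_plume(q=100.0, u=5.0, y=0.0, z=0.0, h=50.0, sy=70.0, sz=35.0)
```

The sensitivity of c to u, sy and sz is precisely why different meteorological inputs can lead different countries to different safe-zone estimates.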

Keywords: decision making process, radiation doses, nuclear emergency, meteorological implications

Procedia PDF Downloads 161
815 The Estimation Method of Inter-Story Drift for Buildings Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

Seismic-response-based structural health monitoring has been performed to reduce seismic damage. The inter-story drift ratio, the major index for seismic capacity assessment, is employed for estimating the seismic damage of buildings. Meanwhile, seismic response analysis to estimate the structural responses of buildings demands significantly high computational cost due to the increasing number of high-rise and large buildings. To estimate the inter-story drift ratio of buildings under earthquakes efficiently, this paper suggests an estimation method for the inter-story drift of buildings using an artificial neural network (ANN). In the method, the radial basis function neural network (RBFNN) is integrated with an optimization algorithm to optimize its variables through evolutionary learning, referred to as the evolutionary radial basis function neural network (ERBFNN). The method estimates the inter-story drift without seismic response analysis when buildings are subjected to new earthquakes. The effectiveness of the estimation method is verified through a simulation using a multi-degree-of-freedom system.
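The RBFNN core of such a method can be sketched as Gaussian basis activations followed by a linear output layer fitted by least squares. In the ERBFNN the centers and widths would be evolved by the optimization algorithm; here they are fixed, and the drift data are synthetic stand-ins for response analyses:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centers, width):
    """Gaussian radial basis activations, one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width**2))

# Toy mapping from two ground-motion features (say, PGA and predominant
# period, both scaled to [0, 1]) to a peak inter-story drift ratio.
# Target function and samples are synthetic stand-ins for the seismic
# response analyses the method is meant to avoid at run time.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.01 * X[:, 0] * (1.0 + 0.5 * np.sin(3.0 * X[:, 1]))

centers = rng.uniform(0.0, 1.0, size=(15, 2))  # in the ERBFNN these
width = 0.3                                    # would be evolved
Phi = rbf_design(X, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # linear output layer

# Predict drift for "new earthquakes" without response analysis.
X_new = rng.uniform(0.0, 1.0, size=(50, 2))
y_true = 0.01 * X_new[:, 0] * (1.0 + 0.5 * np.sin(3.0 * X_new[:, 1]))
y_pred = rbf_design(X_new, centers, width) @ w
rmse = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```

Once the network is trained, a prediction is a single matrix product, which is the source of the computational saving over full response analysis.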

Keywords: structural health monitoring, inter-story drift ratio, artificial neural network, radial basis function neural network, genetic algorithm

Procedia PDF Downloads 304
814 Form-Finding of Tensioned Fabric Structure in Mathematical Monkey Saddle Model

Authors: Yee Hooi Min, M. N. Abdul Hadi, A. G. Kay Dora

Abstract:

Form-finding has to be carried out for a tensioned fabric structure in order to determine the initial equilibrium shape under prescribed support conditions and pre-stress pattern. Tensioned fabric structures are normally designed to be in the form of equally tensioned surfaces. The tensioned fabric structure is highly suited to realizing surfaces of complex or new forms. However, research on new forms for tensioned fabric structures has not attracted much attention. The minimal surface, as another source of inspiration that could be adopted as a form for tensioned fabric structures, is therefore crucial. The aim of this study is to propose the initial equilibrium shape of tensioned fabric structures in the form of the Monkey Saddle. Computational form-finding is frequently used to determine the possible form of uniformly stressed surfaces. A tensioned fabric structure must curve equally in opposite directions to give the resulting surface three-dimensional stability. In an anticlastic doubly curved surface, the sum of all positive and all negative curvatures is zero. This study provides an alternative choice for structural designers to consider the Monkey Saddle applied in tensioned fabric structures. The results on factors affecting the initial equilibrium shape can serve as a reference for proper selection of surface parameters for achieving a structurally viable surface. Such insight will lead to improvement of rural basic infrastructure, economic gains, sustainability of the built environment and green technology initiatives.
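The anticlastic character of the Monkey Saddle (it curves in opposite directions everywhere, with non-positive Gaussian curvature) can be checked directly from the graph z = x³ − 3xy². This is only a geometric check of the candidate surface, not the force-density form-finding itself:

```python
import numpy as np

def monkey_saddle(x, y):
    """Monkey Saddle graph z = x^3 - 3*x*y^2."""
    return x**3 - 3.0 * x * y**2

def gaussian_curvature(x, y):
    """Closed-form Gaussian curvature of the Monkey Saddle graph, from
    K = (z_xx*z_yy - z_xy^2) / (1 + z_x^2 + z_y^2)^2."""
    zx = 3.0 * x**2 - 3.0 * y**2
    zy = -6.0 * x * y
    zxx, zyy, zxy = 6.0 * x, -6.0 * x, -6.0 * y
    return (zxx * zyy - zxy**2) / (1.0 + zx**2 + zy**2) ** 2

xs, ys = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
K = gaussian_curvature(xs, ys)
# K = -(36*x^2 + 36*y^2)/(...)^2 <= 0: the surface is anticlastic
# everywhere, flat only at the origin (the "monkey" point).
anticlastic = bool(np.all(K <= 0.0))
```

Negative Gaussian curvature is exactly the condition that allows opposing pre-stress curvatures to balance in an equally tensioned membrane.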

Keywords: anticlastic, curvatures, form-finding, initial equilibrium shape, minimal surface, tensioned fabric structure

Procedia PDF Downloads 511
813 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices

Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto

Abstract:

The paper aims to identify and discuss the impact and results of ex-ante, mid-term and ex-post evaluation initiatives in Brazilian Social Enterprises from the point of view of the entrepreneurs and investors, highlighting the processes involved in these activities and their after-effects. The study was conducted using a descriptive, primarily qualitative methodology. A multiple-case study was used, and, for that, semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship and (v) green tech fields, as well as three representatives of different impact investment modalities: (i) venture capital, (ii) loans and (iii) equity interest. Convenience (non-probabilistic) sampling was adopted to select both businesses and investors, who voluntarily contributed to the research. Evaluation is still incipient in most of the studied businesses. Some stand out by adopting well-known methodologies like the Global Impact Investing Rating System (GIIRS), but they still have much to improve in several respects. Most of these enterprises rely on non-experimental research conducted by their own employees, which is ordinarily not regarded as the 'gold standard' by some authors in the area. Nevertheless, from the entrepreneurs' point of view, most of them include these routines to some extent in their day-to-day activities, despite the difficulties the businesses face in general. In turn, the investors do not give overall directions for establishing evaluation initiatives in the enterprises they fund; there is a mechanism of trust, and this is usually enough to prove the impact for all stakeholders. The work concludes that there is a large gap between what the literature states as best practice for these businesses and what the enterprises actually do. 
Evaluation initiatives must be included to some extent in all enterprises in order to confirm the social impact they claim to realize. The development and adoption of more flexible evaluation mechanisms, ones that consider the complexity involved in these businesses’ routines, is recommended here. The reflections of the research also suggest important implications for the field of Social Enterprises, whose practices are far from what the theory preaches. This highlights the risk to the legitimacy of enterprises that identify themselves as having 'social impact', sometimes without proper proof based on causality data. Consequently, this makes the field of social entrepreneurship fragile and susceptible to questioning, weakening the ecosystem as a whole. In this way, the top priorities of these enterprises must be handled together with results and impact measurement activities. Likewise, further investigations are recommended that consider the trade-offs between impact and profit. In addition, research on gender, on entrepreneurs' motivations to identify as Social Enterprises, and on the possible unintended consequences of these businesses should also be conducted.

Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem

Procedia PDF Downloads 99
812 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loadings. The AE technique is a well-established tool for discriminating between the damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by a data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation set-up was developed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE) and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for discriminating between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is used to construct a supervised classifier for automatic recognition of the AE signals. Several materials with different constituents were tested under various loadings in order to feed and enrich the learning database. 
The methodology presented in this work helped refine the damage threshold for the new-generation materials. The damage mechanisms around this threshold were highlighted, and the obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of loading, the identified classes are reproducible and show little scatter. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
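The unsupervised clustering step can be sketched with a minimal k-means on AE feature vectors. The two features and the three synthetic source populations below are invented stand-ins for real AE data, and the initial centers are chosen deterministically (one seed signal per expected class):

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, centers, iters=50):
    """Minimal k-means for unsupervised AE-signal clustering. X rows
    are feature vectors (e.g. peak amplitude in dB, energy); `centers`
    holds the initial cluster centers."""
    centers = centers.astype(float).copy()
    k = len(centers)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each signal to its nearest center (squared Euclidean).
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        # Move each center to the mean of its members (keep it if empty).
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return labels, centers

# Three well-separated synthetic AE populations (invented), loosely
# standing in for e.g. matrix cracking / debonding / fibre breakage.
cluster_a = rng.normal([40.0, 0.1], 2.0, size=(60, 2))
cluster_b = rng.normal([70.0, 0.5], 2.0, size=(60, 2))
cluster_c = rng.normal([95.0, 1.2], 2.0, size=(60, 2))
X = np.vstack([cluster_a, cluster_b, cluster_c])

labels, centers = kmeans(X, X[[0, 60, 120]])
```

In the paper's workflow, the clusters found this way are labelled a posteriori using the other instrumentation and then used as the training set for the supervised classifier.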

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

Procedia PDF Downloads 134
811 Julia-Based Computational Tool for Composite System Reliability Assessment

Authors: Josif Figueroa, Kush Bubbar, Greg Young-Morris

Abstract:

The reliability evaluation of composite generation and bulk transmission systems is crucial for ensuring a reliable supply of electrical energy to significant system load points. However, evaluating adequacy indices using probabilistic methods like sequential Monte Carlo simulation can be computationally expensive. Despite this, it is necessary when time-varying and interdependent resources, such as renewables and energy storage systems, are involved. Recent advances in solving power network optimization problems and parallel computing have improved runtime performance while maintaining solution accuracy. This work introduces CompositeSystems, an open-source composite system reliability evaluation tool developed in Julia™, to address the current deficiencies of commercial and non-commercial tools. The paper presents the tool's design, validation, and effectiveness, including an analysis of two different formulations of the Optimal Power Flow problem. The simulations demonstrate excellent agreement with existing published studies while improving replicability and reproducibility. Overall, the proposed tool can provide valuable insights into the performance of transmission systems, making it an important addition to the existing toolbox for power system planning.
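The sequential Monte Carlo adequacy assessment at the heart of such tools can be sketched for a single-bus toy system. All parameters are invented; a composite-system tool would additionally model the transmission network and solve an optimal power flow for each sampled state:

```python
import random

def sequential_mc_lole(units, load, years, seed=7):
    """Crude sequential Monte Carlo estimate of LOLE (hours/year) for a
    single-bus toy system. Each unit is (capacity_MW, hourly failure
    probability, hourly repair probability), i.e. a two-state Markov
    model stepped hour by hour."""
    rng = random.Random(seed)
    hours = len(load)
    lole = 0.0
    for _ in range(years):
        up = [True] * len(units)
        for h in range(hours):
            for i, (_, lam, mu) in enumerate(units):
                if up[i]:
                    up[i] = rng.random() >= lam   # fails with prob. lam
                else:
                    up[i] = rng.random() < mu     # repaired with prob. mu
            capacity = sum(c for (c, _, _), ok in zip(units, up) if ok)
            if capacity < load[h]:
                lole += 1.0                       # loss-of-load hour
    return lole / years

units = [(100.0, 0.001, 0.02)] * 5   # five 100 MW units (toy data)
load = [320.0] * 8760                # flat hourly load, MW
lole = sequential_mc_lole(units, load, years=10)
```

The hour-by-hour chronology is what lets the sequential approach capture time-varying and interdependent resources, at the computational cost the abstract describes.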

Keywords: open-source software, composite system reliability, optimization methods, Monte Carlo methods, optimal power flow

Procedia PDF Downloads 47
810 CFD Modelling of Microdrops Manipulation by Microfluidic Oscillator

Authors: Tawfiq Chekifi, Brahim Dennai, Rachid Khelfaoui

Abstract:

Over the last few decades, modeling immiscible fluids such as oil and water has been a classical research topic. Droplet-based microfluidics presents a unique platform for mixing, reaction, separation, dispersion of drops, and numerous other functions. For this purpose, several devices have been studied, among them the microfluidic oscillator. The latter is obtained from wall-attachment microfluidic amplifiers by adding a feedback loop from the outputs to the control inputs; nevertheless, this device has not been widely used for microdrop applications. In this paper, we present a numerical CFD study of a microfluidic oscillator with two different lengths of feedback loop. In order to produce simultaneous microdrops of gasoil in water, a typical geometry that includes a double T-junction is connected to the fluidic oscillator. The generation of microdrops is computed by the volume-of-fluid (VOF) method. Flow oscillations of the microdrops are triggered by the Coanda effect of the jet flow. The aim of this work is to obtain a high oscillation frequency at the output of this passive device; the influence of hydrodynamic and physical parameters on the microdrop frequency at the output of our microsystem is also analyzed. The computational results show that the length of the feedback loop, the applied pressure at the T-junction and the interfacial tension have a significant effect on the dispersion of microdrops and their oscillation frequency. Across the range of low Reynolds numbers, microdrop generation and its dynamics have been accurately controlled by adjusting the applied pressure ratio of the two phases.

Keywords: fluidic oscillator, microdrops manipulation, VOF (volume of fluid method), microfluidic oscillator

Procedia PDF Downloads 367
809 Parametric Urbanism: A Climate Responsive Urban Form for the MENA Region

Authors: Norhan El Dallal

Abstract:

The MENA region is a challenging, rapidly urbanizing region with a special profile: culturally, socially, economically and environmentally. Despite the diversity between different countries of the MENA region, they all share similar urban challenges where extensive interventions are crucial. A climate-sensitive region such as the MENA region requires special attention for development, adaptation and mitigation. Integrating climatic and environmental parameters into the planning process to create a responsive urban form is the aim of this research, in which 'Parametric Urbanism' serves as a tool to reach a more sustainable urban morphology. An attempt to parameterize the relation between the climate and the urban form in a detailed manner is the main objective of the thesis. The aim is to relate the different passive approaches suitable for the MENA region to the design guidelines of each part of the planning phase. Various conceptual scenarios for network pattern and block subdivision generation based on computational models are the next steps after the parameterization. These theoretical models could be applied to different climatic zones of the dense communities of the MENA region to achieve an energy-efficient neighborhood or city with respect to the urban form, morphology, and urban planning pattern. A final critique of the theoretical model is to be conducted, showing the economic feasibility of the proposed solutions. Finally, some push and pull policies are proposed to help integrate these solutions into the planning process.

Keywords: parametric urbanism, climate responsive, urban form, urban and regional studies

Procedia PDF Downloads 451
808 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of deep learning, semantic communication has further matured and is gradually being applied in the fields of the Internet of Things, Unmanned Aerial Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitter, we need to extract the semantic information of remote sensing images, but there are some problems. The traditional semantic communication system based on Convolutional Neural Networks cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder instead of the mainstream CNN-based one to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the pre-processed image. We adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of remote sensing images. 
The Vision-Transformer structure can better handle training on huge data volumes and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on CNNs and with image coding methods such as BPG and JPEG to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
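The wavelet split into low- and high-frequency sub-bands used in the pre-processing stage can be sketched with a one-level 2D Haar transform (a stand-in for whichever wavelet the paper actually uses; the interpolation-based upsampling of the sub-bands is omitted here):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar split into a low-frequency sub-band (LL) and
    three high-frequency sub-bands (LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(64, 64))
ll, lh, hl, hh = haar2d(img)
# The paper would upsample the sub-bands (bilinear for high-frequency,
# bicubic for low-frequency) before the inverse transform; here we
# only check perfect reconstruction of the transform pair itself.
rec = ihaar2d(ll, lh, hl, hh)
err = float(np.abs(rec - img).max())
```

Interpolating the sub-bands separately, as the abstract describes, lets the edge detail (high-frequency) and the smooth content (low-frequency) be upsampled with methods suited to each.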

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 53
807 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing

Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng

Abstract:

Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) is the problem of deciding how to dedicate nodes to specific applications so as to mitigate inter-job network interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming at minimizing the communication hop cost, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two special allocation strategies are considered, i.e., the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer programming model is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed, which is an extension of simulated annealing (SA) that learns a repair operator and employs it in a guided heuristic search. The efficacy of NSA is demonstrated with a computational study against SA and SCIP. The results of numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
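The annealing search over assignments can be illustrated with plain SA on a 1D line of nodes, where a job's node-index span is a crude proxy for fat-tree hop cost; the NSA of the paper would replace the blind swap move with a learned repair operator:

```python
import math
import random

def hop_cost(assign, jobs):
    """Sum over jobs of the span of node indices the job occupies --
    a simple 1D stand-in for a communication-hop metric."""
    cost = 0
    for j in range(len(jobs)):
        nodes = [n for n, owner in enumerate(assign) if owner == j]
        cost += max(nodes) - min(nodes)
    return cost

def anneal(jobs, n_nodes, iters=20000, seed=3):
    """Plain simulated annealing over node-to-job assignments; swap
    moves preserve each job's node count."""
    rng = random.Random(seed)
    # Round-robin start (equal job sizes assumed): deliberately scattered.
    assign = [i % len(jobs) for i in range(n_nodes)]
    cost = hop_cost(assign, jobs)
    temp = float(n_nodes)
    for _ in range(iters):
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if assign[a] == assign[b]:
            continue
        assign[a], assign[b] = assign[b], assign[a]
        new = hop_cost(assign, jobs)
        # Metropolis acceptance with geometric cooling.
        if new <= cost or rng.random() < math.exp((cost - new) / temp):
            cost = new
        else:
            assign[a], assign[b] = assign[b], assign[a]  # revert
        temp *= 0.9995
    return assign, cost

jobs = [4, 4, 4, 4]                 # four jobs of four nodes each
assign, cost = anneal(jobs, n_nodes=16)
```

The scattered starting assignment has cost 48 (each job spans 12 node indices); annealing pushes the jobs toward contiguous blocks, whose total cost is 12.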

Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware

Procedia PDF Downloads 73
806 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining neural network and Particle Swarm Optimization methods for software reliability prediction. We first explain how to apply a compound function in a neural network so that we can derive a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid trapping in local minima, we apply the Particle Swarm Optimization method to train the proposed model using failure test data sets. We derive our proposed model using computational intelligence-based modeling; thus, the proposed model becomes a Neuro-Particle Swarm Optimization (NPSO) model. We test the model with different inertia weights for updating particle positions and velocities. Results obtained with the best inertia weight are compared with personal-best-oriented PSO (pPSO), which helps to choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated through a real failure test data set. The results obtained from experiments show that the proposed model has a fairly accurate prediction capability in software reliability.
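The PSO training loop can be sketched as a global-best PSO fitting an assumed S-shaped logistic form for the mean cumulative failures; the exact FLGC parameterization and the NPSO refinements (neural compound function, inertia-weight comparison against pPSO) are simplified away:

```python
import math
import random

rng = random.Random(42)

def flgc(t, a, b, c):
    """Assumed S-shaped (flexible logistic) mean cumulative failures:
    a = total expected faults, b = detection rate, c = shape factor.
    An illustrative form, not necessarily the paper's exact FLGC."""
    return a / (1.0 + c * math.exp(-b * t))

def sse(params, data):
    """Sum of squared errors of the curve against observed failures."""
    return sum((flgc(t, *params) - m) ** 2 for t, m in data)

def pso(data, n=30, iters=300,
        bounds=((1.0, 200.0), (0.01, 2.0), (0.1, 50.0))):
    """Basic global-best PSO over the parameter vector (a, b, c)."""
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [sse(p, data) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia / acceleration weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = sse(pos[i], data)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Noise-free synthetic failure data from known parameters (not a real
# failure data set).
true_params = (100.0, 0.3, 20.0)
data = [(t, flgc(t, *true_params)) for t in range(30)]
params, fit = pso(data)
```

Because the swarm samples the whole bounded parameter box before contracting on the global best, it is much less prone to the local-minima trapping that motivates replacing gradient training here.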

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 325