Search results for: large woody debris
6546 Simulation as a Problem-Solving Spotter for System Reliability
Authors: Wheyming Tina Song, Chi-Hao Hong, Peisyuan Lin
Abstract:
An important performance measure for stochastic manufacturing networks is the system reliability, defined as the probability that the production output meets or exceeds a specified demand. The system parameters include the capacity of each workstation and the number of conforming parts produced in each workstation. We establish that eighteen archival publications, containing twenty-one examples, provide incorrect values of the system reliability. The author recently published the Song Rule, which provides the correct analytical system-reliability value; it is, however, computationally inefficient for large networks. In this paper, we use Monte Carlo simulation (implemented in C and Flexsim) to provide estimates for the above-mentioned twenty-one examples. The simulation estimates are consistent with the analytical solution for small networks but remain computationally efficient for large networks. We argue here for three advantages of Monte Carlo simulation: (1) understanding stochastic systems, (2) validating analytical results, and (3) providing estimates even when analytical and numerical approaches are computationally prohibitive. Monte Carlo simulation could have detected the published analysis errors.
Keywords: Monte Carlo simulation, analytical results, leading digit rule, standard error
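To illustrate how such a Monte Carlo estimate works, here is a minimal Python sketch that estimates P(output >= demand) for a hypothetical three-workstation serial network; the capacity distributions, demand value, and min-over-stations output model are invented for illustration and are not the networks or the C/Flexsim implementation from the paper.

```python
import random

def simulate_reliability(stations, demand=4, n_runs=100_000, seed=1):
    """Estimate system reliability: P(production output >= demand).

    `stations` maps each workstation to (capacity, probability) pairs;
    the network output is taken as the minimum over stations (a serial
    line). All numbers here are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_runs):
        # Sample each workstation's effective capacity independently.
        output = min(
            rng.choices([c for c, _ in s], weights=[p for _, p in s])[0]
            for s in stations
        )
        successes += output >= demand
    return successes / n_runs

# Three serial workstations with random capacities between 0 and 5.
stations = [[(0, 0.05), (3, 0.25), (5, 0.70)],
            [(0, 0.10), (4, 0.30), (5, 0.60)],
            [(2, 0.20), (4, 0.30), (5, 0.50)]]
print(simulate_reliability(stations))  # reliability estimate in [0, 1]
```

Averaging Bernoulli outcomes like this also yields a standard error of sqrt(p(1-p)/n), which is how simulation accuracy can be reported alongside the estimate.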
Procedia PDF Downloads 362
6545 Flood Monitoring in the Vietnamese Mekong Delta Using Sentinel-1 SAR with Global Flood Mapper
Authors: Ahmed S. Afifi, Ahmed Magdy
Abstract:
Satellite monitoring is an essential tool to study, understand, and map large-scale environmental changes that affect humans, climate, and biodiversity. The Sentinel-1 Synthetic Aperture Radar (SAR) instrument provides data collection in all weather conditions, with a short revisit time and high spatial resolution, which can be used effectively in flood management. Floods occur when an overflow of water submerges normally dry land, which must be distinguished from permanently flooded areas. In this study, we use the Global Flood Mapper (GFM), a new Google Earth Engine application that allows users to quickly map floods using Sentinel-1 SAR. The GFM enables users to manually adjust the flood-map parameters, e.g., the threshold on the Z-value for the VV and VH bands and the elevation and slope mask thresholds. The composite R:G:B image obtained by coupling the Sentinel-1 bands (VH:VV:VH) reduces false classification to a large extent compared to using a single band (e.g., the VH polarization band). The flood-mapping algorithm in the GFM and Otsu thresholding are compared against Sentinel-2 optical data, and the results show that the GFM algorithm can overcome the misclassification of a flooded area in An Giang, Vietnam.
Keywords: SAR backscattering, Sentinel-1, flood mapping, disaster
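As a sketch of the thresholding idea, the following Python fragment applies Otsu's method to a synthetic SAR-like backscatter image and combines it with an elevation/slope mask, loosely mimicking the GFM workflow; the mask limits and the synthetic data are assumptions, not the GFM code.

```python
import numpy as np
from skimage.filters import threshold_otsu

def map_flood(vh_db, slope, elev, slope_max=5.0, elev_max=30.0):
    """Classify flooded pixels in a Sentinel-1 VH backscatter image (dB).

    Open water appears dark in SAR, so pixels below the Otsu threshold
    are flagged as water; the slope/elevation mask suppresses terrain
    shadow. Threshold values are illustrative only.
    """
    t = threshold_otsu(vh_db)            # global bimodal threshold
    water = vh_db < t                    # low backscatter -> water
    mask = (slope < slope_max) & (elev < elev_max)
    return water & mask

# Synthetic demo: a dark water patch inside a brighter land background.
img = np.random.normal(-10, 1.5, (200, 200))
img[60:140, 60:140] = np.random.normal(-20, 1.5, (80, 80))
flood = map_flood(img, np.zeros_like(img), np.zeros_like(img))
print(flood.mean())  # fraction of pixels classified as flooded
```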
Procedia PDF Downloads 105
6544 Towards a Large Scale Deep Semantically Analyzed Corpus for Arabic: Annotation and Evaluation
Authors: S. Alansary, M. Nagi
Abstract:
This paper presents an approach to the semantic annotation of an Arabic corpus using the Universal Networking Language (UNL) framework. UNL is a promising strategy for providing a large collection of semantically annotated texts with formal, deep semantics rather than shallow ones. The result constitutes a semantic resource (semantic graphs) that is editable and that integrates various phenomena, including predicate-argument structure, scope, tense, thematic roles and rhetorical relations, into a single semantic formalism for knowledge representation. The paper also presents the Interactive Analysis tool for automatic semantic annotation (IAN). In addition, the cornerstones of the proposed methodology, the disambiguation and transformation rules, are presented. Semantic annotation using UNL has been applied to a corpus of 20,000 Arabic sentences representing the most frequent structures in the Arabic Wikipedia. The representation is illustrated at different linguistic levels, starting from the morphological level, passing through the syntactic level, until the semantic representation is reached. The output has been evaluated using the F-measure and is 90% accurate. This demonstrates how powerful the formal environment is, as it enables intelligent text processing and search.
Keywords: semantic analysis, semantic annotation, Arabic, universal networking language
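For reference, the reported score is the standard F-measure over gold versus predicted annotations; here is a small Python sketch (the UNL-style relation strings are invented examples, not corpus data):

```python
def f_measure(gold, predicted):
    """Precision, recall, and F1 over sets of annotations.

    Each element could be a UNL relation such as "agt(eat, boy)";
    the examples below are purely illustrative.
    """
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                     # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(f_measure({"agt(eat,boy)", "obj(eat,apple)"},
                {"agt(eat,boy)", "obj(eat,bread)"}))  # (0.5, 0.5, 0.5)
```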
Procedia PDF Downloads 582
6543 3D Numerical Modelling of a Pulsed Pumping Process of a Large Dense Non-Aqueous Phase Liquid Pool: In situ Pilot-Scale Case Study of Hexachlorobutadiene in a Keyed Enclosure
Authors: Q. Giraud, J. Gonçalvès, B. Paris
Abstract:
Remediation of dense non-aqueous phase liquids (DNAPLs) represents a challenging issue because of their persistent behaviour in the environment. This pilot-scale study investigates, by means of in situ experiments and numerical modelling, the feasibility of a pulsed pumping process for a large amount of DNAPL in an alluvial aquifer. The main compound of the DNAPL is hexachlorobutadiene, an emerging organic pollutant. A low-permeability keyed enclosure was built at the location of the DNAPL source zone in order to isolate a finite undisturbed volume of soil, and a 3-month pulsed pumping process was applied inside the enclosure to exclusively extract the DNAPL. The water/DNAPL interface elevation at both the pumping and observation wells and the cumulative pumped volume of DNAPL were recorded. A total volume of about 20 m³ of pure DNAPL was recovered, since no water was extracted during the process. The three-dimensional multiphase flow simulator TMVOC was used, and a conceptual model was elaborated and generated with the pre/post-processing tool mView. The numerical model consisted of 10 layers of variable thickness and 5060 grid cells. The numerical simulations reproduce the pulsed pumping process and show an excellent match between simulated and field data for the cumulative pumped volume of DNAPL, and a reasonable agreement between modelled and observed data for the evolution of the water/DNAPL interface elevations at the two wells. This study offers a new perspective in remediation, since DNAPL pumping systems may be optimised where a large amount of DNAPL is encountered.
Keywords: dense non-aqueous phase liquid (DNAPL), hexachlorobutadiene, in situ pulsed pumping, multiphase flow, numerical modelling, porous media
Procedia PDF Downloads 174
6542 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research mainly concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for that web application. The quality assurance team must execute many manual user interface test cases during the development process to confirm there are no regression bugs. The team automated 346 test cases, for which the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient flow when introducing scalability into a traditional test automation environment; in order to introduce scalability efficiently, a scripting language was adopted. The scalability implementation mainly relies on AWS serverless technology, the Elastic Container Service. Scalability here means the ability to automatically provision computers for test automation and to increase or decrease the number of computers running those tests, so that test cases can run in parallel and the test execution time decreases dramatically. Introducing scalable test automation offers more than reduced execution time: because test cases can be executed at the same time, challenging bugs such as race conditions may also be detected. If API and unit tests are implemented, test strategies can exploit this scalability even more efficiently; as a practical matter, however, API and unit testing cannot cover 100% of functional testing in web applications, since they do not reach the front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the optimization of the test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, and then reports the actual reduction in execution time and an example of challenging issue detection.
Keywords: AWS, Elastic Container Service, scalability, serverless, UI automation test
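A hedged sketch of the fan-out mechanism in Python with boto3: each shard of test case IDs is launched as one ECS task so shards run in parallel. The cluster name, task definition, container name, and environment variable are hypothetical, and launch-type/networking configuration is omitted; this is not the team's actual framework.

```python
import boto3

def launch_test_shards(test_batches, cluster="ui-test-cluster",
                       task_def="selenium-runner:1"):
    """Fan out UI test shards as parallel ECS tasks.

    Each batch of test case IDs is handed to one container through an
    environment variable; the container runs the Selenium/Python tests
    for that shard. Launch type and networking depend on cluster setup.
    """
    ecs = boto3.client("ecs")
    arns = []
    for batch in test_batches:
        resp = ecs.run_task(
            cluster=cluster,
            taskDefinition=task_def,
            overrides={"containerOverrides": [{
                "name": "runner",  # hypothetical container name
                "environment": [{"name": "TEST_IDS",
                                 "value": ",".join(batch)}],
            }]},
        )
        arns.extend(t["taskArn"] for t in resp["tasks"])
    return arns  # poll with ecs.describe_tasks(...) until all stop

# e.g. 346 cases split into 20 shards run in roughly 1/20 the wall time.
```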
Procedia PDF Downloads 106
6541 Inkjet Printed Silver Nanowire Network as Semi-Transparent Electrode for Organic Photovoltaic Devices
Authors: Donia Fredj, Marie Parmentier, Florence Archet, Olivier Margeat, Sadok Ben Dkhil, Jorg Ackerman
Abstract:
Transparent conductive electrodes (TCEs), or transparent electrodes (TEs), are a crucial part of many electronic and optoelectronic devices such as touch panels, liquid crystal displays (LCDs), organic light-emitting diodes (OLEDs), solar cells, and transparent heaters. The indium tin oxide (ITO) electrode is the most widely utilized transparent electrode due to its excellent optoelectrical properties. However, the drawbacks of ITO, such as the high cost of the material, the scarcity of indium, and its fragile nature, limit its application in large-scale flexible electronic devices. Flexibility is becoming more and more attractive, since flexible electrodes have the potential to open new applications that require transparent electrodes to be flexible, cheap, and compatible with large-scale manufacturing methods. So far, several materials have been developed as alternatives to ITO, including metal nanowires, conjugated polymers, carbon nanotubes, graphene, etc., and have been extensively investigated for use as flexible and low-cost electrodes. Among them, silver nanowires (AgNW) are one of the most promising alternatives to ITO thanks to their excellent properties: high electrical conductivity as well as desirable light transmittance. In recent years, inkjet printing has become a promising technique for large-scale printed flexible and stretchable electronics; however, inkjet printing of AgNWs still presents many challenges. In this study, a synthesis of stable AgNW that could compete with ITO was developed. This material was printed by inkjet technology directly onto a flexible substrate. Additionally, we analyzed the surface microstructure and the optical and electrical properties of the printed AgNW layers. Our further research focused on the study of all-inkjet-printed organic modules with high efficiency.
Keywords: transparent electrodes, silver nanowires, inkjet printing, formulation of stable inks
Procedia PDF Downloads 222
6540 Libyan Residents in Britain and Identity of Place
Authors: Intesar Ibrahim
Abstract:
Large-scale Libyan emigration is a relatively new phenomenon. Most Libyan families in the UK are recent immigrants; unlike the neighbouring countries of Egypt, Tunisia, Algeria and even Sudan, Libya has no particular history of large-scale migration. Many Libyan families live in modest homes located in large Muslim communities of Pakistanis and Yemenis. In the UK as a whole, there are currently 16 Libyan schools, most of which run during the weekend for children of school age. There are three such weekend schools in Sheffield that teach a Libyan school curriculum, and Libyan women and men run these schools. Further, there is a Masjid (mosque) operated by Libyans, besides the other Masjids in the city, which most of the Libyan community attend for prayer and for other activities such as writing marriage contracts. The presence of this Masjid increases the attraction for Libyans to reside in the Sheffield area. This paper studies how Libyan immigrants in the UK make decisions on their housing and living environment. Libyan residents in the UK come from different Libyan regions, social classes and lifestyles, which may affect their choices in the interior designs of their houses in the UK. A number of case studies were chosen from Libyan immigrants who came from different types of dwellings in Libya, in order to compare their homes and community lifestyle in the UK with those in Libya. The study explores the meaning and the ways of using living rooms in Libyan emigrants' houses in the UK and compares them with those in their houses back in the home country. For example, the way furniture is set up in rooms acts as an indicator of the hierarchical structure of society, and the design of furniture for Libyan sitting rooms, with floor seating, differs from that of the traditional English sitting room. The paper explores the identity and cultural differences that affect the style and design of the living rooms of Libyan immigrants in the UK. The study is based on the "production of space" theory: every culture has its own needs, style of living and way of thinking. The study found that more than 70% of Libyan immigrants in the UK still furnish the living room in their traditional way (floor seating).
Keywords: place, identity, culture, immigrants
Procedia PDF Downloads 285
6539 Abnormal Features of Two Quasiparticle Rotational Bands in Rare Earths
Authors: Kawalpreet Kalra, Alpana Goel
Abstract:
The behaviour of rotational bands should be smooth, but due to large moments of inertia and decreased pairing it is not. Many experiments have been done in the last few decades, and a large amount of data is available for comprehensive study in this region. Peculiar features like signature dependence, signature inversion, and signature reversal are observed in many two-quasiparticle rotational bands of doubly odd and doubly even nuclei. At high rotational frequencies, signature and parity are the only two good quantum numbers available to label a state. The signature quantum number is denoted by α: even-angular-momentum states of a rotational band have α = 0, and odd-angular-momentum states have α = 1. It has been observed that the odd-spin members lie lower in energy up to a certain spin Ic; the normal signature dependence is restored afterwards. This anomalous feature is termed signature inversion. The systematics of signature inversion in high-j orbitals for doubly odd rare-earth nuclei have been studied, and attempts have been made to understand these phenomena using several models. Here, these features are analyzed within the framework of the Two Quasiparticle Plus Rotor Model (TQPRM).
Keywords: rotational bands, signature dependence, signature quantum number, two quasiparticle
Procedia PDF Downloads 168
6538 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. If river water levels rise abruptly, especially in winter, one can expect massive extra loads on a textile protective structure in terms of impact from floating debris and even tree trunks. Estimating this impulsive force on such structures is of great importance, as it ensures the reliability of the design in critical cases. This motivates the numerical analysis of a fluid-structure interaction application comprising a flexible slender-shaped membrane and free-surface water flow, in which an accelerated heavy piece of flotsam approaches the membrane. In this context, both the behavior of the flexible membrane and its interaction with the moving flotsam are analyzed with the finite-element-based explicit and implicit solvers of Abaqus, available as products of the SIMULIA software suite. The response of the free-surface water flow to moving structures is investigated using the finite volume solver of Star CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and an effective partitioned strategy in the form of an implicit coupling algorithm allow the partitioned domains to be interconnected robustly. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh criterion in the fluid domain where the two structures approach each other. This contribution presents the approaches for establishing a convergent numerical solution and compares the results with experimental findings.
Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam
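The implicit coupling algorithm can be pictured as a fixed-point iteration over the interface within each time step; below is a generic Python sketch with Aitken dynamic under-relaxation, where `fluid_solve` and `solid_solve` stand in for the two partitioned solvers. This illustrates the algorithm class, not the CSE implementation used in the paper.

```python
import numpy as np

def coupled_step(fluid_solve, solid_solve, d0, tol=1e-6, max_iter=50):
    """One implicit partitioned FSI time step (Gauss-Seidel iterations).

    `fluid_solve(d)` returns interface loads for an interface
    displacement d; `solid_solve(f)` returns the displacement response.
    Aitken dynamic under-relaxation stabilises the fixed-point loop.
    """
    d, omega, r_old = d0.copy(), 0.5, None
    for _ in range(max_iter):
        f = fluid_solve(d)            # fluid partition (e.g. Star CCM+)
        d_tilde = solid_solve(f)      # structure partition (e.g. Abaqus)
        r = d_tilde - d               # interface residual
        if np.linalg.norm(r) < tol:
            return d_tilde
        if r_old is not None:         # Aitken relaxation-factor update
            dr = r - r_old
            omega = -omega * (r_old @ dr) / (dr @ dr)
        d, r_old = d + omega * r, r
    return d

# Toy linear partitions; the fixed point of d = 0.3*(-2d) + 1 is 0.625.
print(coupled_step(lambda d: -2.0 * d, lambda f: 0.3 * f + 1.0,
                   np.zeros(1)))
```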
Procedia PDF Downloads 389
6537 Thermodynamic Evaluation of Coupling APR-1400 with a Thermal Desalination Plant
Authors: M. Gomaa Abdoelatef, Robert M. Field, Yong-Kwan Lee
Abstract:
Growing human populations have placed increased demands on water supplies and heightened interest in desalination infrastructure. Key elements of the economics of desalination projects are their thermal and electrical inputs. With growing concerns over the use of fossil fuels to (indirectly) supply these inputs, coupling desalination with nuclear power production represents a significant opportunity. Individually, nuclear and desalination technologies have a long history and are relatively mature. Among desalination technologies, reverse osmosis (RO) has the lowest energy inputs; however, the economically driven output quality of the water produced using RO, which uses only electrical inputs, is lower than the output water quality from thermal desalination plants. Modern desalination projects therefore consider coupling RO with thermal desalination technologies (MSF, MED, or MED-TVC), with their attendant steam inputs, to permit blending to produce various qualities of water. A large nuclear facility is well positioned to dispatch large quantities of both electrical and thermal power. This paper considers the supply of thermal energy to a large desalination facility and examines the heat-balance impact on the nuclear steam cycle. The APR-1400 nuclear plant is selected as prototypical, from both a capacity and a turbine-cycle heat-balance perspective, to examine steam supply and the impact on electrical output. Extraction points and quantities of steam are considered parametrically, along with various types of thermal desalination technologies, to form the basis for further evaluations of economically optimal approaches to interfacing nuclear power production with desalination projects. In our study, the thermodynamic evaluation is executed with DE-TOP, the IAEA desalination program, which is capable of analyzing power generation systems coupled to desalination systems through various steam extraction positions, taking into consideration the isolation loop between the APR-1400 and the thermal desalination plant for safety reasons.
Keywords: APR-1400, desalination, DE-TOP, IAEA, MSF, MED, MED-TVC, RO
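For a first-order feel for the heat-balance impact: steam extracted for the desalination plant no longer expands to the condenser, so the lost electrical output scales with the extraction flow and the forgone enthalpy drop. The Python sketch below uses invented numbers, not APR-1400 or DE-TOP data.

```python
def electric_power_loss(m_extract, h_extract, h_condenser, eta=0.85):
    """First-order loss of turbine output due to steam extraction (MW).

    m_extract: extraction mass flow (kg/s); h_extract, h_condenser:
    specific enthalpies (kJ/kg); eta: an assumed effective turbine
    efficiency over the bypassed expansion. Illustrative only.
    """
    return m_extract * (h_extract - h_condenser) * eta / 1000.0

# 100 kg/s extracted at 2800 kJ/kg against ~2300 kJ/kg at the condenser:
print(electric_power_loss(100.0, 2800.0, 2300.0))  # ~42.5 MW lost
```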
Procedia PDF Downloads 532
6536 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites
Authors: Sarra Haouala, Issam Doghri
Abstract:
In this work, a multiscale computational strategy is proposed for the analysis of structures that are described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long-time response while reducing the computational cost considerably. The proposed computational framework combines mean-field space homogenization, based on the generalized incrementally affine formulation for VE-VP composites, with the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro- and macro-chronological time scales and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time scale) problems. The former is purely VE and solved once for each macro time step, whereas the latter is nonlinear and solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both the micro- and macro-chronological problems to determine the effective behavior of the composite material on each time scale. The response of the matrix material is VE-VP with J2 flow theory, assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic predictor and plastic correction. The proposal is implemented for an extended Mori-Tanaka scheme and verified against finite element simulations of representative volume elements for a number of polymer composite materials subjected to large numbers of cycles.
Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization
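For readers unfamiliar with the return-mapping step, here is a textbook-style radial-return sketch for J2 plasticity with linear isotropic hardening in Python; in the paper the predictor is viscoelastic rather than elastic, so this is a simplified stand-in, not the authors' VE-VP implementation.

```python
import numpy as np

def radial_return(strain_trial_dev, eps_p_old, mu, sigma_y, H):
    """Two-step return mapping for small-strain J2 plasticity:
    elastic predictor followed by a plastic (radial return) corrector
    with linear isotropic hardening. Inputs are deviatoric."""
    s_trial = 2.0 * mu * strain_trial_dev          # trial deviatoric stress
    q_trial = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))
    f = q_trial - (sigma_y + H * eps_p_old)        # yield function
    if f <= 0.0:
        return s_trial, eps_p_old                  # elastic step
    dgamma = f / (3.0 * mu + H)                    # plastic multiplier
    s = s_trial * (1.0 - 3.0 * mu * dgamma / q_trial)  # radial return
    return s, eps_p_old + dgamma

# Illustrative call (stress units MPa; all values invented):
eps_dev = np.diag([1e-3, -5e-4, -5e-4])
print(radial_return(eps_dev, 0.0, mu=26e3, sigma_y=50.0, H=1e3))
```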
Procedia PDF Downloads 369
6535 Supersymmetry versus Compositeness: 2-Higgs Doublet Models Tell the Story
Authors: S. De Curtis, L. Delle Rose, S. Moretti, K. Yagyu
Abstract:
Supersymmetry and compositeness are the two prevalent paradigms that provide both a solution to the hierarchy problem and motivation for a light Higgs boson state. An open door towards the solution is found in the context of 2-Higgs Doublet Models (2HDMs), which are necessary in supersymmetry and arise naturally within compositeness in order to enable Electro-Weak Symmetry Breaking. In composite scenarios, the two isospin doublets arise as pseudo Nambu-Goldstone bosons from the breaking of SO(6). By calculating the Higgs potential at the one-loop level through the Coleman-Weinberg mechanism, from the explicit breaking of the global symmetry induced by the partial compositeness of fermions and gauge bosons, we derive the phenomenological properties of the Higgs states and highlight the main signatures of this Composite 2-Higgs Doublet Model at the Large Hadron Collider. These include modifications to the SM-like Higgs couplings as well as production and decay channels of the heavier Higgs bosons. We contrast the properties of this composite scenario with the well-known ones established in supersymmetry, the MSSM being the most notable example. We show how 2HDM spectra of masses and couplings accessible at the Large Hadron Collider may allow one to distinguish between the two paradigms.
Keywords: beyond the standard model, composite Higgs, supersymmetry, Two-Higgs Doublet Model
Procedia PDF Downloads 126
6534 A Polynomial Approach for a Graphical-based Integrated Production and Transport Scheduling with Capacity Restrictions
Authors: M. Ndeley
Abstract:
The performance of global manufacturing supply chains depends on the interaction of production and transport processes. Currently, the scheduling of these processes is done separately, without considering mutual requirements, which leads to suboptimal solutions. An integrated scheduling of both processes enables improved supply chain performance. The integrated production and transport scheduling problem (PTSP) is NP-hard, so heuristic methods are necessary to efficiently solve large problem instances, as in the case of global manufacturing supply chains. This paper presents a heuristic scheduling approach that handles the integration of flexible production processes with intermodal transport, incorporating flexible land transport. The method is based on a graph that allows a reformulation of the PTSP as a shortest path problem for each job, which can be solved in polynomial time. The proposed method is applied to a supply chain scenario with a manufacturing facility in South Africa and shipments of finished product to customers within the country. The obtained results show that the approach is suitable for scheduling large-scale problems and can be flexibly adapted to different scenarios.
Keywords: production and transport scheduling problem, graph based scheduling, integrated scheduling
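The per-job shortest-path idea can be sketched with a plain Dijkstra search over a small hypothetical scheduling graph, where nodes represent operation or transport states and arc weights encode processing or transport costs; the graph below is invented for illustration and is not the paper's model.

```python
import heapq

def schedule_job(graph, source, sink):
    """Dijkstra over the scheduling graph of one job.

    `graph` maps a node to a list of (successor, cost) arcs. Solving
    one shortest path per job is what keeps the heuristic polynomial.
    """
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == sink:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [], sink
    while node in prev:                   # walk predecessors back
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist.get(sink)

g = {"start": [("machineA@t1", 3.0), ("machineB@t1", 2.0)],
     "machineA@t1": [("truck@t2", 1.0)], "machineB@t1": [("truck@t2", 2.5)],
     "truck@t2": [("customer", 4.0)]}
print(schedule_job(g, "start", "customer"))  # cheapest route, cost 8.0
```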
Procedia PDF Downloads 474
6533 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends
Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman
Abstract:
Co-firing of coal and biomass blends is an effective method to reduce the carbon dioxide emissions released by burning coal, thanks to the carbon-neutral nature of biomass. Besides, the use of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have drawbacks such as low calorific value and high moisture and volatile matter contents compared to coal. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment: it improves the calorific value of biomass along with serious reductions in the moisture and volatile matter contents. In this context, several woody biomass materials, including rhododendron, hybrid poplar, and ash-tree, were subjected to a torrefaction process in a horizontal tube furnace at 200°C under nitrogen flow. The solid residue obtained from torrefaction, also called 'biochar', was analyzed to monitor the variations taking place in the biomass properties. On the other hand, Turkish lignites from the Elbistan, Adıyaman-Gölbaşı and Çorum-Dodurga deposits were chosen as coal samples, since these lignites are of great importance to lignite-fired power stations in Turkey. These lignites were blended with the obtained biochars, with the blending ratio of biochar kept at 10 wt% so that the lignites were the dominant constituents in the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900°C, with a heating rate of 40°C/min, under a dry air atmosphere. Based on these burning tests, properties relevant to burning characteristics, such as the burning reactivity and burnout yields, could be compared to assess the effects of torrefaction and blending. Besides, characterization techniques including X-Ray Diffraction (XRD), Fourier Transform Infrared (FTIR) spectroscopy and Scanning Electron Microscopy (SEM) were applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites, and their blends to examine the co-combustion characteristics in detail. The results of this study revealed that blending lignite with 10 wt% biochar created synergistic behaviors during co-combustion in comparison to the individual burning of the ingredient fuels. The burnout and ignition performances of each blend were compared, taking into account the lignite and biomass structures and characteristics, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites decreased due to the addition of biomass, co-combustion is a reasonable and sustainable solution thanks to its environmental benefits, such as reductions in net carbon dioxide (CO2), SOx and hazardous organic chemicals derived from volatiles.
Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment
Procedia PDF Downloads 339
6532 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering
Authors: Emiel Caron
Abstract:
Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about the scientific references cited by patents; for example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g., to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references: they are often ambiguous, unstructured, and incomplete, and a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Thanks to the scoring, different rules can be combined to join scientific references, i.e., the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that score above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average 99% precision and 95% recall. The method is therefore accurate but careful, i.e., it weighs precision over recall; consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g., in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics
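A minimal Python sketch of the scoring-plus-clustering pipeline: rule scores reinforce each other for candidate pairs, and pairs above a threshold are merged by union-find, which realizes single-linkage connected components. The weights, threshold, and sample records are illustrative, not the values tuned for Patstat.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score(r1, r2):
    """Rule-based score for a candidate duplicate pair; each rule adds
    points so weak signals reinforce each other (weights invented)."""
    s = 0.0
    if r1["year"] and r1["year"] == r2["year"]:
        s += 2.0
    s += 3.0 * similarity(r1["title"], r2["title"])
    s += 1.0 * similarity(r1["journal"], r2["journal"])
    return s

def cluster(refs, threshold=4.5):
    """Single-linkage clustering via union-find on high-scoring pairs."""
    parent = list(range(len(refs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(refs)):
        for j in range(i + 1, len(refs)):   # blocking would prune this
            if score(refs[i], refs[j]) >= threshold:
                parent[find(i)] = find(j)   # union: single linkage
    return [find(i) for i in range(len(refs))]

refs = [{"year": "2004", "title": "Self-assembly of nanowires", "journal": "Nano Lett"},
        {"year": "2004", "title": "Self assembly of nano-wires", "journal": "Nano Letters"},
        {"year": "1999", "title": "Quantum dots", "journal": "Science"}]
print(cluster(refs))  # first two records share a cluster label
```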
Procedia PDF Downloads 194
6531 Image Based Landing Solutions for Large Passenger Aircraft
Authors: Thierry Sammour Sawaya, Heikki Deschacht
Abstract:
In commercial aircraft operations, almost half of all accidents happen during the approach or landing phases. Automatic guidance and automatic landing have proven to add significant safety value in this challenging landing phase. This is why Airbus and ScioTeq have decided to work together to explore the capability of image-based landing solutions as additional landing aids, to further expand the possibility of performing automatic approach and landing on runways where the current guidance systems are either not fitted or not optimal. Current systems for automated landing often depend on radio signals provided by ground infrastructure at the airport or on satellite coverage. In addition, these radio signals may not always be available with the integrity and performance required for safe automatic landing. Being independent of these radio signals would widen the operational possibilities and increase the number of automated landings. Airbus and ScioTeq are joining their expertise in the field of computer vision in the European programme Clean Sky 2 Large Passenger Aircraft, in which they lead the IMBALS (IMage BAsed Landing Solutions) project. The ultimate goal of this project is to demonstrate, develop, validate and verify a certifiable automatic landing system that guides an airplane during the approach and landing phases based on an onboard camera system capturing images, enabling automatic landing independent of radio signals and without precision instruments for landing. In the frame of this project, ScioTeq is responsible for the development of the Image Processing Platform (IPP), while Airbus is responsible for defining the functional and system requirements as well as testing and integrating the developed equipment in a Large Passenger Aircraft representative environment. This paper describes the system as well as the associated methods and tools developed for validation and verification.
Keywords: aircraft landing system, aircraft safety, autoland, avionic system, computer vision, image processing
Procedia PDF Downloads 101
6530 Satellite Imagery Classification Based on Deep Convolution Network
Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu
Abstract:
Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic-algorithm-based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results of the proposed hyper-parameter search method show that it guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
Keywords: satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization
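A compact sketch of a genetic hyper-parameter search in Python; the search space, genetic operators, and the toy fitness are assumptions for illustration, with `fitness(cfg)` standing in for a short training-and-validation run of the DCNN.

```python
import random

SPACE = {"lr": [1e-4, 1e-3, 1e-2], "filters": [32, 64, 128],
         "depth": [4, 6, 8]}  # toy search space, not the paper's

def evolve(fitness, pop_size=12, generations=10, mut_rate=0.3, seed=0):
    """Genetic search over hyper-parameter configurations.

    Selection keeps the top half by fitness; offspring mix two parents
    gene-wise (crossover) and mutate one gene with probability mut_rate.
    """
    rng = random.Random(seed)
    rand_cfg = lambda: {k: rng.choice(v) for k, v in SPACE.items()}
    pop = [rand_cfg() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice([a[k], b[k]]) for k in SPACE}  # crossover
            if rng.random() < mut_rate:                           # mutation
                k = rng.choice(list(SPACE))
                child[k] = rng.choice(SPACE[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness rewarding lr near 1e-3 and 64 filters (stand-in for
# validation accuracy of a briefly trained DCNN):
toy = lambda cfg: -abs(cfg["filters"] - 64) - 1000 * abs(cfg["lr"] - 1e-3)
print(evolve(toy))
```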
Procedia PDF Downloads 300
6529 The Neurofunctional Dissociation between Animal and Tool Concepts: A Network-Based Model
Authors: Skiker Kaoutar, Mounir Maouene
Abstract:
Neuroimaging studies have shown that animal and tool concepts rely on distinct networks of brain areas: animal concepts depend predominantly on temporal areas, while tool concepts rely on fronto-temporo-parietal areas. However, the origin of this neurofunctional distinction in the processing of animal and tool concepts remains unclear. Here, we address this question from a network perspective, suggesting that the neural distinction between animals and tools might reflect differences in their structural semantic networks. We build semantic networks for animal and tool concepts derived from the behavioral study by McRae and colleagues, conducted on a large number of participants. These two networks are analyzed through a large number of graph-theoretical measures of small-worldness: centrality, clustering coefficient, average shortest path length, as well as resistance to random and targeted attacks. The results indicate that both the animal and tool networks have small-world properties. More importantly, the animal network is more vulnerable to targeted attacks than the tool network, a result that correlates with brain lesion studies.
Keywords: animals, tools, network, semantics, small-world, resilience to damage
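The graph measures and attack simulations can be reproduced with networkx; the sketch below uses a synthetic small-world graph in place of the McRae-derived feature networks, so all numbers are illustrative.

```python
import random
import networkx as nx

def robustness(G, targeted=True, steps=10, seed=0):
    """Largest-component size as nodes are removed one by one.

    Targeted attacks delete the highest-degree node (hub) first; random
    attacks delete uniformly chosen nodes. A faster collapse under
    targeted attack indicates hub dependence, as reported here for the
    animal network.
    """
    rng = random.Random(seed)
    G = G.copy()
    sizes = []
    for _ in range(steps):
        if targeted:
            node = max(G.degree, key=lambda kv: kv[1])[0]
        else:
            node = rng.choice(list(G.nodes))
        G.remove_node(node)
        sizes.append(len(max(nx.connected_components(G), key=len)))
    return sizes

# Synthetic small-world graph standing in for a semantic network.
G = nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=1)
print(nx.average_clustering(G), nx.average_shortest_path_length(G))
print(robustness(G, targeted=True))   # compare with targeted=False
```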
Procedia PDF Downloads 543
6528 About Multi-Resolution Techniques for Large Eddy Simulation of Reactive Multi-Phase Flows
Authors: Giacomo Rossi, Bernardo Favini, Eugenio Giacomazzi, Franca Rita Picchia, Nunzio Maria Salvatore Arcidiacono
Abstract:
A numerical technique for mesh refinement in the HeaRT (Heat Release and Transfer) numerical code is presented. In the CFD framework, the Large Eddy Simulation (LES) approach is gaining importance as a tool for simulating turbulent combustion processes, although it has a high computational cost due to the complexity of the turbulence modeling and the large number of grid points necessary to obtain a good numerical solution. In particular, when a numerical simulation of a big domain is performed with a structured grid, the number of grid points can increase so much that the simulation becomes impossible; this problem can be overcome with a mesh refinement technique. The mesh refinement technique developed for the HeaRT numerical code (a staggered finite-difference code) is based on a high-order reconstruction of the variables at the grid interfaces by means of a least-squares quasi-ENO interpolation. The numerical code is written in modern Fortran (2003 standard or newer) and is parallelized using domain decomposition and the Message Passing Interface (MPI) standard.
Keywords: LES, multi-resolution, ENO, Fortran
Procedia PDF Downloads 366
6527 Study of the Behavior of Copper Immersed in Sea Water of the Bay of Large Agadir by Electrochemical Methods
Authors: Aicha Chaouay, Lahsen Bazzi, Mustapha Hilali
Abstract:
Seawater has chemical and biological characteristics that make it particularly aggressive with respect to the corrosion of many materials, including copper and low- or medium-alloy steels. These materials are widely used in the manufacture of port infrastructure in the marine environment, and such structures are exposed to two types of corrosion: general corrosion and localized corrosion caused by the presence of sulfite-reducing micro-organisms. This work contributes to the study of problems related to bacterial contamination of the marine environment of greater Agadir and evaluates the impact of this pollution on the corrosion resistance of copper. For this work, we carried out monthly samplings (October 2012 to February 2013) of seawater from the Anza area of the Bay of Agadir. After each sampling, a study of the electrochemical corrosion behavior of copper was carried out. Electrochemical corrosion parameters such as the corrosion potential, the corrosion current density, the charge transfer resistance and the double-layer capacitance were evaluated. The electrochemical techniques used in this work are potentiodynamic polarization curves and electrochemical impedance.
Keywords: Bay of Agadir, microbial contamination, seawater (Morocco), corrosion, copper
Procedia PDF Downloads 508
6526 Theoretical, Numerical and Experimental Assessment of Elastomeric Bearing Stability
Authors: Manuel A. Guzman, Davide Forcellini, Ricardo Moreno, Diego H. Giraldo
Abstract:
Elastomeric bearings (EBs) are used in many applications, such as the base isolation of bridges, seismic protection, and the vibration control of other structures and machinery. Their versatility is due to their particular behavior: they have different stiffnesses in the vertical and horizontal directions, allowing them to sustain vertical loads and, at the same time, horizontal displacements. Therefore, the vertical, horizontal and bending stiffnesses are important parameters to take into account in the design of EBs. In order to arrive at a proper design methodology for EBs, all three approaches (theoretical, finite element analysis, and experimental) should be taken into account: to assess stability under different loading states, to predict EB behavior and its effects on the dynamic response of structures, and to understand the complex behavior and properties of rubber-like materials, respectively. In particular, the recent large-displacement theory on the stability of EBs formulated by Forcellini and Kelly is validated with both numerical simulations using the finite element method and experimental results obtained at the University of Antioquia in Medellin, Colombia. In this regard, this study reproduces the behavior of EBs under compression loads and investigates their stability behavior from the three mentioned points of view.
Keywords: elastomeric bearings, experimental tests, numerical simulations, stability, large-displacement theory
Procedia PDF Downloads 459
6525 Attribute Index and Classification Method of Earthquake Damage Photographs of Engineering Structure
Authors: Ming Lu, Xiaojun Li, Bodi Lu, Juehui Xing
Abstract:
The damage phenomena of each large earthquake provide a comprehensive and profound real-world test of the dynamic performance and failure mechanisms of different engineering structures. Learning about the characteristics of engineering structures through seismic damage phenomena is often far superior to expensive shaking-table experiments. After an earthquake, people record many different types of engineering damage photographs; however, a large number of earthquake damage photographs lack sufficient information, which reduces their value. To improve the research value and the use efficiency of engineering seismic damage photographs, this paper aims to capture and present the seismic damage background information, which includes the earthquake magnitude, the earthquake intensity, and the characteristics of the damaged structures. Based on research requirements in the earthquake engineering field, the authors use photographs of the 2008 Wenchuan M8.0 earthquake in China and provide four kinds of attribute indexes and classifications: seismic information, structure type, earthquake damage part, and disaster causation factor. The final objective is to set up an engineering structural seismic damage database based on these four attribute indicators and classifications, and eventually to build a website providing seismic damage photographs.
Keywords: attribute index, classification method, earthquake damage picture, engineering structure
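A minimal sketch of how one record in such a database might be structured, with the four attribute indexes as fields; all field names and sample values are assumptions, not the paper's schema.

```python
from dataclasses import dataclass

@dataclass
class DamagePhoto:
    """One record of a seismic-damage photo database, carrying the four
    attribute indexes described above (hypothetical field names)."""
    file: str
    seismic_info: dict      # e.g. {"magnitude": 8.0, "intensity": "XI"}
    structure_type: str     # e.g. "RC frame", "masonry", "bridge"
    damaged_part: str       # e.g. "column", "beam-column joint"
    causation: str          # e.g. "soft story", "ground failure"

photos = [DamagePhoto("wc_0001.jpg", {"magnitude": 8.0, "intensity": "XI"},
                      "masonry", "wall", "shear failure")]
# A query then filters on any combination of the four indexes:
hits = [p for p in photos
        if p.structure_type == "masonry"
        and p.seismic_info["magnitude"] >= 8.0]
print(len(hits))
```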
Procedia PDF Downloads 765
6524 Low Dose In-Line Electron Holography for 3D Atomic Resolution Tomography of Soft Materials
Authors: F. R. Chen, C. Kisielowski, D. Van Dyck
Abstract:
In principle, the latest generation of aberration-corrected transmission electron microscopes (TEMs) can achieve sub-Å resolution, but there is a bottleneck that hinders the final step towards revealing 3D structure. In order to achieve a resolution around 1 Å with single-atom sensitivity, the electron dose rate needs to be sufficiently large (10⁴-10⁵ e Å⁻² s⁻¹). With such a large dose rate, the electron beam can induce surface alterations or even bulk modifications, in particular for electron-beam-sensitive (soft) materials such as nm-sized particles, organic materials, proteins or macro-molecules. We demonstrate a methodology of low-dose electron holography for observing the 3D structure of soft materials, such as single oleic acid molecules, at atomic resolution. The main improvement of this new type of electron holography is based on two concepts. First, the total electron dose is distributed over many images obtained at different defocus values, from which the electron hologram is then reconstructed. Second, in contrast to current tomographic methods that require projections from several directions, the 3D structural information of the nano-object is extracted from this one hologram obtained from only one viewing direction.
Keywords: low dose electron microscopy, in-line electron holography, atomic resolution tomography, soft materials
Procedia PDF Downloads 192
6523 Momentum in the Stock Exchange of Thailand
Authors: Mussa Hussaini, Supasith Chonglerttham
Abstract:
Stocks are usually classified according to characteristics unique enough that the performance of each category can be differentiated from another. The reasons behind such classifications in the financial market are sometimes financial innovation, or the discovery of a premium in a group of stocks with similar features. One of the major classifications in stock markets is the momentum strategy, under which stocks are classified according to their past performance into past winners and past losers. Momentum in a stock market refers to the idea that stocks will keep moving in the same direction: stocks with rising prices (past winners) will continue to rise, and stocks with falling prices (past losers) will continue to fall. The performance of this classification has been well documented in numerous studies in different countries, which suggest that past winners tend to outperform past losers in the future. However, academic research in this direction has been limited in countries such as Thailand and, to the best of our knowledge, there has been no such study in Thailand after the financial crisis of 1997. The significance of this study stems from the fact that Thailand is an open market that has been encouraging foreign investment as a means to enhance employment, promote economic development, and transfer technology, and the main equity market in Thailand, the Stock Exchange of Thailand, is a crucial channel for foreign investment inflows into the country. The equity market size in Thailand increased from $1.72 billion in 1984 to $133.66 billion in 1993, an increase of over 77 times within a decade. The main contribution of this paper is evidence for size categories in the context of the Thai equity market. Almost all previous studies have focused solely on large stocks or indices; this paper extends the scope beyond large stocks and indices by including small and tiny stocks as well. Further, since there is a distinct absence of detailed academic research on momentum strategies in the Stock Exchange of Thailand after the crisis, this paper also extends the existing literature. This research is also of significance for researchers who would like to compare the performance of this strategy in different countries and markets. In the Stock Exchange of Thailand, we examined the performance of the momentum strategy from 2010 to 2014, with portfolio returns calculated on a monthly basis. Our results confirm that there is a positive momentum profit in large-size stocks, whereas there is a negative momentum profit in small-size stocks, during the period 2010 to 2014. Furthermore, the equal-weighted average of the momentum profits of both the small and large size categories does not provide any indication of an overall momentum profit.
Keywords: momentum strategy, past loser, past winner, stock exchange of Thailand
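A minimal pandas sketch of winner/loser portfolio formation; the six-month formation window, decile cut-offs, and one-month holding period are common illustrative choices, not necessarily the paper's exact design, and the price data below are synthetic.

```python
import numpy as np
import pandas as pd

def momentum_portfolios(prices, lookback=6, quantile=0.1):
    """Monthly winner/loser portfolio returns from a price panel.

    `prices`: DataFrame of month-end prices, one column per stock.
    Stocks are ranked on their past `lookback`-month return; the top
    and bottom deciles form the winner and loser portfolios, each held
    for one month.
    """
    rets = prices.pct_change()
    formation = prices.pct_change(lookback).shift(1)  # avoid look-ahead
    win, lose = [], []
    for t in formation.index[lookback + 1:]:
        ranks = formation.loc[t].dropna().rank(pct=True)
        win.append(rets.loc[t, ranks[ranks >= 1 - quantile].index].mean())
        lose.append(rets.loc[t, ranks[ranks <= quantile].index].mean())
    out = pd.DataFrame({"winners": win, "losers": lose},
                       index=formation.index[lookback + 1:])
    out["momentum"] = out.winners - out.losers  # long-short profit
    return out

rng = np.random.default_rng(0)
prices = pd.DataFrame(np.exp(rng.normal(0, 0.05, (60, 50)).cumsum(axis=0)))
print(momentum_portfolios(prices).mean())
```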
Procedia PDF Downloads 317
6522 Numerical Simulation of Supersonic Gas Jet Flows and Acoustics Fields
Authors: Lei Zhang, Wen-jun Ruan, Hao Wang, Peng-Xin Wang
Abstract:
Jet noise is generated by the rocket exhaust plume during rocket engine testing. A domain decomposition approach is applied to jet noise prediction in this paper. The aerodynamic noise coupling is based on splitting the problem into acoustic source generation and sound propagation in separate physical domains. Large Eddy Simulation (LES) is used to simulate the supersonic jet flow. Based on the simulated flow fields, the distribution of the jet noise sound pressure level is obtained by applying the Ffowcs Williams-Hawkings (FW-H) acoustics equation and the Fourier transform. The calculation results show that complex structures of expansion waves, compression waves and the turbulent boundary layer can occur due to the strong interaction between the gas jet and the ambient air. In addition, the jet core region, the shock cells and the sound pressure level of the gas jet increase with increasing nozzle size. Importantly, the numerical simulation results for the far-field sound are in good agreement with the experimental directivity measurements.
Keywords: supersonic gas jet, Large Eddy Simulation (LES), acoustic noise, Ffowcs Williams-Hawkings (FW-H) equations, nozzle size
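The final post-processing step, turning a far-field pressure signal (such as an FW-H observer output) into a sound pressure level, is standard and can be sketched in a few lines of Python; the signal here is synthetic.

```python
import numpy as np

def spl_db(p, p_ref=2e-5):
    """Overall sound pressure level (dB) of a pressure-time signal (Pa),
    i.e. RMS pressure referenced to 20 micropascal."""
    p_rms = np.sqrt(np.mean((p - p.mean()) ** 2))
    return 20.0 * np.log10(p_rms / p_ref)

# Synthetic far-field signal standing in for an FW-H observer output.
t = np.linspace(0.0, 0.1, 48000)
p = 20.0 * np.sin(2 * np.pi * 1000 * t) + 5.0 * np.random.randn(t.size)
print(spl_db(p))                       # overall level, roughly 117 dB
spectrum = np.abs(np.fft.rfft(p - p.mean())) / t.size  # narrow-band content
```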
Procedia PDF Downloads 413
6521 Taleghan Dam Break Numerical Modeling
Authors: Hamid Goharnejad, Milad Sadeghpoor Moalem, Mahmood Zakeri Niri, Leili Sadeghi Khalegh Abadi
Abstract:
While there are many benefits to using reservoir dams, their failure leads to destructive effects. From the viewpoint of the International Commission on Large Dams (ICOLD), a dam break means the collapse of the whole or some parts of a dam, leaving the dam unable to hold back water. Therefore, studying the dam break phenomenon and predicting its behavior and effects reduces the losses and damages of this phenomenon. One of the most common types of reservoir dams is the embankment dam. Overtopping in embankment dams occurs because of the inability of the flood discharge system to release the inflows to the reservoir. An important issue for managers and engineers is evaluating the performance of the reservoir dam rim, since material sliding into the storage can create large and long waves. In this study, the effects of floods that caused the overtopping of the dam have been investigated, under the assumption that the spillway is unable to release the inflow. To determine the outflow hydrograph resulting from the dam break, a numerical model using the Flow-3D software and empirical equations was used. The results of the numerical models and their comparison with the empirical equations show that both can be used to study the flood resulting from a dam break.
Keywords: embankment dam break, empirical equations, Taleghan dam, Flow-3D numerical model
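As an example of the empirical-equation side of such a comparison, one widely cited breach relation is Froehlich's (1995) peak-outflow formula, reproduced here from memory and evaluated with illustrative inputs rather than Taleghan dam data.

```python
def froehlich_peak_discharge(v_w, h_w):
    """Empirical peak breach outflow, Froehlich (1995) form:
    Qp = 0.607 * Vw**0.295 * Hw**1.24,
    with reservoir volume Vw in m^3, water depth above the breach
    invert Hw in m, and Qp in m^3/s (quoted from memory)."""
    return 0.607 * v_w**0.295 * h_w**1.24

# Illustrative inputs only, not Taleghan dam data:
print(froehlich_peak_discharge(3.0e8, 100.0))  # roughly 6e4 m^3/s
```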
Procedia PDF Downloads 321
6520 Exploring the Relationship between the Adoption of Environmental Processes, Policies, Techniques and Environmental Operational Performance
Authors: Renata Konadu
Abstract:
Over the last two decades, the concept of environmental management and its related issues have received increased attention in global discourse and on the management research agenda, due to climate change and other environmental challenges. To abate and avert these challenges, diverse environmental policies, strategies and practices have been adopted by businesses and by economies as a whole. The extant literature has placed much emphasis on whether improved environmental operational performance improves firm performance; however, there is a large gap in the literature with regard to whether the adoption of environmental management practices and policies has a direct relationship with environmental operational performance (EOP). The current paper is intended to provide a comprehensive perspective on how different aspects of environmental management relate to firms' EOP. Using a panel regression analysis of 149 large listed firms in the UK, the study found evidence of both negative and positive statistically significant links between some Environmental Policies (EP), Environmental Processes (EPR), Environmental Management Systems (EMS) and EOP. The findings suggest that when relating EP, EPR and EMS to greenhouse gas (GHG) emissions, for instance, the latter should be viewed separately in Scopes 1, 2 and 3, as defined by the GHG Protocol. The results have useful implications for policy makers and managers when designing strategies and policies to reduce negative environmental impacts.
Keywords: environmental management, environmental operational performance, GHGs, large listed firms
Procedia PDF Downloads 254
6519 Comparison between Ultra-High-Performance Concrete and Ultra-High-Performance-Glass Concrete
Authors: N. A. Soliman, A. F. Omran, A. Tagnit-Hamou
Abstract:
Finely ground waste glass has been successfully used by the authors to develop and patent an ecological ultra-high-performance concrete (UHPC), named ultra-high-performance glass concrete (UHPGC). After its successful development in the laboratory, the current research presents a comparison between traditional UHPC and UHPGC produced using a large-scale pilot-plant mixer, in terms of rheological, mechanical, and durability properties. The rheology of the UHPGCs was improved due to the non-absorptive nature of the glass particles. The mechanical performance of the UHPGC was comparable and very close to that of traditional UHPC, due to the pozzolanic reactivity of the amorphous waste glass. The UHPGC has also shown excellent durability: negligible permeability (chloride-ion charge passed ≈ 20 coulombs in the RCPT test), high abrasion resistance (volume loss index less than 1.3), and almost no freeze-thaw deterioration, even after 1000 freeze-thaw cycles. The enhancement in the strength and rigidity of the UHPGC mixture can be attributed to the inclusion of the glass particles, which have very high strength and elastic modulus.
Keywords: ground glass pozzolan, large-scale production, sustainability, ultra-high performance glass concrete
Procedia PDF Downloads 157
6518 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India
Authors: Zafar Beg, Kumar Gaurav
Abstract:
The lost Saraswati is described as a large perennial river which was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River, and it is believed that one of the earliest known ancient civilizations, the 'Indus-Saraswati civilization', prospered along its course. The demise of the Indus civilization is considered to be due to the desiccation of the river. Today the Sutlej-Yamuna interfluve carries an ephemeral river known as the Ghaggar. It is believed that, along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati River was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo-discharge and paleo-width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati River carried the combined flow, or some part of it, of the Yamuna, Sutlej and Ghaggar catchments. We first established regime relationships between drainage area and channel width, and between catchment area and discharge, for 29 rivers presently flowing on the Himalayan Foreland, from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when plotted against their corresponding catchment areas. Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we add the discharges and widths obtained from the individual catchments to estimate the paleo-width and paleo-discharge of the Saraswati River. Our regime curves provide a first-order estimate of the paleo-discharge of the lost Saraswati.
Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River
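A regime curve of this kind is a power law fitted on log-log axes; the Python sketch below fits Q = a * A^b to synthetic data standing in for the 29 Himalayan Foreland rivers (the exponent, coefficient, and the example catchment area are invented, and the same fit applies to the width-area relation).

```python
import numpy as np

def fit_regime_curve(area_km2, discharge_m3s):
    """Fit Q = a * A**b by ordinary least squares on log-log axes."""
    b, log_a = np.polyfit(np.log(area_km2), np.log(discharge_m3s), 1)
    return np.exp(log_a), b

# Synthetic data following Q ~ 0.5 * A^0.8 with lognormal scatter.
rng = np.random.default_rng(0)
A = rng.uniform(1e3, 1e5, 29)
Q = 0.5 * A**0.8 * rng.lognormal(0.0, 0.2, 29)
a, b = fit_regime_curve(A, Q)

# Paleo-discharge: evaluate the curve at a palaeochannel's catchment
# area, then sum the Sutlej, Yamuna and Ghaggar contributions.
print(a * 65000**b)
```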
Procedia PDF Downloads 179
6517 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales has been a challenging task. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have developed rapidly, leading to significant improvements in computational speed; however, ensuring stable and accurate long-term predictions remains challenging for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term prediction of the nonlinear dynamics of three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers, which deepen the network, with the U-Net architecture, which accurately captures small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, the implicit FNO (IFNO), and the U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, the probability density functions (PDFs) of vorticity and velocity increments, and the instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were a limitation of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computation than traditional large-eddy simulations using the DSM model, and it generalizes to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
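The core FNO building block is a spectral convolution: transform to Fourier space, keep the lowest modes, multiply by learned complex weights, and transform back; an "implicit" layer then reuses one such block recurrently with shared weights. Below is a generic 1D PyTorch sketch of that idea, not the IU-FNO code (which operates on 3D fields and adds the U-Net path).

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """Minimal 1-D Fourier layer: FFT, truncate to `modes` low
    frequencies, multiply by learned complex weights, inverse FFT."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.w = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes,
                                dtype=torch.cfloat))

    def forward(self, x):                 # x: (batch, channels, nx)
        x_ft = torch.fft.rfft(x)          # to spectral space
        out = torch.zeros_like(x_ft)
        out[..., :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.modes], self.w)
        return torch.fft.irfft(out, n=x.size(-1))

# Implicit recurrence: one shared layer applied K times deepens the
# network without adding parameters.
layer = SpectralConv1d(channels=8, modes=12)
u = torch.randn(4, 8, 64)
for _ in range(4):
    u = u + torch.relu(layer(u))
print(u.shape)                            # torch.Size([4, 8, 64])
```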
Procedia PDF Downloads 74