Search results for: edge computing module
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2491

1741 Evaluation of the Urban Landscape Structures and Dynamics of Hawassa City, Using Satellite Images and Spatial Metrics Approaches, Ethiopia

Authors: Berhanu Terfa, Nengcheng C.

Abstract:

The study deals with the analysis of urban expansion and land transformation of Hawassa City using remote sensing data and landscape metrics during the last three decades (1987–2017). Remote sensing data from various multi-temporal satellite images, viz. TM (1987), TM (1995), ETM+ (2005) and OLI (2017), were used to examine the urban expansion, growth types, and spatial isolation within the urban landscape to develop an understanding of the trends of built-up growth in Hawassa City, Ethiopia. Landscape metrics and built-up density were employed to analyze the pattern, process and overall growth status. The area under investigation was divided into concentric circles, with consecutive circles of 1 km incremental radius from the central pixel (Central Business District), for analysis. The result exhibited that the built-up area had increased by 541.32% between 1987 and 2017, and an extension growth type (more than 67%) was observed. The major growth took place in the north-west direction followed by the north direction in a haphazard manner during the 1987–1995 period, whereas predominant built-up development was observed in the south and southwest directions during the 1995–2017 period. Landscape metrics results revealed that the urban patch density, total edge and edge density increased, while the mean nearest neighbors’ distance decreased, showing the tendency of sprawl.
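
As an illustration of how metrics of this kind can be computed from a classified built-up raster, the sketch below (a minimal example, not the authors' actual workflow; the synthetic array, cell size, and exact metric definitions are assumptions) derives patch density, total edge, and edge density with numpy and scipy.

```python
import numpy as np
from scipy import ndimage

def landscape_metrics(built_up, cell_size=30.0):
    """Compute simple landscape metrics for a binary built-up raster.

    built_up  : 2-D array of 0/1 values (1 = built-up pixel)
    cell_size : pixel edge length in metres (e.g., 30 m for Landsat TM/ETM+/OLI)
    """
    # Patch density: number of 8-connected built-up patches per square km
    labeled, n_patches = ndimage.label(built_up, structure=np.ones((3, 3)))
    area_km2 = built_up.size * (cell_size ** 2) / 1e6
    patch_density = n_patches / area_km2

    # Total edge: length of boundaries between built-up and non-built-up cells
    horiz = np.abs(np.diff(built_up.astype(int), axis=1)).sum()
    vert = np.abs(np.diff(built_up.astype(int), axis=0)).sum()
    total_edge = (horiz + vert) * cell_size          # metres
    edge_density = total_edge / area_km2             # metres per square km

    return {"patch_density": patch_density,
            "total_edge_m": total_edge,
            "edge_density_m_per_km2": edge_density}

# Example with a synthetic raster (values are illustrative only)
raster = (np.random.rand(500, 500) > 0.9).astype(int)
print(landscape_metrics(raster))
```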

Keywords: landscape metrics, spatial patterns, remote sensing, multi-temporal, urban sprawl

Procedia PDF Downloads 286
1740 Analyzing Temperature and Pressure Performance of a Natural Air-Circulation System

Authors: Emma S. Bowers

Abstract:

Perturbations in global environments and temperatures have heightened the urgency of creating cost-efficient, energy-neutral building techniques. Structural responses to this thermal crisis have included designs (including those of the building standard PassivHaus) with airtightness, window placement, insulation, solar orientation, shading, and heat-exchange ventilators as potential solutions or interventions. Limitations in the predictability of the circulation of cooled air through the ambient temperature gradients throughout a structure are one of the major obstacles facing these enhanced building methods. A diverse range of air-cooling devices utilizing varying technologies is implemented around the world. Many of them worsen the problem of climate change by consuming energy. Using natural ventilation principles of air buoyancy and density to circulate fresh air throughout a building with no energy input can combat these obstacles. A unique prototype of an energy-neutral air-circulation system was constructed in order to investigate potential temperature and pressure gradients related to the stack effect (updraft of air through a building due to changes in air pressure). The stack effect principle maintains that since warmer air rises, it will leave an area of low pressure that cooler air will rush in to fill. The result is that warmer air will be expelled from the top of the building as cooler air is directed through the bottom, creating an updraft. Stack effect can be amplified by cooling the air near the bottom of a building and heating the air near the top. Using readily available, mostly recyclable or biodegradable materials, an insulated building module was constructed. A tri-part construction model was utilized: a subterranean earth-tube heat exchanger constructed of PVC pipe and placed in a horizontally oriented trench, an insulated, airtight cube aboveground to represent a building, and a solar chimney (painted black to increase heat in the out-going air). Pressure and temperature sensors were placed at four different heights within the module as well as outside, and data was collected for a period of 21 days. The air pressures and temperatures over the course of the experiment were compared and averaged. The promise of this design is that it represents a novel approach which directly addresses the obstacles of air flow and expense, using the physical principle of stack effect to draw a continuous supply of fresh air through the structure, using low-cost and readily available materials (and zero manufactured energy). This design serves as a model for novel approaches to creating temperature controlled buildings using zero energy and opens the door for future research into the effects of increasing module scale, increasing length and depth of the earth tube, and shading the building. (Model can be provided).
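
For readers who want to estimate the driving pressure behind the stack effect described above, the short sketch below uses the standard ideal-gas buoyancy relation; it is a generic textbook estimate with assumed temperatures and stack height, not data from the prototype module.

```python
# Stack-effect draft estimate: the warmer air column in the chimney is less dense
# than the ambient air, so a column of height h gives dP = (rho_out - rho_in) * g * h.
P_ATM = 101325.0   # Pa, ambient pressure
R_AIR = 287.05     # J/(kg*K), specific gas constant of dry air
G = 9.81           # m/s^2

def air_density(temp_c, pressure=P_ATM):
    return pressure / (R_AIR * (temp_c + 273.15))

def stack_pressure(height_m, t_chimney_c, t_ambient_c):
    """Pressure difference (Pa) driving the updraft through a stack of given height."""
    return (air_density(t_ambient_c) - air_density(t_chimney_c)) * G * height_m

# Illustrative numbers only: 3 m tall module, 35 C air in the solar chimney,
# 20 C ambient air entering through the earth-tube heat exchanger.
print(f"Driving pressure: {stack_pressure(3.0, 35.0, 20.0):.2f} Pa")
```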

Keywords: air circulation, PassivHaus, stack effect, thermal gradient

Procedia PDF Downloads 154
1739 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach

Authors: Mohammad H. Almomani

Abstract:

In this paper, we examine the effect of the initial sample size and the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of actual best k% designs with high probability. Then, in the second stage, the optimal computing budget allocation is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of initial sample size and increment in simulation samples, to explore the impacts on the performance of this approach. The results show that the choice of initial sample size and the increment in simulation samples does affect the performance of the selection approach.
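
To make the two-stage structure concrete, the sketch below simulates a large set of designs, screens a subset with a first-stage ordinal-optimization pass of n0 replications each, and then spends the remaining budget in increments of delta. The allocation rule here is a simplified variance-based heuristic standing in for the optimal computing budget allocation, and all parameters (number of designs, n0, delta, budget) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(design_means, design, n):
    """n noisy simulation replications of one design (smaller mean = better)."""
    return design_means[design] + rng.normal(0.0, 1.0, size=n)

def sequential_select(design_means, m=5, subset_size=50, n0=10, delta=20, budget=5000):
    k = len(design_means)
    samples = {d: list(simulate(design_means, d, n0)) for d in range(k)}
    spent = k * n0

    # Stage 1: ordinal optimization -- keep the subset with the best sample means
    order = sorted(range(k), key=lambda d: np.mean(samples[d]))
    subset = order[:subset_size]

    # Stage 2: spend the remaining budget in increments of delta on the subset.
    # Simplified heuristic: give replications to the design whose sample mean is
    # still the most uncertain (largest standard error), not the exact OCBA rule.
    while spent + delta <= budget:
        target = max(subset, key=lambda d: np.std(samples[d]) / np.sqrt(len(samples[d])))
        samples[target].extend(simulate(design_means, target, delta))
        spent += delta

    return sorted(subset, key=lambda d: np.mean(samples[d]))[:m]

true_means = rng.normal(0.0, 1.0, size=1000)          # 1000 alternative designs
picked = sequential_select(true_means, m=5)
actual_best = np.argsort(true_means)[:5]
print("selected:", sorted(picked), "actual top-5:", sorted(actual_best.tolist()))
```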

Keywords: large scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization

Procedia PDF Downloads 355
1738 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer allows finding the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the need for computers with high memory usage.

Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories

Procedia PDF Downloads 283
1737 Comparison Between Two Techniques (Extended Source to Surface Distance & Field Alignment) of Craniospinal Irradiation (CSI) in the Eclipse Treatment Planning System

Authors: Naima Jannat, Ariful Islam, Sharafat Hossain

Abstract:

Due to the involvement of a large target volume, Craniospinal Irradiation makes it challenging to achieve a uniform dose, and it requires different isocenters. This isocentric junction needs to shift after every five fractions to overcome the possibility of hot and cold spots. This study aims to evaluate the Planning Target Volume coverage and Organ at Risk sparing between the two techniques and shows that the Field Alignment technique does not need replanning and resetting. Planning methods for Craniospinal Irradiation in the Eclipse treatment planning system with the Field Alignment and Extended Source to Surface Distance techniques were developed, where 36 Gy in 20 fractions at 1.8 Gy per fraction was prescribed. The patient was immobilized in the prone position. In the Field Alignment technique, the plan consists of half-beam-blocked parallel opposed cranial fields and a single posterior cervicospine field sharing the same isocenter, which obviates divergence matching. Further, a single field was created to treat the remaining lumbosacral spine. To match the inferior diverging edge of the cervicospine field with the superior diverging edge of the lumbosacral field, the field alignment option was used, which automatically matches the field edge divergence as per the field alignment rule in the Eclipse Treatment Planning System, with the couch set to 270°. In the Extended Source to Surface Distance technique, two parallel opposed fields were created for the cranium, and a single posterior cervicospine field was created where the Source to Surface Distance was 120-140 cm. Dose Volume Histograms were obtained for each organ contoured and for each technique used. In all patients, the maximum dose to the Planning Target Volume is higher for the Extended Source to Surface Distance technique than for the Field Alignment technique. The dose to all surrounding structures was increased with the use of a single Extended Source to Surface Distance field when compared to the Field Alignment technique. The average mean doses to the Eye, Brain Stem, Kidney, Oesophagus, Heart, Liver, Lung, and Ovaries were respectively (58% & 60%), (103% & 98%), (13% & 15%), (10% & 63%), (12% & 16%), (33% & 30%), (14% & 18%), and (69% & 61%) for the Field Alignment and Extended Source to Surface Distance techniques. However, the clinical target volume at the spine junction site received a less homogeneous dose with the Field Alignment technique as compared to Extended Source to Surface Distance. We conclude that, although the single-field Extended Source to Surface Distance technique delivered a more homogeneous dose, its maximum dose is higher than that of the Field Alignment technique. Also, a major advantage of the Field Alignment technique for Craniospinal Irradiation is that it does not need replanning and resetting of patients after every five fractions, and 95% of the prescribed dose was received by more than 95% of the Planning Target Volume in all plans with acceptable hot spots.

Keywords: craniospinal irradiation, cranium, cervicospine, immobilize, lumbosacral spine

Procedia PDF Downloads 116
1736 Compact Dual-band 4-MIMO Antenna Elements for 5G Mobile Applications

Authors: Fayad Ghawbar

Abstract:

The Multiple Input Multiple Output (MIMO) system is essential in the 5G wireless communication system to enhance channel capacity and provide a high data rate, resulting in a need for dual polarization in the vertical and horizontal planes. Furthermore, size reduction is critical in a MIMO system to deploy more antenna elements, requiring a compact, low-profile design. A compact dual-band 4-MIMO antenna system is presented in this paper with pattern and polarization diversity. The proposed single antenna structure has been designed using two antenna layers with a C shape in the front layer and a partial slot with a U-shaped cut in the ground to enhance isolation. The single antenna is printed on an FR4 dielectric substrate with an overall size of 18 mm × 18 mm × 1.6 mm. The 4-MIMO antenna elements were printed orthogonally on an FR4 substrate with dimensions of 36 × 36 × 1.6 mm³ and zero edge-to-edge separation distance. The proposed compact 4-MIMO antenna elements resonate at 3.4-3.6 GHz and 4.8-5 GHz. The measured and simulated S-parameters agree, especially in the lower band, with a slight frequency shift of the measured results at the upper band due to fabrication imperfections. The proposed design shows isolation above -15 dB and -22 dB across the 4-MIMO elements. The MIMO diversity performance has been evaluated in terms of efficiency, ECC, DG, TARC, and CCL. The total and radiation efficiencies were above 50% across all elements in both frequency bands. The ECC values were lower than 0.10, and the DG results were about 9.95 dB in all antenna elements. TARC results exhibited values lower than 0 dB, with values lower than -25 dB in all MIMO elements at the dual bands. Moreover, the channel capacity losses in the MIMO system were depicted using CCL, with values lower than 0.4 bits/s/Hz.
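
As a pointer to how the diversity figures quoted above are typically derived, the snippet below computes the envelope correlation coefficient from the S-parameters of a two-port pair (under the usual lossless-antenna assumption) and the corresponding diversity gain; the example S-values are placeholders, not the measured data of this antenna.

```python
import numpy as np

def ecc_from_s_params(s11, s12, s21, s22):
    """Envelope correlation coefficient between two antenna ports from S-parameters
    (assumes lossless antennas)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

def diversity_gain(ecc):
    """Apparent diversity gain in dB for a given ECC."""
    return 10.0 * np.sqrt(1.0 - ecc ** 2)

# Placeholder S-parameters (linear complex values), not measured data of this design
s11, s22 = 0.18 + 0.05j, 0.20 - 0.03j
s12 = s21 = 0.04 - 0.02j

ecc = ecc_from_s_params(s11, s12, s21, s22)
print(f"ECC = {ecc:.4f}, DG = {diversity_gain(ecc):.2f} dB")
```

Note that an ECC of 0.10 gives DG = 10·sqrt(1 − 0.10²) ≈ 9.95 dB, consistent with the values reported above.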

Keywords: compact antennas, MIMO antenna system, 5G communication, dual band, ECC, DG, TARC

Procedia PDF Downloads 143
1735 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the crazy amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC) due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how Protein Folding (the structure and function of proteins) and Phylogeny Reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. This article also indicates solutions to optimization problems and the benefits of Big Data for Computational Biology. The article illustrates the current state of the art and the future generation of HPC computing with Big Data in biology.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 364
1734 POD and Wavelets Application for Aerodynamic Design Optimization

Authors: Bonchan Koo, Junhee Han, Dohyung Lee

Abstract:

The research attempts to evaluate the accuracy and efficiency of a design optimization procedure which combines a wavelets-based solution algorithm and a proper orthogonal decomposition (POD) database management technique. The aerodynamic design procedure calls for high-fidelity computational fluid dynamics (CFD) simulations and the consideration of a large number of flow conditions and design constraints. Even with significant computing power advancement, the current level of integrated design process requires substantial computing time and resources. POD reduces the degrees of freedom of the full system by conducting singular value decomposition on various field simulations. For additional efficiency improvement of the procedure, an adaptive wavelet technique is also employed during the POD training period. The proposed design procedure was applied to the optimization of wing aerodynamic performance. Throughout the research, it was confirmed that the POD/wavelets design procedure could significantly reduce the total design turnaround time and is also able to capture all detailed complex flow features as in the full-order analysis.
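
The core of the POD step can be illustrated in a few lines: collect flow-field snapshots as columns of a matrix, take its SVD, and keep the leading modes. The random snapshot matrix and mode count below are placeholders standing in for the CFD training data, not the actual wing database.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Proper orthogonal decomposition of a snapshot matrix.

    snapshots : (n_dof, n_snapshots) array, one flow-field solution per column
    n_modes   : number of POD modes to retain
    Returns the mean field, the basis (n_dof, n_modes), the singular values,
    and the reduced-order coefficients of each snapshot.
    """
    mean_field = snapshots.mean(axis=1, keepdims=True)
    fluctuations = snapshots - mean_field
    U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
    basis = U[:, :n_modes]
    coeffs = basis.T @ fluctuations          # projection onto the reduced basis
    return mean_field, basis, s, coeffs

# Placeholder "snapshots": 10,000 degrees of freedom, 40 flow conditions
snapshots = np.random.rand(10_000, 40)
mean_field, basis, s, coeffs = pod_basis(snapshots, n_modes=8)
energy = (s[:8] ** 2).sum() / (s ** 2).sum()
print(f"Retained {energy:.1%} of the snapshot energy with 8 modes")
```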

Keywords: POD (Proper Orthogonal Decomposition), wavelets, CFD, design optimization, ROM (Reduced Order Model)

Procedia PDF Downloads 467
1733 How to Use Big Data in Logistics Issues

Authors: Mehmet Akif Aslan, Mehmet Simsek, Eyup Sensoy

Abstract:

Big Data stands for today’s cutting-edge technology. As the technology becomes widespread, so does data. Utilizing massive data sets enables companies to get competitive advantages over their adversaries. Out of the many areas of Big Data usage, logistics has a significant role in both the commercial sector and the military. This paper lays out what Big Data is and how it is used in both military and commercial logistics.

Keywords: big data, logistics, operational efficiency, risk management

Procedia PDF Downloads 641
1732 Study of Lamination Quality of Semi-Flexible Solar Modules with Special Textile Materials

Authors: K. Drabczyk, Z. Starowicz, S. Maleczek, P. Zieba

Abstract:

The army, police and fire brigade commonly use dedicated equipment based on special textile materials. The properties of these textiles should ensure the protection of human life and health. Equally important is the ability to use electronic equipment, and this requires access to a source of electricity. Photovoltaic cells integrated with such textiles can be a solution to this problem in most outdoor circumstances. One idea may be to laminate the cells to the textile without changing its properties. The main goal of this work was to analyze the lamination quality of a specially designed semi-flexible solar module with special textile materials as a backsheet. In the first step of the investigation, the quality of lamination was determined using a device equipped with a dynamometer. In this work, crystalline silicon solar cells (50 x 50 mm) and thin chemically tempered glass (62 x 62 mm, 0.8 mm thick) were used. The obtained results showed a correlation between breaking force and the type of textile weave and fiber. The breaking force was in the ranges 4.5-5.5 N, 15-20 N and 30-33 N, depending on the type of weave and fiber. To verify these observations, microscopic and FTIR analyses of the fibers were performed. The studies showed that the special textile can be used as a backsheet of semi-flexible solar modules. This work presents a new composition of a solar module with a special textile layer which, to the best of our knowledge, has not been published so far. Moreover, the work presents original investigations on the adhesion of EVA (ethylene-vinyl acetate) polymer to textile with respect to the fiber structure of the laminated substrate. This work is realized for the GEKON project (No. GEKON2/O4/268473/23/2016) sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management.

Keywords: flexible solar modules, lamination process, solar cells, textile for photovoltaics

Procedia PDF Downloads 358
1731 R Data Science for Technology Management

Authors: Sunghae Jun

Abstract:

Technology management (TM) is an important issue for a company improving its competitiveness. Among the many activities of TM, technology analysis (TA) is an important factor, because most decisions for the management of technology are made based on the results of TA. TA analyzes the developed results of a target technology using statistics or Delphi. TA based on Delphi depends on the experts’ domain knowledge; in comparison, TA by statistics and machine learning algorithms uses objective data such as patents or papers instead of the experts’ knowledge. Many quantitative TA methods based on statistics and machine learning have been studied, and these have been used for technology forecasting, technological innovation, and management of technology. They applied diverse computing tools and many analytical methods case by case. It is not easy to select the suitable software and statistical method for a given TA task. So, in this paper, we propose a methodology for quantitative TA using the statistical computing software called R and data science to construct a general framework of TA. From the result of a case study, we also show how our methodology is applied to the real field. This research contributes to R&D planning and technology valuation in TM areas.

Keywords: technology management, R system, R data science, statistics, machine learning

Procedia PDF Downloads 458
1730 Modified 'Perturb and Observe' with 'Incremental Conductance' Algorithm for Maximum Power Point Tracking

Authors: H. Fuad Usman, M. Rafay Khan Sial, Shahzaib Hamid

Abstract:

The trend towards renewable energy resources has been amplified due to global warming and other environment-related complications in the 21st century. Recent research has strongly emphasized the generation of electrical power through renewable resources like solar, wind, hydro, geothermal, etc. The use of the photovoltaic cell has become widespread, as it is very useful for domestic and commercial purposes all over the world. Although a single cell gives a low voltage output, connecting a number of cells in series forms a complete photovoltaic module, which is becoming a worthwhile financial investment as its use grows popular. This has also reduced the price of the photovoltaic cell, which gives customers confidence in using this source for their electrical needs. A photovoltaic cell delivers its maximum power at a single specific operating point for a given temperature and level of solar intensity received at a given surface, whereas this point changes over a large range depending upon manufacturing factors, temperature conditions, insolation intensity, instantaneous shading conditions and the aging of the photovoltaic cells. Two improved algorithms have been proposed in this article for the MPPT. The widely used algorithms are the ‘Incremental Conductance’ and ‘Perturb and Observe’ algorithms. To extract the maximum power from the source to the load, the duty cycle of the converter is effectively controlled. After assessing the previous techniques, this paper presents an improved and reformed idea of harvesting the maximum power from the photovoltaic cells. A thorough review of the previous ideas was carried out before constructing the improvement to the traditional MPPT technique. Each technique has its own importance and limitations under various weather conditions. An improved technique implementing the use of both ‘Perturb and Observe’ and ‘Incremental Conductance’ is introduced.
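
For reference, the classical ‘Perturb and Observe’ loop that both improved algorithms build on can be sketched as below; the toy module model, step size, and measurement functions are illustrative placeholders, not the controllers proposed in this paper.

```python
def perturb_and_observe(measure_v, measure_i, duty, step=0.01,
                        d_min=0.05, d_max=0.95, iterations=200):
    """Classical P&O maximum power point tracking on the converter duty cycle.

    measure_v, measure_i : callables returning PV voltage/current for a duty cycle
    duty                 : initial converter duty cycle
    """
    v_prev = measure_v(duty)
    p_prev = v_prev * measure_i(duty)
    direction = 1                       # initial perturbation direction

    for _ in range(iterations):
        duty = min(max(duty + direction * step, d_min), d_max)
        v = measure_v(duty)
        p = v * measure_i(duty)
        # If the perturbation increased power, keep going the same way;
        # otherwise reverse the perturbation direction.
        if p < p_prev:
            direction = -direction
        v_prev, p_prev = v, p
    return duty

# Toy PV/converter model (placeholder) with a single power peak
measure_v = lambda d: 20.0 * (1.0 - d)
measure_i = lambda d: max(0.0, 8.0 * d * (1.2 - d))
print("Tracked duty cycle:", round(perturb_and_observe(measure_v, measure_i, duty=0.3), 3))
```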

Keywords: duty cycle, MPPT (Maximum Power Point Tracking), perturb and observe (P&O), photovoltaic module

Procedia PDF Downloads 176
1729 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment

Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments to realize simultaneous localization and environment map construction, which has a wide range of applications in autonomous driving, virtual reality and other related fields. At present, related research achievements in VSLAM can maintain high accuracy in static environments. But in dynamic environments, due to the presence of moving objects in the scene, the movement of these objects reduces the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to effectively deal with this problem in dynamic environments. We propose a dynamic region removal scheme based on semantic segmentation neural networks and geometric constraints. Firstly, a semantic segmentation neural network is used to extract the prior active motion region, prior static region and prior passive motion region in the environment. Then, the lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow is used to divide the environment into static and dynamic regions. Thus, the dynamic object region is successfully eliminated. Finally, only the static region is used by the tracking thread. Our research is based on the ORBSLAM3 system, which is one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORBSLAM3 by 70%–98.5% under highly dynamic environments.

Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM

Procedia PDF Downloads 116
1728 Copywriting and the Creative Edge

Authors: Dandeswar Bisoyi, Preeti Yadav, Utpal Barua

Abstract:

This study addresses the particular way that verbal information can affect the processing of positive and interesting qualities which help make the brand attractive to the consumer. It also addresses the development of a communication strategy, which is a very important part of the marketing plan in which many factors have to be taken into account. Out of all the product strengths, the strategy has to outline one marked differential which will drive our brand. This is the fundamental base on which the entire big-idea-based creative strategy will rest.

Keywords: copywriting, advertisement, marketing, branding, recall

Procedia PDF Downloads 582
1727 Platform-as-a-Service Sticky Policies for Privacy Classification in the Cloud

Authors: Maha Shamseddine, Amjad Nusayr, Wassim Itani

Abstract:

In this paper, we present a Platform-as-a-Service (PaaS) model for controlling the privacy enforcement mechanisms applied to user data when stored and processed in Cloud data centers. The proposed architecture consists of establishing user-configurable ‘sticky’ policies on the Graphical User Interface (GUI) data-bound components during the application development phase to specify the details of privacy enforcement on the contents of these components. Various privacy classification classes on the data components are formally defined to give the user full control over the degree and scope of privacy enforcement, including the type of execution containers used to process the data in the Cloud. This not only enhances the privacy awareness of the developed Cloud services, but also results in major savings in performance and energy efficiency, due to the fact that the privacy mechanisms are applied solely to sensitive data units and not to all the user content. The proposed design is implemented in a real PaaS cloud computing environment on the Microsoft Azure platform.

Keywords: privacy enforcement, platform-as-a-service privacy awareness, cloud computing privacy

Procedia PDF Downloads 227
1726 Objectives of the Standardization of Technical Terminology Nowadays in Albanian

Authors: Gani Pllana

Abstract:

In the conditions of the rapid development of technics and technology in recent years, the cooperation of the scientific-technical language with the standard Albanian language is continuing with a higher intensity than before. We notice a vigorous enrichment of the vocabulary of technical terminology, due to the birth and formation of new fields and subfields of technics and technology, such as computing, mechatronics and telemetry, with a multitude of concepts many of which, on the one hand, are marked with names from the languages they come from, mainly English, but on the other hand, meet their needs through the lexical composition of the mother tongue (by common words being raised to terms) and through the activation of other layers, such as compound word terms. Thus, for example, in the field of computing, we notice the inclusion of ordinary vocabulary for reproductive reasons, like mi, dritare, flamur, adresë, skedar (Engl.: mouse, window, flag, address, file), and along with them, compound word terms serving to differentiate relevant concepts, like adresë e hiperlidhjes, adresë e uebit, adresë relative, adresë virtuale (Engl.: hyperlink address, web address, relative address, virtual address), etc.

Keywords: common words, Albanian language, technical terminology, standardization

Procedia PDF Downloads 290
1725 Magnetic versus Non-Magnetic Adatoms in Graphene Nanoribbons: Tuning of Spintronic Applications and the Quantum Spin Hall Phase

Authors: Saurabh Basu, Sudin Ganguly

Abstract:

Conductance in graphene nanoribbons (GNR) in the presence of magnetic (for example, iron) and non-magnetic (for example, gold) adatoms is explored theoretically within a Kane-Mele model for their possible spintronic applications and topologically non-trivial properties. In our work, we have considered the magnetic adatoms to induce a Rashba spin-orbit coupling (RSOC) and an exchange bias field, while the non-magnetic ones induce an RSOC and an intrinsic spin-orbit (SO) coupling. Even though RSOC is present in both, they represent very different physical situations, where the magnetic adatoms do not preserve time reversal symmetry, while the non-magnetic case does. This has important implications for the topological properties. For example, with non-magnetic adatoms and moderately strong values of SO coupling, the GNR denotes a quantum spin Hall insulator, as evident from a 2e²/h plateau in the longitudinal conductance and the presence of distinct conducting edge states with an insulating bulk. Since the edge states are protected by time reversal symmetry, the magnetic adatoms in GNR yield trivial insulators and do not possess any non-trivial topological property. However, they have greater utility than the non-magnetic adatoms from the point of view of spintronic applications. Owing to the broken spatial symmetry induced by the presence of adatoms of either type, all the x, y and z components of the spin-polarized conductance become non-zero (only the y-component survives in pristine graphene owing to a mirror symmetry present there) and hence become suitable for spintronic applications. However, the values of the spin-polarized conductances are at least two orders of magnitude larger in the case of magnetic adatoms than their non-magnetic counterpart, thereby ensuring more efficient spintronic applications. Further, the applications are tunable by altering the adatom densities.
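
For orientation, the tight-binding Hamiltonian commonly used in Kane-Mele-type studies of this kind can be written schematically as below; this is the standard textbook form with an added exchange term and may differ in detail from the authors' parametrization.

```latex
H \;=\; t\!\sum_{\langle i,j\rangle} c_i^{\dagger}c_j
\;+\; i\lambda_{SO}\!\!\sum_{\langle\langle i,j\rangle\rangle}\! \nu_{ij}\, c_i^{\dagger} s^{z} c_j
\;+\; i\lambda_{R}\!\sum_{\langle i,j\rangle} c_i^{\dagger}\,(\mathbf{s}\times\hat{\mathbf{d}}_{ij})_{z}\, c_j
\;+\; M\sum_{i} c_i^{\dagger} s^{z} c_i ,
```

where the four terms are the nearest-neighbour hopping, the intrinsic spin-orbit coupling on next-nearest neighbours, the Rashba spin-orbit coupling induced by the adatoms, and an exchange field that is present only for magnetic adatoms and is responsible for breaking time reversal symmetry.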

Keywords: magnetic and non-magnetic adatoms, quantum spin hall phase, spintronic applications, spin polarized conductance, time reversal symmetry

Procedia PDF Downloads 302
1724 Improving System Performance through User's Resource Access Patterns

Authors: K. C. Wong

Abstract:

This paper demonstrates a number of examples in the hope of shedding some light on the possibility of designing future operating systems in a more adaptation-based manner. A modern operating system, we conceive, should possess the capability of 'learning' in such a way that it can dynamically adjust its services and behavior according to the current status of the environment in which it operates. In other words, a modern operating system should play a more proactive role during the session of providing system services to users. As such, a modern operating system is expected to create a computing environment in which its users are provided with system services better matching their dynamically changing needs. The examples demonstrated in this paper show that a user's resource access patterns 'learned' and determined during a session can be utilized to improve system performance and hence to provide users with a better and more effective computing environment. The paper also discusses how to use the frequency, the continuity, and the duration of resource accesses in a session to quantitatively measure and determine a user's resource access patterns for the examples shown in the paper.
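
The three measures mentioned at the end of the abstract (frequency, continuity, and duration of resource accesses) can be quantified from a session log roughly as follows; the event format, the continuity threshold, and the example paths are assumptions made for illustration, not the paper's exact definitions.

```python
from collections import defaultdict

def access_patterns(events, continuity_gap=5.0):
    """Summarise per-resource access patterns for one session.

    events : list of (timestamp_s, resource, duration_s) tuples
    continuity_gap : max gap (seconds) between accesses still counted as one run
    """
    per_resource = defaultdict(list)
    for ts, resource, duration in sorted(events):
        per_resource[resource].append((ts, duration))

    patterns = {}
    for resource, accesses in per_resource.items():
        frequency = len(accesses)                       # how often it is accessed
        total_duration = sum(d for _, d in accesses)    # how long in total
        # Continuity: longest run of accesses whose gaps stay below the threshold
        longest_run, run = 1, 1
        for (t0, _), (t1, _) in zip(accesses, accesses[1:]):
            run = run + 1 if (t1 - t0) <= continuity_gap else 1
            longest_run = max(longest_run, run)
        patterns[resource] = {"frequency": frequency,
                              "continuity": longest_run,
                              "duration_s": total_duration}
    return patterns

session = [(0.0, "/home/u/data.db", 1.2), (2.0, "/home/u/data.db", 0.8),
           (3.5, "/home/u/data.db", 0.5), (60.0, "/usr/bin/grep", 0.1)]
print(access_patterns(session))
```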

Keywords: adaptation-based systems, operating systems, resource access patterns, system performance

Procedia PDF Downloads 145
1723 A Genetic Algorithm for the Load Balance of Parallel Computational Fluid Dynamics Computation with Multi-Block Structured Mesh

Authors: Chunye Gong, Ming Tie, Jie Liu, Weimin Bao, Xinbiao Gan, Shengguo Li, Bo Yang, Xuguang Chen, Tiaojie Xiao, Yang Sun

Abstract:

Large-scale CFD simulation relies on high-performance parallel computing, and load balance plays the key role affecting parallel efficiency. This paper focuses on the load-balancing problem of parallel CFD simulation with structured mesh. A mathematical model for this load-balancing problem is presented. The genetic algorithm, fitness computation, and two-level coding are designed. An optimal selector, a robust operator, and a local optimization operator are designed. The properties of the presented genetic algorithm are discussed in depth. The effects of the optimal selector, robust operator, and local optimization operator are demonstrated by experiments. The experimental results on different test sets, DLR-F4, and aircraft design applications show that the presented load-balancing algorithm is robust, converges quickly, and is useful in real engineering problems.
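
A stripped-down genetic algorithm for this kind of block-to-process mapping is sketched below; the block workloads, population size, and operators are generic placeholders and do not reproduce the two-level coding, optimal selector, or local optimization operator designed in the paper.

```python
import random

random.seed(1)

def imbalance(assignment, workloads, n_procs):
    """Fitness to minimise: max process load relative to the ideal average load."""
    loads = [0.0] * n_procs
    for block, proc in enumerate(assignment):
        loads[proc] += workloads[block]
    return max(loads) / (sum(workloads) / n_procs)

def ga_load_balance(workloads, n_procs, pop_size=60, generations=300, mut_rate=0.05):
    n_blocks = len(workloads)
    pop = [[random.randrange(n_procs) for _ in range(n_blocks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: imbalance(ind, workloads, n_procs))
        survivors = pop[: pop_size // 2]                 # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_blocks)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_blocks):                    # random reassignment mutation
                if random.random() < mut_rate:
                    child[i] = random.randrange(n_procs)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda ind: imbalance(ind, workloads, n_procs))
    return best, imbalance(best, workloads, n_procs)

# Placeholder workloads for 40 mesh blocks mapped onto 8 processes
workloads = [random.uniform(1.0, 10.0) for _ in range(40)]
mapping, ratio = ga_load_balance(workloads, n_procs=8)
print(f"max/avg load ratio after GA: {ratio:.3f}")
```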

Keywords: genetic algorithm, load-balancing algorithm, optimal variation, local optimization

Procedia PDF Downloads 185
1722 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver

Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera

Abstract:

In this work, a finite volume fluid flow solver is coupled with a discrete element method module for the simulation of the dynamics of free and elastic bodies in interaction with the fluid and between themselves. The open source fluid flow solver, caffa3d.MBRi, includes the capability to work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which the device is moving. The set of overlapping finer grids might be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through a two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing considering the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the ordinary differential equation system that represents the motion of the elastic and flexible bodies, it was found that the fourth-order Runge-Kutta solver is the best tool in terms of performance, but it requires a finer grid than the fluid solver to make the system converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies being captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, are well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
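
The lumped-mass representation of the net and the fourth-order Runge-Kutta integration mentioned above can be illustrated with a small standalone model; the masses, stiffness, drag coefficient, and uniform fluid velocity below are arbitrary values chosen for the sketch, not the coupled caffa3d.MBRi setup with its two-way force exchange.

```python
import numpy as np

# Chain of lumped masses connected by elastic lines, dragged by a uniform flow.
N = 10                          # number of lumped masses
MASS = 0.05                     # kg per node
K = 50.0                        # N/m, stiffness of each elastic line
REST_LEN = 0.1                  # m, unstretched line length
DRAG = 0.8                      # N*s/m, linear drag coefficient (placeholder)
G = np.array([0.0, -9.81])      # gravity
U_FLUID = np.array([0.5, 0.0])  # uniform fluid velocity (placeholder)

def derivatives(state):
    """state = [x (2N), v (2N)]; returns time derivatives for the mass-spring chain."""
    x = state[:2 * N].reshape(N, 2)
    v = state[2 * N:].reshape(N, 2)
    forces = MASS * G + DRAG * (U_FLUID - v)        # gravity + drag from the flow
    for i in range(N - 1):                          # elastic forces between neighbours
        d = x[i + 1] - x[i]
        length = np.linalg.norm(d)
        f = K * (length - REST_LEN) * d / length
        forces[i] += f
        forces[i + 1] -= f
    forces[0] = 0.0                                 # first node is anchored
    return np.concatenate([v.ravel(), (forces / MASS).ravel()])

def rk4_step(state, dt):
    k1 = derivatives(state)
    k2 = derivatives(state + 0.5 * dt * k1)
    k3 = derivatives(state + 0.5 * dt * k2)
    k4 = derivatives(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x0 = np.column_stack([np.linspace(0, REST_LEN * (N - 1), N), np.zeros(N)])
state = np.concatenate([x0.ravel(), np.zeros(2 * N)])
for _ in range(2000):
    state = rk4_step(state, dt=1e-3)
print("final position of the free end:", state[2 * N - 2: 2 * N])
```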

Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids

Procedia PDF Downloads 416
1721 The Effectiveness of a Hybrid Diffie-Hellman-RSA-Advanced Encryption Standard Model

Authors: Abdellahi Cheikh

Abstract:

With the emergence of quantum computers with very powerful capabilities, the security of the exchange of shared keys between two interlocutors poses a big problem in terms of the rapid development of technologies such as computing power and computing speed. Therefore, the Diffie-Hellman (DH) algorithm is more vulnerable than ever. No mechanism guarantees the security of the key exchange, so if an intermediary manages to intercept it, the exchange is compromised. In this regard, several studies have been conducted to improve the security of the key exchange between two interlocutors, which has led to interesting results. The modification made to our Diffie-Hellman-RSA-AES (DRA) model, which encrypts the information exchanged between two users using the three encryption algorithms DH, RSA and AES, consists of using steganographic photos to hide the contents of the p, g and ClesAES values that are sent in an unencrypted state in the DRA model to calculate each user's public key. This work includes a comparative study between the DRA model and all existing solutions, as well as the modification made to this model, with an emphasis on the aspect of reliability in terms of security. This study presents a simulation to demonstrate the effectiveness of the modification made to the DRA model. The obtained results show that our model has a security advantage over the existing solution, so we made these changes to reinforce the security of the DRA model.
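
As background for the DH component of the DRA model, the minimal exchange below derives a shared AES key from the classic Diffie-Hellman computation; the small prime, the SHA-256 key derivation, and the variable names are illustrative choices for the sketch and are not the parameters, the RSA layer, or the steganographic hiding step of the proposed model.

```python
import hashlib
import secrets

# Toy public parameters (a real deployment would use a 2048-bit+ safe prime)
P = 0xFFFFFFFFFFFFFFC5   # 64-bit prime used only for illustration
G = 5

def dh_keypair():
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)
    return private, public

def shared_aes_key(own_private, peer_public):
    """Derive a 256-bit AES key from the Diffie-Hellman shared secret."""
    secret = pow(peer_public, own_private, P)
    return hashlib.sha256(secret.to_bytes((secret.bit_length() + 7) // 8, "big")).digest()

# Alice and Bob each generate a keypair and exchange only the public values
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

key_alice = shared_aes_key(a_priv, b_pub)
key_bob = shared_aes_key(b_priv, a_pub)
assert key_alice == key_bob
print("Shared AES-256 key:", key_alice.hex())
```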

Keywords: Diffie-Hellmann, DRA, RSA, advanced encryption standard

Procedia PDF Downloads 93
1720 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrodinger equation in imaginary time using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory requirements needed to store the coupled wave functions and the long run times needed to reach the convergence of the ionization process. Total, double, and triple autoionization rates are obtained by the propagation of the time-dependent close-coupled equations in real time using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can be extended to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.

Keywords: hollow atoms, autoionization, auger rates, time-dependent close-coupling method

Procedia PDF Downloads 153
1719 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System

Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii

Abstract:

Today, dairy farm experts and farmers have well recognized the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, manage the feeding system, serve as an indicator of health abnormalities, and even help manage healthy calving times and processes. Traditionally, BCS measurements are done by animal experts or trained technicians based on visual observations focusing on the pin bones, pin, thurl and hook areas, tail head shapes, hook angles, and short and long ribs. Since the traditional technique is manual and subjective, the results can vary between scorers and are not cost-effective. Thus, this paper proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points in the cow image region are automatically extracted by using image processing techniques. To be specific, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head. These points are extracted by using block-region-based vertical and horizontal histogram methods. According to animal experts, the body condition scores depend mainly on the shape structure of these regions. Therefore, the second module investigates some algebraic and geometric properties of the extracted anatomical points. Specifically, second-order polynomial regression is applied to a subset of anatomical points to produce the regression coefficients, which are utilized as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are used to train a Markov classification process to assign a BCS to individual cows. The assigned BCS values are then revised by using a multiple regression method to produce the final BCS for the dairy cows. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the milk rotary parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vectors for each individual cow. Then the multiple regression calculator and Markov chain classification process are utilized to produce the estimated body condition score for each cow. The experimental results, tested on 100 dairy cows from a self-collected dataset and a public benchmark dataset, are very promising, with an accuracy of 98%.
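
To give a flavour of the second module, the sketch below fits a second-order polynomial to a subset of anatomical points and computes the angle at a landmark from its two neighbours; the coordinates are made-up placeholders, not points extracted from cow images, and the feature layout is only illustrative.

```python
import numpy as np

def shape_features(points):
    """Regression coefficients of a 2nd-order polynomial fitted to (x, y) landmarks."""
    x, y = points[:, 0], points[:, 1]
    return np.polyfit(x, y, deg=2)          # [a, b, c] of y = a*x^2 + b*x + c

def angle_at(p_prev, p, p_next):
    """Angle (degrees) formed at landmark p by its neighbouring landmarks."""
    v1, v2 = p_prev - p, p_next - p
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Placeholder landmarks along, e.g., the hook-to-pin-bone region
landmarks = np.array([[0.0, 5.2], [1.0, 4.1], [2.0, 3.6], [3.0, 3.8], [4.0, 4.9]])
coeffs = shape_features(landmarks)
theta = angle_at(landmarks[1], landmarks[2], landmarks[3])
feature_vector = np.concatenate([coeffs, [theta]])
print("feature vector:", np.round(feature_vector, 3))
```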

Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression

Procedia PDF Downloads 158
1718 Life Cycle Assessment of Mass Timber Structure, Construction Process as System Boundary

Authors: Mahboobeh Hemmati, Tahar Messadi, Hongmei Gu

Abstract:

Today, life cycle assessment (LCA) is a leading method for mitigating the environmental impacts emerging from the building sector. In this paper, LCA is used to quantify the greenhouse gas (GHG) emissions during the construction phase of the largest mass timber residential structure in the United States, Adohi Hall. This building is a 200,000 square foot, 708-bed complex located on the campus of the University of Arkansas. The energy used for building operation is the most dominant source of emissions in the building industry. Lately, however, efforts have been successful at increasing the efficiency of building operation in terms of emissions. As a result, attention has now shifted to embodied carbon, which is more noticeable in the building life cycle. Unfortunately, most studies have focused on the manufacturing stage, and only a few have addressed the construction process to date. Specifically, less data is available about the environmental impacts associated with the construction of mass timber. This study presents, therefore, an assessment of the environmental impact of the construction processes based on the real and newly built mass timber building mentioned above. The system boundary of this study covers modules A4 and A5 based on the building LCA standard EN 15978. Module A4 includes material and equipment transportation. Module A5 covers the construction and installation process. This research evolves through two stages: first, quantifying the materials and equipment deployed in the building, and second, determining the embodied carbon associated with running equipment for the construction materials, both transported to and installed on the site where the edifice is built. The Global Warming Potential (GWP) of the building is the primary metric considered in this research. The outcomes of this study bring to the fore a better understanding of hotspots in terms of emissions during the construction process. Moreover, the comparative analysis of the mass timber construction process with that of a theoretically similar steel building will enable an effective assessment of the environmental efficiency of mass timber.

Keywords: construction process, GWP, LCA, mass timber

Procedia PDF Downloads 167
1717 A Computational Framework for Decoding Hierarchical Interlocking Structures with SL Blocks

Authors: Yuxi Liu, Boris Belousov, Mehrzad Esmaeili Charkhab, Oliver Tessmann

Abstract:

This paper presents a computational solution for designing reconfigurable interlocking structures that are fully assembled with SL Blocks. Formed by S-shaped and L-shaped tetracubes, SL Block is a specific type of interlocking puzzle. Analogous to molecular self-assembly, the aggregation of SL blocks will build a reversible hierarchical and discrete system where a single module can be numerously replicated to compose semi-interlocking components that further align, wrap, and braid around each other to form complex high-order aggregations. These aggregations can be disassembled and reassembled, responding dynamically to design inputs and changes with a unique capacity for reconfiguration. To use these aggregations as architectural structures, we developed computational tools that automate the configuration of SL blocks based on architectural design objectives. There are three critical phases in our work. First, we revisit the hierarchy of the SL block system and devise a top-down-type design strategy. From this, we propose two key questions: 1) How to translate 3D polyominoes into SL block assembly? 2) How to decompose the desired voxelized shapes into a set of 3D polyominoes with interlocking joints? These two questions can be considered the Hamiltonian path problem and the 3D polyomino tiling problem. Then, we derive our solution to each of them based on two methods. The first method is to construct the optimal closed path from an undirected graph built from the voxelized shape and translate the node sequence of the resulting path into the assembly sequence of SL blocks. The second approach describes interlocking relationships of 3D polyominoes as a joint connection graph. Lastly, we formulate the desired shapes and leverage our methods to achieve their reconfiguration within different levels. We show that our computational strategy will facilitate the efficient design of hierarchical interlocking structures with a self-replicating geometric module.
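
The first of the two sub-problems (finding a closed path that visits every voxel of the target shape once, whose node sequence then becomes the SL block assembly sequence) can be prototyped with a simple backtracking search, as in the sketch below; the voxel list is a small made-up shape, and no attempt is made here at the interlocking-joint decomposition of the second sub-problem.

```python
def voxel_graph(voxels):
    """Undirected adjacency of face-connected unit voxels."""
    voxel_set = set(voxels)
    neighbours = {v: [] for v in voxels}
    for (x, y, z) in voxels:
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if n in voxel_set:
                neighbours[(x, y, z)].append(n)
    return neighbours

def hamiltonian_cycle(neighbours, start):
    """Backtracking search for a closed path visiting every voxel exactly once."""
    n = len(neighbours)
    path, visited = [start], {start}

    def extend():
        if len(path) == n:
            return start in neighbours[path[-1]]       # can we close the cycle?
        for nxt in neighbours[path[-1]]:
            if nxt not in visited:
                path.append(nxt)
                visited.add(nxt)
                if extend():
                    return True
                visited.remove(path.pop())
        return False

    return path if extend() else None

# A 2 x 2 x 2 voxel cube as a toy target shape
cube = [(x, y, z) for x in range(2) for y in range(2) for z in range(2)]
cycle = hamiltonian_cycle(voxel_graph(cube), start=cube[0])
print("assembly sequence:", cycle)
```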

Keywords: computational design, SL-blocks, 3D polyomino puzzle, combinatorial problem

Procedia PDF Downloads 129
1716 Appropriate Technology: Revisiting the Movement in Developing Countries for Sustainability

Authors: Jayshree Patnaik, Bhaskar Bhowmick

Abstract:

The economic growth of any nation is steered by and dependent on innovation in technology. It can arguably be said that technology has enhanced the quality of life. Technology is linked with both an economic and a social structure. But there are some parts of the world, or communities, which are yet to reap the benefits of technological innovation. Businesses and organizations are now well equipped with cutting-edge innovations that improve firm performance and provide them with a competitive edge, but rarely does this have a positive impact on any community which is weak and marginalized. In recent times, it is observed that communities are actively handling social or ecological issues with the help of indigenous technologies. Thus, 'Appropriate Technology' comes into the discussion, which is quite prevalent in the rural third world. Appropriate technology grew as a movement in the mid-1970s during the energy crisis, but it lost its stance in the following years when people started to describe it as an inferior or dead technology. Basically, there is no technology which is inherently inferior or sophisticated for a particular region. The relevance of appropriate technology lies in penetrating technology into a larger and weaker section of the community, where the 'bottom of the pyramid' can pay for technology if they find the price affordable. This is a theoretical paper which primarily revolves around how appropriate technology has faded and evolved again in both developed and developing countries. The paper will try to focus on the various concepts, the history, and the challenges faced by appropriate technology over the years. Appropriate technology follows a documented approach but lags in overall design and diffusion. Diffusion of technology into the poorer sections of the community remains an open question to the present time. Appropriate technology is multi-disciplinary in nature; therefore, this openness allows for varied working models for different problems. Appropriate technology is a friendly technology that seeks to improve the lives of people in a constrained environment by providing affordable and sustainable solutions. Appropriate technology needs to be defined in the era of modern technological advancement for sustainability.

Keywords: appropriate technology, community, developing country, sustainability

Procedia PDF Downloads 262
1715 Simulation of Bird Strike on Airplane Wings by Using SPH Methodology

Authors: Tuğçe Kiper Elibol, İbrahim Uslan, Mehmet Ali Guler, Murat Buyuk, Uğur Yolum

Abstract:

According to the FAA report, 142,603 bird strikes were reported over a period of 24 years, between 1990 and 2013. Bird strikes with aerospace structures not only threaten flight safety but also cause financial loss and put lives in danger. The statistics show that most of the bird strikes happen on the nose and the leading edge of the wings. Also, a substantial number of bird strikes are absorbed by the jet engines, causing damage to the blades and engine body. Crash-proof designs are required to overcome the possibility of catastrophic failure of the airplane. Using computational methods for bird strike analysis during the product development phase has considerable importance in terms of cost saving. Clearly, using simulation techniques to reduce the number of reference tests can dramatically affect the total cost of an aircraft, where for bird strike full-scale tests are often considered. Therefore, the development of validated numerical models is required that can replace preliminary tests and accelerate the design cycle. In this study, to verify the simulation parameters for a bird strike analysis, several different numerical options are studied for an impact case against a primitive structure. Then, a representative bird model is generated with the verified parameters and collided against the leading edge of a training aircraft wing, where each structural member of the wing was explicitly modeled. A nonlinear explicit dynamics finite element code, LS-DYNA, was used for the bird impact simulations. SPH methodology was used to model the behavior of the bird. The dynamic behavior of the wing superstructure was observed and will be used for further design optimization purposes.
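
As a reminder of the method's core idea, in SPH the bird material is discretized into particles and field quantities are obtained by kernel-weighted sums over neighbouring particles, e.g. the density estimate written below in its standard generic form (not an LS-DYNA-specific expression):

```latex
\rho_i \;=\; \sum_{j} m_j \, W\!\left(\lvert \mathbf{r}_i - \mathbf{r}_j \rvert,\, h\right),
```

where the m_j are the neighbour particle masses, W is the smoothing kernel, and h is the smoothing length.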

Keywords: bird impact, bird strike, finite element modeling, smoothed particle hydrodynamics

Procedia PDF Downloads 327
1714 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method

Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

The study uses a novel image analysis based on thermal imaging to detect temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes. The temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, which show different temperature areas, appear in the ice surface picture captured by a thermal camera. Identifying the exact boundary of these contours is valuable to facilitate ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature is increasing throughout the ice surface. Some pictures are selected to be processed at a specific time interval. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information. The red, green and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to the images to obtain high-contrast and clear images. A novel edge detection algorithm based on differences in the color of the pixels is established to determine contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue and green image elements. The color image elements are assessed considering their information content. Useful elements proceed to processing and useless elements are removed to reduce the computation time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensities determine boundaries. The results are then verified by conducting experimental tests. An experimental setup is performed using ice samples and a thermal camera. To observe the ice contours created, the samples, which are initially at -20°C, are brought into contact with a warmer surface. Pictures are captured for 20 seconds. The method is applied to five images, which are captured at time intervals of 5 seconds. The study shows the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm detects the boundaries more effectively than other edge detection methods such as Sobel and Canny. A comparison between the contour detection of this method and the temperature analysis, which gives the real boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases according to their image properties.
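
A minimal version of the described boundary detector (dropping the green channel and flagging pixels whose red or blue intensities differ from their neighbours by more than a threshold) could look like the following; the threshold and the synthetic image are assumptions for illustration, not the exact parameters or data of the study.

```python
import numpy as np

def contour_edges(image, threshold=12):
    """Mark contour boundary pixels from the red and blue channels of a thermal image.

    image     : (H, W, 3) uint8 RGB array from the thermal camera
    threshold : minimum red/blue intensity jump to a neighbour to count as an edge
    """
    red = image[:, :, 0].astype(int)
    blue = image[:, :, 2].astype(int)          # green channel carries no useful info here
    edges = np.zeros(image.shape[:2], dtype=bool)
    for channel in (red, blue):
        diff_x = np.abs(np.diff(channel, axis=1))
        diff_y = np.abs(np.diff(channel, axis=0))
        edges[:, :-1] |= diff_x > threshold    # jump to the right-hand neighbour
        edges[:-1, :] |= diff_y > threshold    # jump to the neighbour below
    return edges

# Synthetic image: a warm circular contour on a colder background (illustrative only)
h, w = 120, 120
yy, xx = np.mgrid[0:h, 0:w]
warm = ((yy - 60) ** 2 + (xx - 60) ** 2) < 30 ** 2
frame = np.zeros((h, w, 3), dtype=np.uint8)
frame[..., 0] = np.where(warm, 200, 90)        # red channel
frame[..., 2] = np.where(warm, 60, 160)        # blue channel
print("boundary pixels found:", contour_edges(frame).sum())
```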

Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image

Procedia PDF Downloads 314
1713 Thermal Hydraulic Analysis of Sub-Channels of Pressurized Water Reactors with Hexagonal Array: A Numerical Approach

Authors: Md. Asif Ullah, M. A. R. Sarkar

Abstract:

This paper illustrates 2-D and 3-D simulations of the sub-channels of a Pressurized Water Reactor (PWR) having a hexagonal array of fuel rods. At steady state, the temperature of the outer surface of the fuel rod cladding is kept at about 1200°C. The temperature of this isothermal surface is taken as the boundary condition for the simulation. Water at a temperature of 290°C is supplied as the coolant inlet to the primary water circuit, which is pressurized up to 157 bar. Turbulent flow of pressurized water is used for heat removal. In the 2-D model, temperature, velocity, pressure and Nusselt number distributions are simulated in a vertical sectional plane through the sub-channels of a hexagonal fuel rod assembly. Temperature, Nusselt number and the Y-component of convective heat flux along a line in this plane near the end of the fuel rods are plotted for different Reynolds numbers. A comparison between the X-component and Y-component of convective heat flux in this vertical plane is analyzed. The hexagonal fuel rod assembly has three types of sub-channels according to geometrical shape, whose boundary conditions are also different. In the 3-D model, temperature, velocity, pressure, Nusselt number and total heat flux magnitude distributions for all three sub-channels are studied for a suitable Reynolds number. A horizontal sectional plane is taken from each of the three sub-channels to study the temperature, velocity, pressure, Nusselt number and convective heat flux distributions in it. Greater values of temperature, Nusselt number and the Y-component of convective heat flux are found for greater Reynolds numbers. The X-component of convective heat flux is found to be non-zero near the bottom of the fuel rod and zero near the end of the fuel rod. This indicates that the convective heat transfer occurs totally along the direction of flow near the outlet. As the length-to-radius ratio of the sub-channels is very high, simulations for a short length of the sub-channels are done for graphical interface advantage. For the simulations, the Turbulent Flow (k-ε) module and the Heat Transfer in Fluids (ht) module of COMSOL Multiphysics 5.0 are used.
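
For readers who want a quick order-of-magnitude check of the sub-channel Nusselt numbers reported by the simulation, the standard Dittus-Boelter correlation for turbulent heating can be evaluated as below; this generic correlation and the example Reynolds and Prandtl numbers are illustrative assumptions, not the turbulence model used in the COMSOL study.

```python
def dittus_boelter(reynolds, prandtl, heating=True):
    """Nusselt number from the Dittus-Boelter correlation for fully developed
    turbulent flow in a channel: Nu = 0.023 * Re^0.8 * Pr^n (n = 0.4 for heating)."""
    n = 0.4 if heating else 0.3
    return 0.023 * reynolds ** 0.8 * prandtl ** n

# Example values representative of pressurized-water coolant (illustrative only)
for re in (5e4, 1e5, 2e5):
    print(f"Re = {re:.0e}:  Nu ~ {dittus_boelter(re, prandtl=0.9):.0f}")
```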

Keywords: sub-channels, Reynolds number, Nusselt number, convective heat transfer

Procedia PDF Downloads 360
1712 Collaborative Stylistic Group Project: A Drama Practical Analysis Application

Authors: Omnia F. Elkommos

Abstract:

In the course of teaching stylistics to undergraduate students of the Department of English Language and Literature, Faculty of Arts and Humanities, the linguistic tool kit of theories comes in handy and is useful for a better understanding of the different literary genres: poetry, drama, and short stories. In the present paper, a model for the teaching of stylistics is compiled and suggested. It is a collaborative group project technique for use in the undergraduate class with diverse specialisms (Literature, Linguistics and Translation tracks). Students are initially introduced to the different linguistic tools and theories suitable for each literary genre. The second step is to apply these linguistic tools to texts. Students are required to watch videos of the poems or play being performed, for example, and search the net for interpretations of the texts by other authorities. They should be using a template (prepared by the researcher) that has guided questions leading students along in their analysis. Finally, a practical analysis is written up using the practical analysis essay template (also prepared by the researcher). As per collaborative learning, all the steps include activities that are student-centered, addressing differentiation and considering the three different specialisms. In the process of selecting the proper tools, the actual application and the analysis discussion, students are given tasks that require their collaboration. They also work in small groups, and the groups collaborate in seminars and group discussions. At the end of the course/module, students also present their work collaboratively and reflect and comment on their learning experience. The module/course uses a drama play that lends itself to the task: ‘The Bond’ by Amy Lowell and Robert Frost. The project results in an interpretation of its theme, characterization and plot. The linguistic tools are drawn from pragmatics and discourse analysis, among others.

Keywords: applied linguistic theories, collaborative learning, cooperative principle, discourse analysis, drama analysis, group project, online acting performance, pragmatics, speech act theory, stylistics, technology enhanced learning

Procedia PDF Downloads 184