Search results for: split graphs
621 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising
Authors: Jianwei Ma, Diriba Gemechu
Abstract:
In seismic data processing, attenuation of random noise is a basic step in improving data quality for further use in exploration and development across the oil and gas industry. The signal-to-noise ratio largely determines the quality of seismic data, and it affects both the reliability and the accuracy of the seismic signal during interpretation. To use seismic data for further application and interpretation, the signal-to-noise ratio must be improved while random noise is attenuated effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving important features and information in the seismic signal, we introduce an anisotropic total fractional order denoising algorithm. The anisotropic total fractional order variation model, defined on the space of fractional order bounded variation, is proposed as a regularizer for seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised results are compared with F-X deconvolution and the non-local means denoising algorithm.
Keywords: anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm
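The split Bregman iteration named above alternates a quadratic solve with a closed-form shrinkage update. As a rough, hypothetical illustration only (a 1D scalar total-variation denoiser with first-order differences, not the paper's anisotropic fractional-order model; all parameter names and values are ours):

```python
import numpy as np

def tv_denoise_split_bregman_1d(f, mu=10.0, lam=1.0, n_iter=50):
    """Minimise (mu/2)*||u - f||^2 + |D u|_1 by split Bregman,
    where D is the forward-difference operator."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n forward differences
    A = mu * np.eye(n) + lam * D.T @ D       # system matrix of the u-subproblem
    d = np.zeros(n - 1)                      # auxiliary variable for D u
    b = np.zeros(n - 1)                      # Bregman variable
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        g = D @ u + b
        d = np.sign(g) * np.maximum(np.abs(g) - 1.0 / lam, 0.0)  # shrink step
        b = g - d
    return u
```

The shrink step is the exact minimiser of the d-subproblem; a fractional-order anisotropic model would replace the first-difference operator D with a fractional-order difference operator.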
Procedia PDF Downloads 207
620 Stress Concentration Trend for Combined Loading Conditions
Authors: Aderet M. Pantierer, Shmuel Pantierer, Raphael Cordina, Yougashwar Budhoo
Abstract:
Stress concentration occurs when there is an abrupt change in the geometry of a mechanical part under loading. These changes can include holes, notches, or cracks within the component, and they create larger stresses within the part. The maximum stress is difficult to determine because it occurs directly at the point of minimum area, and strain gauges have yet to be developed that can analyze stresses over such minute areas. Therefore, a stress concentration factor must be utilized. The stress concentration factor is a dimensionless parameter calculated solely from the geometry of a part. The factor is multiplied by the nominal, or average, stress of the component, which can be found analytically or experimentally. Stress concentration graphs exist for common loading conditions and geometrical configurations to aid in determining the maximum stress a part can withstand; these graphs were developed from historical experimental data. This project seeks to verify a stress concentration graph for combined loading conditions. The aforementioned graph was developed using CATIA finite element analysis software, and the results of this analysis will be validated through further testing. The 3D modeled parts will be subjected to further finite element analysis using Patran-Nastran software, and the finite element models will then be verified by testing physical specimens on a tensile testing machine. Once the data is validated, the unique stress concentration graph will be submitted for publication so it can aid engineers in future projects.
Keywords: stress concentration, finite element analysis, finite element models, combined loading
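The role of the stress concentration factor can be sketched numerically. The snippet below is a generic illustration, not the project's combined-loading graph: it uses a commonly quoted cubic fit for a finite-width plate with a central circular hole under tension (an assumption on our part), with the nominal stress taken on the net section:

```python
def kt_circular_hole(d_over_w):
    """Approximate stress concentration factor for a finite-width plate with a
    central circular hole under tension (assumed Howland-type polynomial fit,
    nominal stress on the net section)."""
    r = d_over_w
    return 3.00 - 3.14 * r + 3.667 * r ** 2 - 1.527 * r ** 3

def max_stress(force_n, width_m, hole_d_m, thickness_m):
    """sigma_max = Kt * sigma_nominal, with sigma_nominal on the net section."""
    s_nom = force_n / ((width_m - hole_d_m) * thickness_m)
    return kt_circular_hole(hole_d_m / width_m) * s_nom
```

As the hole shrinks (d/w -> 0) the fit recovers the classical infinite-plate value Kt = 3.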
Procedia PDF Downloads 443
619 Mathematical Toolbox for Editing Equations and Geometrical Diagrams and Graphs
Authors: Ayola D. N. Jayamaha, Gihan V. Dias, Surangika Ranathunga
Abstract:
Currently there are many educational tools designed for mathematics. Open-source software such as GeoGebra and Octave is bulky in its architectural structure, and MATLAB facilitates much more than what is needed here. Many computer-aided online grading and assessment tools require integrating editors into their software; however, no existing editor caters to all their needs for editing equations, geometrical diagrams, and graphs. Existing software for editing equations includes Alfred’s Equation Editor, Codecogs, DragMath, Maple, MathDox, MathJax, MathMagic, MathFlow, Math-o-mir, Microsoft Equation Editor, MiraiMath, OpenOffice, WIRIS Editor and MyScript. Some of these are commercial and some open source; some support handwriting recognition, mobile apps, MathML/LaTeX rendering, or Flash/Web-based and JavaScript display engines. Diagram editors include GeoKone.NET, Tabulae, Cinderella 1.4, MyScript, Dia, Draw2D touch, Gliffy, GeoGebra, Flowchart, Jgraph, JointJS, J painter Online diagram editor and 2D sketcher. All of these except MyScript are open source and can be used for editing mathematical diagrams. However, none fully caters to the needs of a typical computer-aided assessment tool or educational platform for mathematics. This solution provides a Web-based, lightweight, easy-to-implement-and-integrate HTML5 canvas that renders on all modern web browsers. The scope of the project is an editor that covers the equations, mathematical diagrams and drawings found on the O/L Mathematical Exam Papers in Sri Lanka. Using the tool, students can enter any equation into the system, which can be part of an online remote learning platform. Users can also create and edit geometrical drawings and graphs, and perform the geometrical constructions that require only compass and ruler, from the editing interface provided by the software.
The special feature of this software is the geometrical constructions. It allows users to create constructions such as angle bisectors, perpendicular lines, 60° angles and perpendicular bisectors. The tool correctly imitates the functioning of rulers and compasses to create the required construction. Users are therefore able to do geometrical drawings on the computer successfully, yielding a digital format of the drawing for further processing. Secondly, users can create and edit Venn diagrams, color them and label them. In addition, students can draw probability tree diagrams and compound probability outcome grids, and can label and mark regions within the grids. Thirdly, students can draw graphs (first and second order): they mark points on graph paper and the system connects the dots to draw the graph. Further, students can draw standard shapes such as circles and rectangles by selecting points on a grid or entering the parametric values.
Keywords: geometrical drawings, html5 canvas, mathematical equations, toolbox
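A compass-and-ruler construction such as the perpendicular bisector or angle bisector reduces to simple vector geometry; a minimal sketch (function names and representation are illustrative, not the tool's actual API):

```python
import numpy as np

def perpendicular_bisector(p, q):
    """Return (point, unit direction) of the perpendicular bisector of
    segment pq, mirroring the ruler-and-compass construction."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mid = (p + q) / 2.0
    d = q - p
    normal = np.array([-d[1], d[0]])          # rotate the segment by 90 degrees
    return mid, normal / np.linalg.norm(normal)

def angle_bisector_direction(vertex, a, b):
    """Unit direction of the bisector of the angle a-vertex-b
    (sum of the two unit rays from the vertex)."""
    v, a, b = (np.asarray(x, float) for x in (vertex, a, b))
    u1 = (a - v) / np.linalg.norm(a - v)
    u2 = (b - v) / np.linalg.norm(b - v)
    bis = u1 + u2
    return bis / np.linalg.norm(bis)
```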
Procedia PDF Downloads 375
618 A Comparative Analysis of Da’wah Methodology Applied by the Two Variant Factions of Jama’atu Izalatil Bid’ah Wa-Iqamatis Sunnah in Nigeria
Authors: Aminu Alhaji Bala
Abstract:
The Jama’atu Izalatil Bid’ah Wa-Iqamatis Sunnah is a Da’wah organization and reform movement launched in Jos, Nigeria, in 1978 as a purely reform movement under the leadership of the late Shaykh Ismai’la Idris. The organization started full-fledged preaching sessions at the national, state and local government levels immediately after its formation, and its contributions to Da’wah activities in Nigeria are paramount. The organization conducted its preaching under the council of preaching with the help of the executives, elders and patrons of the movement. Teaching and preaching are recognized as the major programs of the society. Its preaching activities are conducted at the ward, local, state and national levels throughout the states of Nigeria and beyond. It has also engaged in establishing mosques and schools, and it offers sermons during Friday congregations and Eid days throughout its mosques, where the sermon is translated into the vernacular language; this attracted many Muslims who do not understand Arabic to patronize its activities. The organization, however, split into two factions due to different approaches to Da’wah methodology and some seemingly selfish interests among its leaders. It is against this background that this research was conducted, using an analytical method to compare and contrast the Da’wah methodology applied by the two factions of the organization. The research discusses the formation and Da’wah activities of the organization, and compares and contrasts the Da’wah approach and methodology of the two factions. The research findings reveal that the different approaches and methods applied by these factions are among the main reasons for their split, in addition to other selfish interests among their leaders.
Keywords: activities, Da’wah, methodology, organization
Procedia PDF Downloads 220
617 Study of Storms on the Javits Center Green Roof
Authors: Alexander Cho, Harsho Sanyal, Joseph Cataldo
Abstract:
A quantitative analysis of the different variables on both the South and North green roofs of the Jacob K. Javits Convention Center was undertaken to find mathematical relationships between net radiation and evapotranspiration (ET), average outside temperature, and lysimeter weight. Groups of datasets were analyzed, and the relationships were plotted on linear and semi-log graphs to find consistent relationships. Antecedent conditions for each rainstorm were also recorded and plotted against the volumetric water difference within the lysimeter. The first relation was the inverse parabolic relationship between the lysimeter weight and the net radiation and ET: peaks and valleys of the lysimeter weight corresponded to valleys and peaks in the net radiation and ET, respectively, with the 8/22/15 and 1/22/16 datasets showing this trend. The U-shaped and inverse-U-shaped plots of the two variables coincided, indicating an inverse relationship between them. Cross-variable relationships were examined through graphs with lysimeter weight as the dependent variable on the y-axis; 10 out of 16 of the lysimeter weight vs. outside temperature plots had R² values above 0.9. Antecedent conditions were also recorded for rainstorms, categorized by the amount of precipitation accumulating during the storm. Plotted against the change in volumetric water weight within the lysimeter, a logarithmic regression was found with large R² values. The datasets were compared using the Mann-Whitney U-test at a 5% significance level to determine whether they were statistically different; all compared datasets yielded U statistics consistent with the datasets being statistically different.
Keywords: green roof, green infrastructure, Javits Center, evapotranspiration, net radiation, lysimeter
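The Mann-Whitney U statistic used above can be computed directly from pairwise comparisons; a minimal sketch (our own implementation, with the usual normal approximation for larger samples and no tie correction):

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic: number of pairs (xi, yj) with xi > yj, ties counted 0.5."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[None, :]
    return float(np.sum(x > y) + 0.5 * np.sum(x == y))

def u_z_score(x, y):
    """Normal approximation: z = (U - mu_U) / sigma_U."""
    n1, n2 = len(x), len(y)
    u = mann_whitney_u(x, y)
    mu = n1 * n2 / 2.0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5
    return (u - mu) / sigma
```

At the 5% level the null hypothesis of identical distributions is rejected when |z| exceeds about 1.96.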
Procedia PDF Downloads 113
616 Numerical Analysis of Fluid Mixing in Three Split and Recombine Micromixers at Different Inlets Volume Ratio
Authors: Vladimir Viktorov, M. Readul Mahmud, Carmen Visconte
Abstract:
Numerical simulations were carried out to study the mixing of miscible liquids at different inlet volume ratios (1 to 3) within two existing mixers, namely the Chain and Tear-drop mixers, and one new “C-H” mixer. The new passive C-H micromixer is developed on split-and-recombine principles, combining the operating concepts of the known Chain mixer and H mixer. The mixing performances of the three micromixers were first predicted by a preliminary numerical analysis of the flow patterns inside the channels, in terms of the segregation or distribution of path lines. Afterward, the efficiency and the pressure drop were investigated numerically, taking species transport into account. All numerical calculations were computed over a wide range of Reynolds numbers, from 1 to 100. Among the three micromixers, the Tear-drop mixer provides fairly good efficiency except in the middle range of Re numbers, but has a high pressure drop. In addition, the inlet flow ratio has a significant influence on its efficiency, especially over the Re number range of 10 to 50; the maximum increase of efficiency is almost 10% when the inlet flow ratio is increased by 1. The Chain mixer presents relatively low mixing efficiency at low and middle Re numbers (5≤Re≤50) but has a reasonable pressure drop, and shows almost no dependence on the inlet flow ratio. The C-H mixer, by contrast, shows excellent mixing efficiency (more than 93%) over the whole range of Re numbers and causes the lowest pressure drop, with efficiency only slightly dependent on the inlet flow ratio. In addition, the C-H mixer shows about three and two times lower pressure drop than the Tear-drop and Chain mixers, respectively.
Keywords: CFD, micromixing, passive micromixer, SAR
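Mixing efficiency in such studies is commonly quantified from the spread of the mass-fraction field over an outlet cross-section; a minimal sketch assuming the standard index 1 - sigma/sigma_max (the paper's exact definition may differ; for a 1:3 inlet ratio the mean fraction would be 0.25 rather than 0.5):

```python
import numpy as np

def mixing_efficiency(c, c_mean=0.5):
    """Degree of mixing 1 - sigma/sigma_max from sampled mass-fraction values
    on a cross-section; sigma_max = sqrt(c_mean*(1 - c_mean)) is the fully
    segregated limit. Returns 1 for perfect mixing, 0 for none."""
    c = np.asarray(c, float)
    sigma = np.sqrt(np.mean((c - c_mean) ** 2))
    sigma_max = np.sqrt(c_mean * (1.0 - c_mean))
    return 1.0 - sigma / sigma_max
```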
Procedia PDF Downloads 481
615 An Academic Theory on a Sustainable Evaluation of Achatina Fulica within eThekwini, KwaZulu-Natal
Authors: Sibusiso Trevor Tshabalala, Samuel Lubbe, Vince Vuledzani Ndou
Abstract:
Dependency on chemicals has had many disadvantages for pest management control strategies; genetic rodenticide resistance and secondary exposure risks are currently being experienced. The emphasis of integrated pest management suggests that, to control future pests, early intervention and economic threshold development are key starting points in crop production. The significance of this research project is to help establish the relationship between Giant African Land Snail (Achatina fulica) solution extract, its shell chemical properties, and farmers’ perceptions of biological control in eThekwini Municipality Agri-hubs. A mixed design approach to collecting data will be explored, using a trial layout in the field and interviews. The experimental area will use a split-plot design, replicated and arranged in a randomised complete block design. The split plots will have 0, 10, 20 and 30 liters of water to one liter of snail solution extract. Plots were 50 m² each, with a spacing of 12 m between plots and a plant spacing of 0.5 m (inter-row) and 0.5 m (intra-row). Trials will be irrigated using sprinkler irrigation, with the objective two treatment being added to the mix every 4-5 days. The expected outcome is improved soil fertility and proliferation of the micro-organism population.
Keywords: giant african land snail, integrated pest management, photosynthesis, genetic rodenticide resistance, control future pests, shell chemical properties
Procedia PDF Downloads 103
614 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in its nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification. It uses information from the nodes as well as the structure of the code graph in order to select the features most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy, and it improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
Procedia PDF Downloads 157
613 Travel Delay and Modal Split Analysis: A Case Study
Authors: H. S. Sathish, H. S. Jagadeesh, Skanda Kumar
Abstract:
A journey time and delay study is used to evaluate the quality of service and of traffic movement along a route, and to determine the location, types and extent of traffic delays. Components of delay include boarding and alighting, ticket issuing, other causes, and the distance between stops. This study investigates the total journey time required to travel along the stretch and the influence of these delays. The route runs from Kempegowda Bus Station to Yelahanka Satellite Station in Bangalore City, and the length of the stretch is 16.5 km. A modal split analysis has been done for this stretch, which has an elevated highway connecting to Bangalore International Airport and an extension of the metro transit line. Regression analysis of total journey time shows that it is moderately affected by boarding and alighting delay, while delay due to ticket issuing affects journey time to a greater extent. Several of the delay factors significantly affect journey time, as evident from an F-test at the 10 percent significance level. Along this stretch, work trips are the most prevalent, as indicated by the O-D study. The modal shift analysis indicates that about 70 percent of commuters are ready to shift from the current system to a Metro Rail System, which would carry the maximum number of trips compared to private modes. Hence Metro is a highly viable choice of mode for the Bangalore Metropolitan City.
Keywords: delay, journey time, modal choice, regression analysis
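The regression of journey time on delay components can be sketched with ordinary least squares; the data and coefficients below are synthetic placeholders, not the study's observations:

```python
import numpy as np

def fit_journey_time(delays, journey_times):
    """OLS fit of journey_time = b0 + b1*delay1 + b2*delay2 + ...
    `delays` is an (n_trips, n_factors) array of per-trip delay components
    (e.g. boarding/alighting, ticket issuing). Returns (coefficients, R^2)."""
    X = np.column_stack([np.ones(len(journey_times)), delays])
    beta, *_ = np.linalg.lstsq(X, journey_times, rcond=None)
    resid = journey_times - X @ beta
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((journey_times - np.mean(journey_times)) ** 2)
    return beta, 1.0 - ss_res / ss_tot
```

The fitted coefficients play the role of the per-component delay effects discussed above, and R² summarises how much journey-time variation the delays explain.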
Procedia PDF Downloads 495
612 Estimation of the Mean of the Selected Population
Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal
Abstract:
Two normal populations with different means and a common known variance are considered. The population with the smaller sample mean is selected, and various estimators are constructed for the mean of the selected normal population. Finally, they are compared with respect to bias and MSE risk by Monte-Carlo simulation, and their performances are analysed with the help of graphs.
Keywords: estimation after selection, Brewster-Zidek technique, estimators, selected populations
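The Monte-Carlo comparison can be sketched for the naive estimator (the selected sample mean itself); the negative bias it reveals is what motivates improved estimators such as Brewster-Zidek-type ones (parameter values below are illustrative):

```python
import numpy as np

def selection_bias_mc(mu1, mu2, sigma, n, reps=20000, seed=0):
    """Bias and MSE, estimated by Monte Carlo, of the naive estimator
    (the selected sample mean) for the mean of the population whose
    sample mean is smaller."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mu1, sigma, (reps, n)).mean(axis=1)
    x2 = rng.normal(mu2, sigma, (reps, n)).mean(axis=1)
    pick1 = x1 < x2                       # selection rule: smaller sample mean
    est = np.where(pick1, x1, x2)         # naive estimate after selection
    true = np.where(pick1, mu1, mu2)      # mean of the selected population
    err = est - true
    return err.mean(), (err ** 2).mean()
```

With equal means the naive estimator is biased downward by roughly sigma/(sqrt(n)*sqrt(pi)), which is exactly the effect the constructed estimators try to correct.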
Procedia PDF Downloads 511
611 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar
Authors: Shaolin Allen Liao, Hual-Te Chien
Abstract:
Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) recently developed by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch, in contrast to the time-consuming discrete stepping reference branch of FTIR. This change allows real-time tracking of a moving object and capture of its Doppler signature. The working principle of the THz-MIDR is otherwise similar to the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one beam is continuously modulated by a vibrating mirror or phase modulator, and the other is reflected by a reflection mirror; finally, both the modulated reference beam and the reflected beam are combined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract the THz phase from this single intensity measurement, we have derived rigorous mathematical formulas for three Frequency Banded (FB) signals: 1) the DC Low-Frequency Banded (LFB) signal; 2) the Fundamental Frequency Banded (FFB) signal; and 3) the Harmonic Frequency Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of two or all three of these FB signals, together with efficient methods such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulation has also been performed in Matlab on simulated THz-MIDR interferometric signals at various signal-to-noise ratios (SNR) to verify the algorithms.
Keywords: algorithm, modulation, THz phase, THz interferometry doppler radar
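A Levenberg-Marquardt fit of a modulated intensity signal can be sketched as follows; the model, modulation frequency and starting values are simplified placeholders of our own, not the paper's LFB/FFB/HFB formulas:

```python
import numpy as np

F_MOD = 5.0  # hypothetical reference-modulation frequency, Hz (assumption)

def model(p, t):
    """Toy two-beam interference intensity I0*(1 + V*cos(2*pi*F_MOD*t + phi));
    p = [I0, V, phi], with the modulation frequency assumed known."""
    i0, v, phi = p
    return i0 * (1.0 + v * np.cos(2.0 * np.pi * F_MOD * t + phi))

def levenberg_marquardt(p0, t, y, n_iter=200, lam=1e-3, h=1e-7):
    """Basic LM loop with a forward-difference Jacobian."""
    p = np.asarray(p0, float).copy()
    for _ in range(n_iter):
        f0 = model(p, t)
        r = f0 - y
        J = np.empty((t.size, p.size))
        for j in range(p.size):
            dp = p.copy()
            dp[j] += h
            J[:, j] = (model(dp, t) - f0) / h
        g = J.T @ r
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.eye(p.size), -g)
        if np.sum((model(p + step, t) - y) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept: relax damping
        else:
            lam *= 5.0                     # reject: damp harder
    return p
```

The damping parameter interpolates between gradient descent (large lam) and Gauss-Newton (small lam), which is what makes the phase fit robust to poor starting values.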
Procedia PDF Downloads 344
610 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g. PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of ‘data retrieval’, which we observe in Large Language Models. Notable attempts such as TransE, TransR and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications, and they are also limited in scope to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mappings between concepts in KG space and GE space that preserve cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the model's performance on the WN18 benchmark. The model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
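For context, the TransE scoring rule mentioned above is simple to state: a triple (head, relation, tail) is plausible when the head embedding translated by the relation vector lands near the tail. A minimal sketch with toy vectors (not trained embeddings):

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score ||h + r - t||; lower is more plausible."""
    d = np.asarray(h, float) + np.asarray(r, float) - np.asarray(t, float)
    return float(np.linalg.norm(d, ord=norm))

def rank_tails(h, r, candidates):
    """Link prediction: rank candidate tail embeddings, best first."""
    scores = [transe_score(h, r, c) for c in candidates]
    return sorted(range(len(candidates)), key=lambda i: scores[i])
```

Link-prediction benchmarks such as WN18 report how highly the true tail ranks among all candidates, which is where the quoted ~57% peak performance figures come from.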
Procedia PDF Downloads 68
609 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Authors: Michela Quadrini
Abstract:
Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus given by its associated surface. To each chord diagram and linear chord diagram it is possible to associate an intersection graph, whose vertices correspond to the chords of the diagram and whose edges represent chord intersections. The intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set is composed of three operators: crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. It could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
Keywords: chord diagrams, linear chord diagram, equivalence class, topological language
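The intersection graph described above can be computed directly from chord endpoints on the backbone: two chords cross exactly when their endpoints interleave. A minimal sketch (our own encoding of chords as endpoint pairs):

```python
def chords_cross(c1, c2):
    """Two chords on a linear backbone cross iff their endpoints interleave:
    a1 < a2 < b1 < b2 after ordering."""
    (a1, b1), (a2, b2) = sorted(c1), sorted(c2)
    if a1 > a2:                               # order chords by left endpoint
        (a1, b1), (a2, b2) = (a2, b2), (a1, b1)
    return a1 < a2 < b1 < b2

def intersection_graph(chords):
    """One vertex per chord; edges between crossing chords (adjacency sets)."""
    n = len(chords)
    return {i: {j for j in range(n)
                if j != i and chords_cross(chords[i], chords[j])}
            for i in range(n)}
```

Nested and concatenated chords produce no edge, so the graph records exactly the crossing relation on which the proposed equivalence class depends.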
Procedia PDF Downloads 201
608 The Influence of Emotion on Numerical Estimation: A Drone Operators’ Context
Authors: Ludovic Fabre, Paola Melani, Patrick Lemaire
Abstract:
The goal of this study was to test whether and how emotions influence drone operators’ estimation skills. The empirical study was run in the context of numerical estimation: participants saw a two-digit number together with a collection of cars and had to indicate whether the collection was larger or smaller than the number. The two-digit numbers ranged from 12 to 27, and collections included 3-36 cars. The presentation of the collections was dynamic (each car moved 30 deg. per second to the right). Half the collections were smaller collections (fewer than 20 cars) and the other half larger collections (more than 20 cars). Splits between the number of cars in a collection and the two-digit number were either small (± 1 or 2 units; e.g., the collection included 17 cars and the two-digit number was 19) or large (± 8 or 9 units; e.g., 17 cars and '9'). Half the collections included more items (and half fewer items) than indicated by the two-digit number. Before and after each trial, participants saw an image inducing negative emotions (e.g., mutilations) or neutral emotions (e.g., a candle) selected from the International Affective Picture System (IAPS). At the end of each trial, participants had to say whether the second picture was the same as or different from the first. Results showed different effects of emotions on RTs and percent errors: participants’ performance was modulated by emotions. They were slower on negative trials than on neutral trials, especially on the most difficult items, and they made more errors on small-split than on large-split problems. Moreover, participants strongly overestimated the number of cars when in a negative emotional state. These findings suggest that emotions influence numerical estimation and that the effects of emotion on estimation interact with stimulus characteristics. They have important implications for understanding the role of emotions in estimation skills and, more generally, how emotions influence cognition.
Keywords: drone operators, emotion, numerical estimation, arithmetic
Procedia PDF Downloads 115
607 Effects of Duct Geometry, Thickness and Types of Liners on Transmission Loss for Absorptive Silencers
Abstract:
Sound attenuation in absorptive silencers is analyzed in this paper. The structure of such devices is as follows: when the rigid duct of an expansion chamber is lined with a packed absorptive material under a perforated membrane, incident sound waves are dissipated by the absorptive liners. This kind of silencer is usually applicable in the medium to high frequency range. Several conditions are studied: different absorptive materials, a variety of thicknesses, and different shapes of the expansion chamber. Sound attenuation graphs are also compared between the empty expansion chamber and the lined silencer duct. Plane waves are assumed in the inlet and outlet regions of the silencer. The results, achieved by applying the finite element method (FEM), show the dependence of the sound attenuation spectrum on the flow resistivity and thickness of the absorptive materials and on the geometry of the cross section (configuration of the silencer): as flow resistivity and absorptive material thickness increase, sound attenuation improves. This paper presents transmission loss (TL) diagrams for absorptive silencers with five different cross sections (rectangle, circle, ellipse, square, and rounded rectangle as the main geometry), as well as TL graphs for silencers using different absorptive liner materials (glass wool, wood fiber, and a kind of spongy material) with three different glass wool liner thicknesses of 5 mm, 15 mm, and 30 mm. First the effect of the absorptive material substance, with its specific flow resistivity and density, on the TL spectrum is investigated, then the effect of the glass wool thickness, and finally the effect of the cross-sectional shape of the silencer.
Keywords: transmission loss, absorptive material, flow resistivity, thickness, frequency
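For orientation, the transmission loss of the unlined expansion chamber (the "empty chamber" baseline compared above) has a classical closed form under the plane-wave assumption; a sketch (the lined, absorptive cases require FEM and have no such simple formula):

```python
import numpy as np

def tl_expansion_chamber(m, k, length):
    """Classical plane-wave transmission loss of a simple (unlined)
    expansion chamber:
        TL = 10*log10(1 + 0.25*(m - 1/m)^2 * sin^2(k*L))  [dB]
    m: area ratio S_chamber/S_duct, k: wavenumber 2*pi*f/c, length: L in m."""
    term = 0.25 * (m - 1.0 / m) ** 2 * np.sin(k * length) ** 2
    return 10.0 * np.log10(1.0 + term)
```

The sin²(kL) factor explains the periodic peaks and troughs of the empty-chamber TL curve against which the absorptive liners are compared.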
Procedia PDF Downloads 249
606 Effect of Strains and Temperature on the Twinning Behavior of High Purity Titanium Compressed by Split Hopkinson Pressure Bar
Authors: Ping Zhou, Dawu Xiao, Chunli Jiang, Ge Sang
Abstract:
Deformation twinning plays an important role in the mechanical properties of Ti, which has high specific strength and excellent corrosion resistance. To investigate the twinning behavior of Ti under high-strain-rate compression, the split Hopkinson pressure bar (SHPB) was adopted to deform samples to different strains at room temperature; twinning behavior at temperatures of 373 K, 573 K and 873 K was also investigated. The cylindrical samples, of purity 99.995%, were annealed at 1073 K for 1 hour in vacuum before compression. All deformation twins were identified by electron backscatter diffraction (EBSD) techniques. The stress-strain curves showed three-stage work hardening for samples deformed at 573 K and 873 K, while only two stages were observed for those deformed at room temperature. For samples compressed at room temperature, the predominant twin types are {10-12}<10-11> (E1), {11-21}<11-26> (E2) and {11-21}<11-23> (C1). Secondary and tertiary twinning was observed inside some E1, E2 and C1 twins, and most of the E2 twin boundaries acted as nucleation sites for E1. The density of twins increases remarkably with strain. For samples compressed at higher temperatures, migration of the E1, E2 and C1 twin boundaries was observed; all twin lamellae shorten with temperature and had nearly disappeared at 873 K, except for some remaining E1 twins. Polygonization of grain boundaries was observed above 573 K, and with increasing temperature the microstructure tended toward a texture with c-axes parallel to the compression direction. Factors affecting dynamic recovery and recrystallization are discussed.
Keywords: deformation twins, EBSD, mechanical behavior, high strain rate, titanium
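The data reduction behind SHPB tests like this one is classical (Kolsky's one-wave analysis): specimen stress follows the transmitted bar strain, strain rate follows the reflected one. A minimal sketch with illustrative symbols (e_bar: bar modulus, a_bar/a_spec: bar/specimen cross-section areas, c0: bar wave speed, l_spec: specimen length; values in the test are placeholders):

```python
import numpy as np

def shpb_stress_strain(eps_r, eps_t, dt, e_bar, a_bar, a_spec, c0, l_spec):
    """One-wave SHPB (Kolsky) analysis.
    eps_r, eps_t: reflected/transmitted bar-strain histories (arrays).
    Returns specimen stress, strain rate, and strain histories."""
    stress = e_bar * (a_bar / a_spec) * eps_t        # sigma_s(t)
    strain_rate = -2.0 * c0 / l_spec * eps_r         # d(eps_s)/dt
    strain = np.cumsum(strain_rate) * dt             # eps_s(t), time integration
    return stress, strain_rate, strain
```

These histories are what produce the two- and three-stage work-hardening curves discussed above.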
Procedia PDF Downloads 260
605 The Influence of Forest Management Histories on Dead and Habitat Trees in the Old Growth Forest in Northern Iran
Authors: Kiomars Sefidi
Abstract:
Dead and habitat trees, such as fallen logs, snags, stumps, cracks, loose bark, etc., are regarded as an important ecological component of forests on which many forest-dwelling species depend, yet their relation to management history in the Caspian forest has gone unreported. The aim of this research was to compare the amounts of dead and habitat trees in forests with historically different intensities of management: forests with long-term management (PS) and short-term management (NS), compared with semi-virgin forest (GS). A total of 405 individual dead and habitat trees were recorded and measured at 109 sampling locations. ANOVA revealed that dead tree volume differs significantly among form and decay classes within sites, and that dead volume in the semi-virgin forest is significantly higher than in the managed sites. Comparing the three sites showed that dead tree volume is related to management history and differs significantly among the study sites. The numbers of habitat trees, including trees with cavities, cracks and loose bark, and fork-split trees, also vary significantly among sites, reaching their highest in the virgin site and their lowest in the site with long-term management. It was concluded that forest management reduces the amount of dead and habitat trees. Management history affects the forest's ability to generate dead trees, especially of large size; thus managing this forest according to ecologically sustainable principles requires a commitment to maintaining stand structures that allow continued generation of dead trees in a full range of sizes.
Keywords: forest biodiversity, cracks trees, fork split trees, sustainable management, Fagus orientalis, Iran
Procedia PDF Downloads 553
604 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology, whose main properties are a power-law degree distribution, a low clustering coefficient and a small average distance. Modeling the web as a graph allows locating information quickly and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs that exhibits all the aforesaid characteristics. This work consists in studying the web in order to understand its structure; this will enable us to model it more easily and to propose a possible algorithm for its exploration. Keywords: clustering coefficient, preferential attachment, small world, web community
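The keywords above mention preferential attachment, the standard growth mechanism that produces the power-law degree distribution described in the abstract. As an illustration only (the abstract does not give the exact rules of its model), a minimal Barabási-Albert-style generator can be sketched as follows; the parameters n, m and the seed are arbitrary choices for the sketch:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a graph of n nodes where each new node attaches to m existing
    nodes chosen with probability proportional to their current degree."""
    random.seed(seed)
    edges = []
    targets = list(range(m))   # the first new node links to all seed nodes
    repeated = []              # each node appears here once per unit of degree
    for v in range(m, n):
        edges.extend((v, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([v] * m)
        # degree-weighted sampling: picking uniformly from `repeated`
        # selects a node with probability proportional to its degree
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(repeated))
        targets = list(chosen)
    return edges
```

With n = 200 and m = 2 this yields (n - m) * m = 396 edges, and a few early nodes accumulate far more than the average degree of 2m, the hubs characteristic of a power-law network.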
Procedia PDF Downloads 272
603 Effects of Irrigation Scheduling and Soil Management on Maize (Zea mays L.) Yield in Guinea Savannah Zone of Nigeria
Authors: I. Alhassan, A. M. Saddiq, A. G. Gashua, K. K. Gwio-Kura
Abstract:
The main objective of any irrigation program is the development of an efficient water management system to sustain crop growth and development and to avoid physiological water stress in the growing plants. A field experiment to evaluate the effects of some soil moisture conservation practices on the yield and water use efficiency (WUE) of maize was carried out in three locations (Mubi and Yola in the northern Guinea Savannah and Ganye in the southern Guinea Savannah of Adamawa State, Nigeria) during the dry seasons of 2013 and 2014. The experiment consisted of three irrigation levels (7-, 10- and 12-day irrigation intervals), two levels of mulch (mulched and un-mulched) and two tillage practices (no tillage and minimum tillage), arranged in a randomized complete block design with a split-split plot arrangement and replicated three times. The Blaney-Criddle method was used to estimate crop evapotranspiration. The results indicated that the seven-day irrigation interval and the mulched treatment had a significant effect (P < 0.05) on grain yield and water use efficiency in all locations. The main effect of tillage on grain yield and WUE was not significant (P > 0.05). The interaction effects of irrigation and mulch on grain yield and WUE were significant (P < 0.05) at Mubi and Yola. Generally, higher grain yield and WUE were recorded under mulching with seven-day irrigation intervals, whereas lower values were recorded without mulch at 12-day irrigation intervals. Tillage exerted little influence on yield and WUE. Results from Ganye were generally higher than those recorded at Mubi and Yola; they also showed that an irrigation interval of 10 days with mulching could be adopted for the Ganye area, while a seven-day interval is more appropriate for Mubi and Yola. Keywords: irrigation, maize, mulching, tillage, savanna
Procedia PDF Downloads 214
602 Identifying Coloring in Graphs with Twins
Authors: Souad Slimani, Sylvain Gravier, Simon Schmidt
Abstract:
Recently, several vertex-identifying notions were introduced (identifying coloring, lid-coloring, ...); these notions were inspired by identifying codes. All of them, as well as the original identifying codes, are based on separating two vertices according to some conditions on their closed neighborhoods. Therefore, twins cannot be identified, so most known results focus on twin-free graphs. Here, we show how twins can modify the optimal value of vertex-identifying parameters for identifying coloring and locally identifying coloring. Keywords: identifying coloring, locally identifying coloring, twins, separating
Procedia PDF Downloads 147
601 Comparative Review of Models for Forecasting Permanent Deformation in Unbound Granular Materials
Authors: Shamsulhaq Amin
Abstract:
Unbound granular materials (UGMs) are pivotal in ensuring long-term quality, especially in the layers beneath the surface of flexible pavements and other constructions. This study seeks to better understand the behavior of UGMs by examining popular models for predicting permanent deformation under various stress levels and load cycles. These models focus on variables such as the number of load cycles, stress level and material-specific features, and were evaluated on the basis of their ability to accurately predict outcomes. The study showed that these factors play a crucial role in how well the models work; the research therefore highlights the need to consider a wide range of stress situations in order to predict more accurately how much UGMs deform. The research examined important factors, such as how permanent deformation relates to the number of load applications, how quickly it accumulates, and the shakedown effect, in two different UGMs: granite and limestone. A detailed study was done over 100,000 load cycles, which provided deep insights into how these materials behave. The level of applied stress, the number of load cycles, the density of the material and the moisture content were identified as the main factors affecting permanent deformation. Fully understanding these elements is vital for designing pavements that last long and withstand wear. A series of laboratory tests was performed to evaluate the mechanical properties of the materials and acquire model parameters, including gradation tests, CBR tests and repeated load triaxial tests. The repeated load triaxial tests, which involved applying various stress levels, were crucial for studying the components that significantly affect deformation and for estimating model parameters. In addition, certain model parameters were established by regression analysis, and optimization was conducted to improve the outcomes.
Afterward, the acquired material parameters were used to construct graphs for each model, which were then compared with the outcomes of the repeated load triaxial testing. Additionally, the models were evaluated to determine whether they captured the two inherent deformation phases of materials subjected to repetitive loading: the initial post-compaction phase and the second phase of volumetric changes. In this study, log-log graphs were key to making the complex data easier to understand; this made the analysis clearer and the findings easier to interpret, adding both precision and depth to the research. This research provides important insight into selecting the right models for predicting how these materials will behave under expected stress and load conditions. Moreover, it offers crucial information regarding the effects of load cycles, permanent deformation and the shakedown effect on granite and limestone UGMs. Keywords: permanent deformation, unbound granular materials, load cycles, stress level
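The abstract does not name the specific models it compares, but a common baseline in the UGM literature is a power-law relation between accumulated permanent strain and the number of load cycles, eps_p(N) = A * N**B. As a hedged sketch under that assumption, the parameters A and B can be estimated from repeated load triaxial data by linear regression in log-log space (which also motivates the log-log graphs mentioned above); the data below are synthetic, not measurements from the study:

```python
import math

def fit_power_model(cycles, strains):
    """Least-squares fit of eps_p = A * N**B, done as a straight-line fit
    ln(eps_p) = ln(A) + B * ln(N) in log-log space."""
    xs = [math.log(n) for n in cycles]
    ys = [math.log(e) for e in strains]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic triaxial readings that follow eps_p = 0.5 * N**0.2 exactly
cycles = [10, 100, 1000, 10000, 100000]
strains = [0.5 * n ** 0.2 for n in cycles]
A, B = fit_power_model(cycles, strains)
```

On real triaxial data the fit will not be exact, and the residuals at high N are one way to see whether a specimen has reached shakedown or keeps accumulating strain.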
Procedia PDF Downloads 38
600 Evaluation of Properties of Alkali Activated Slag Concrete Blended with Polypropylene Shredding and Admixture
Authors: Jagannath Prasad Tegar, Zeeshan Ahmad
Abstract:
Ordinary Portland Cement (OPC) is a major constituent of concrete and has been used extensively over the last half century. The production of cement not only impacts the environment but also depletes natural resources. During the past three decades, scholars have carried out studies to explore supplementary cementitious materials such as ground granulated blast furnace slag (GGBFS), silica fume (SF), metakaolin and fly ash (FA). This has contributed to improved cementitious materials being used in construction, but not in the way they are supposed to be. Alkali-activated slag concrete is another innovation, whose constituents include cementitious materials like GGBFS, FA, SF or metakaolin, together with alkaline activators such as sodium silicate (Na₂SiO₃) and sodium hydroxide (NaOH). This research study evaluates the effect of polypropylene shredding and an accelerating admixture on the mechanical properties of alkali-activated slag concrete. The mechanical properties include compressive strength, splitting tensile strength and workability. The outcomes are matched with the hypothesis: it is found that 27% of the cement can be replaced with GGBFS, and for split tensile strength a 20% replacement is achieved. Overall, it is found that 20% of the cement can be replaced with GGBFS. The laboratory tests conducted include the compressive strength test, split tensile strength test and slump cone test. In terms of cost, the replacement is substantially beneficial. Keywords: ordinary Portland cement, activated slag concrete, ground granule blast furnace slag, fly ash, silica fumes
Procedia PDF Downloads 176
599 Modeling and Simulation of Practical Metamaterial Structures
Authors: Ridha Salhi, Mondher Labidi, Fethi Choubani
Abstract:
Metamaterials have attracted much attention in recent years because of their exquisite electromagnetic properties. In this paper, we present the modeling of three metamaterial structures by equivalent circuit models. We begin by modeling the SRR (Split Ring Resonator), then we model the HIS (High Impedance Surface), and finally we present the model of the CPW (Coplanar Waveguide). In order to validate the models, we compare the results obtained from the equivalent circuit models with numerical simulations. Keywords: metamaterials, SRR, HIS, CPW, IDC
Procedia PDF Downloads 429
598 Accessing Properties of Alkali Activated Ground Granulated Blast Furnace Slag Based Self Compacting Geopolymer Concrete Incorporating Nano Silica
Authors: Guneet Saini, Uthej Vattipalli
Abstract:
In a world with increased demand for sustainable construction, the waste product of one industry can be a boon to another in reducing the carbon footprint. The use of industrial wastes such as fly ash and ground granulated blast furnace slag has become central to curbing the use of cement, one of the major contributors of greenhouse gases. In this paper, empirical studies have been carried out to develop alkali-activated self-compacting geopolymer concrete (GPC) using ground granulated blast furnace slag (GGBS) incorporating 2% nano-silica by weight, through evaluation of its fresh and hardened properties. An experimental investigation of six mix designs, with alkaline solution molarities of 10 M, 12 M and 16 M and binder contents of 450 kg/m³ and 500 kg/m³, was carried out and juxtaposed with a GPC mix design composed of a 16 M alkaline solution and 500 kg/m³ binder content without nano-silica. The sodium silicate to sodium hydroxide ratio (SS/SH), alkaline activator liquid to binder ratio (AAL/B) and water to binder ratio (W/B), which significantly affect the performance and mechanical properties of GPC, were fixed at 2.5, 0.45 and 0.4, respectively. To catalyze early-stage geopolymerisation, oven curing was carried out at 60 °C. This paper also elucidates the test results for fresh self-compacting concrete (SCC), obtained as per the EFNARC guidelines. The mechanical tests conducted were: compressive strength after 7, 28, 56 and 90 days; flexural strength; split tensile strength after 28, 56 and 90 days; X-ray diffraction to analyze the mechanical performance; and a sorptivity test for permeability.
The study revealed that the sample with a 16 M alkaline solution and 500 kg/m³ binder content containing 2% nano-silica produced the highest compressive, flexural and split tensile strengths of 81.33 MPa, 7.875 MPa and 6.398 MPa, respectively, at the end of 90 days. Keywords: alkaline activator liquid, geopolymer concrete, ground granulated blast furnace slag, nano silica, self compacting
Procedia PDF Downloads 147
597 Pairwise Relative Primality of Integers and Independent Sets of Graphs
Authors: Jerry Hu
Abstract:
Let G = (V, E) with V = {1, 2, ..., k} be a graph. The k positive integers a₁, a₂, ..., aₖ are G-wise relatively prime if (aᵢ, aⱼ) = 1 for every {i, j} ∈ E. We use an inductive approach to give an asymptotic formula for the number of k-tuples of integers that are G-wise relatively prime. An exact formula is obtained for the probability that k positive integers are G-wise relatively prime. As a corollary, we also provide an exact formula for the probability that k positive integers have exactly r relatively prime pairs. Keywords: graph, independent set, G-wise relatively prime, probability
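The definition above is straightforward to check computationally, and the exact probabilities of the paper can be sanity-checked by simulation. Below is a small hedged sketch (the graph, integer bound, trial count and seed are arbitrary choices, not from the paper); for G a single edge, the estimate should approach the classical density of coprime pairs, 6/π² ≈ 0.6079:

```python
import math
import random

def g_wise_rel_prime(values, edges):
    """True iff gcd(a_i, a_j) == 1 for every edge {i, j} of G (vertices 1-indexed)."""
    return all(math.gcd(values[i - 1], values[j - 1]) == 1 for i, j in edges)

def estimate_probability(edges, k, trials=20000, bound=10**6, seed=1):
    """Monte Carlo estimate of the probability that k random positive
    integers (uniform on 1..bound) are G-wise relatively prime."""
    rng = random.Random(seed)
    hits = sum(
        g_wise_rel_prime([rng.randint(1, bound) for _ in range(k)], edges)
        for _ in range(trials)
    )
    return hits / trials
```

For example, with the graph consisting of the single edge {1, 2}, `estimate_probability([(1, 2)], 2)` lands close to 6/π², while non-edges of G impose no coprimality condition at all.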
Procedia PDF Downloads 91
596 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport
Authors: Marcel Schneider, Nils Nießen
Abstract:
Microscopic simulation programs enable the description of both railway operations and the preceding timetabling process. Occupation conflicts are often resolved on both process levels based on defined train priorities. These conflict resolutions produce knock-on delays for the trains involved. The sum of knock-on delays is commonly used to evaluate the quality of railway operations: it is either compared to an acceptable level of service, or the delays are evaluated economically by linear monetary functions. Without a well-founded objective function, dispatching decisions cannot be evaluated properly. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of the choice of transport and considers the behaviour of end customers. These models evaluate knock-on delays in more detail than linear monetary functions and take competing modes of transport into account. The new approach pursues the coupling of a microscopic model of railway operations with a macroscopic model of the choice of transport. It will first be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end customers change over to other transport modes. The new approach initially covers rail and road transport, but it can also be extended to air transport. The split of end customers between modes is described by the modal split. The reactions of the end customers affect the revenues of the railway undertakings. Different travel purposes involve different reserves and tolerances towards delays. Longer journey times cause, besides revenue changes, additional costs. These costs depend either on time or on the route and arise from the circulation of workers and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays.
The contribution margin is calculated for different resolution decisions of the same conflict, and the conflict resolution is improved until the monetary loss is minimised. This iterative process thus determines an optimal conflict resolution by observing the change in the contribution margin. Furthermore, a monetary value can also be determined for each dispatching decision. Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations
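As an illustration of the iterative evaluation described above, the sketch below compares two hypothetical resolutions of one conflict by their contribution margin. All numbers (logit coefficient, road utility, fare, demand, cost rate) are invented placeholders, and the binomial logit modal split is only one possible choice-of-transport model, not the one used in the paper:

```python
import math

def rail_share(delay_min, beta=0.05, u_road=-1.0):
    """Toy binomial logit modal split: rail utility falls with knock-on delay."""
    u_rail = -beta * delay_min
    return math.exp(u_rail) / (math.exp(u_rail) + math.exp(u_road))

def contribution_margin(delays, fare=20.0, demand=500, cost_per_min=8.0):
    """Revenue retained after modal shift, minus time-dependent variable costs."""
    revenue = sum(fare * demand * rail_share(d) for d in delays)
    variable_cost = sum(cost_per_min * d for d in delays)
    return revenue - variable_cost

def best_resolution(options):
    """Pick the conflict resolution whose knock-on delays lose the least money."""
    return max(options, key=lambda item: contribution_margin(item[1]))

# Two resolutions of the same conflict: knock-on delays (min) per affected train
options = [("hold train A", [0, 6]), ("hold train B", [4, 0])]
name, delays = best_resolution(options)
```

Here the dispatcher-style choice is not the one minimising total delay minutes alone: the margin weighs how each train's passengers react to delay against the operating cost of the delay itself.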
Procedia PDF Downloads 326
595 The Use of Network Theory in Heritage Cities
Authors: J. L. Oliver, T. Agryzkov, L. Tortosa, J. Vicent, J. Santacruz
Abstract:
This paper aims to demonstrate how Network Theory can be applied to a very interesting and complex urban situation: the parts of a city which may have some patrimonial value but which, because they lack relevant architectural elements, are not considered historic in the conventional sense. We use the suburb of La Villaflora in the city of Quito, Ecuador, as our case study. We first propose a system of indicators as a tool to characterize and quantify the historic value of a geographic area. Then, we apply these indicators to the suburb of La Villaflora and use Network Theory to understand the area and propose actions. Keywords: graphs, mathematics, networks, urban studies
Procedia PDF Downloads 369
594 A Survey on Constraint Solving Approaches Using Parallel Architectures
Authors: Nebras Gharbi, Itebeddine Ghorbel
Abstract:
In recent years, with the advances of the multicore computing world, the constraint programming community has tried to benefit from the capacity of new machines and make the best use of them through several parallel schemes for constraint solving. In this paper, we propose a survey of the different approaches for solving Constraint Satisfaction Problems using parallel architectures. These approaches exploit a parallel architecture in different ways: the same problem may be solved concurrently by several solvers, or it may be split across solvers. Keywords: constraint programming, parallel programming, constraint satisfaction problem, speed-up
Procedia PDF Downloads 317
593 A Graph Theoretic Algorithm for Bandwidth Improvement in Computer Networks
Authors: Mehmet Karaata
Abstract:
Given two distinct vertices (nodes), a source s and a target t of a graph G = (V, E), the two node-disjoint paths problem is to identify two node-disjoint paths between s ∈ V and t ∈ V. Two paths are node-disjoint if they have no common intermediate vertices. In this paper, we present an algorithm with O(m) time complexity for finding two node-disjoint paths between s and t in arbitrary graphs, where m is the number of edges. The proposed algorithm has a wide range of applications in ensuring the reliability and security of sensor, mobile and fixed communication networks. Keywords: disjoint paths, distributed systems, fault-tolerance, network routing, security
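The abstract does not reproduce the paper's O(m) algorithm. As an illustration-only sketch of the problem itself, two internally node-disjoint paths can also be found with the classical node-splitting max-flow reduction (each vertex becomes an in/out pair joined by a unit-capacity arc), which is correct by Menger's theorem but runs augmenting-path max-flow rather than the paper's linear-time method:

```python
from collections import deque, defaultdict

def two_node_disjoint_paths(adj, s, t):
    """Return two internally node-disjoint s-t paths in the undirected graph
    `adj` (dict: vertex -> iterable of neighbours), or None if none exist."""
    cap = defaultdict(int)     # residual capacities
    nbrs = defaultdict(set)    # residual adjacency
    fwd = set()                # arcs that carry flow forwards

    def arc(u, v, c):
        cap[(u, v)] += c
        fwd.add((u, v))
        nbrs[u].add(v)
        nbrs[v].add(u)         # reverse residual arc

    for v in adj:
        # inner vertices may carry one path; s and t must carry both
        arc((v, 'in'), (v, 'out'), 2 if v in (s, t) else 1)
        for w in adj[v]:
            arc((v, 'out'), (w, 'in'), 1)

    src, sink = (s, 'in'), (t, 'out')

    def augment():             # one BFS augmenting path of value 1
        parent = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == sink:
                while parent[u] is not None:
                    p = parent[u]
                    cap[(p, u)] -= 1
                    cap[(u, p)] += 1
                    u = p
                return True
            for w in nbrs[u]:
                if w not in parent and cap[(u, w)] > 0:
                    parent[w] = u
                    q.append(w)
        return False

    if not (augment() and augment()):
        return None

    paths = []                 # decompose the 2 units of flow into two paths
    for _ in range(2):
        path, node = [s], src
        while node != sink:
            for w in nbrs[node]:
                if (node, w) in fwd and cap[(w, node)] > 0:
                    cap[(w, node)] -= 1   # consume this unit of flow
                    node = w
                    break
            if node[1] == 'out' and node[0] != path[-1]:
                path.append(node[0])
        paths.append(path)
    return paths
```

On the 4-cycle 1-2-3-4-1, for instance, the two paths between 1 and 3 are forced to run through 2 and 4 respectively, while on a simple path graph no second disjoint route exists and the function returns None.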
Procedia PDF Downloads 441
592 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter
Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai
Abstract:
Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy, with multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuations, motion artefacts, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. This involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume and thus segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter then produces an optimal estimate of the current state of the system by updating its previous state using the available measurements in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images.
One limitation of the current algorithm is that the curve representation of the retinal layer boundary does not work well when the boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different representation of the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods. Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking
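To make the tracking step concrete, here is a minimal hedged sketch of a scalar Kalman filter following a single boundary-curve coefficient from slice to slice. It assumes a random-walk state model with illustrative noise variances q and r; the paper's actual state space holds all the curve coefficients at once, so this is a deliberate simplification:

```python
def track_coefficient(measurements, q=1e-3, r=0.05):
    """Scalar Kalman filter: predict (state assumed constant between slices,
    variance grows by q), then update with each slice's measured coefficient
    (measurement noise variance r)."""
    x, p = measurements[0], 1.0      # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                       # predict: uncertainty grows between slices
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update: blend prediction and measurement
        p *= 1 - k                   # posterior variance shrinks after update
        estimates.append(x)
    return estimates
```

Run on noisy measurements of a coefficient that is really constant, the estimates settle near the true value while the raw per-slice measurements keep fluctuating, which is exactly the smoothing behaviour the segmentation relies on as it moves through the volume.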
Procedia PDF Downloads 482