Search results for: similarity calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1896

1596 Application of BIM Model Data to Estimate ROI for Robots and Automation in Construction Projects

Authors: Brian Romansky

Abstract:

Many practical robots and semi-autonomous systems are commercially available for use in a wide variety of construction tasks. Adoption of these technologies has the potential to reduce the time and cost to deliver a project, reduce variability and risk in delivery time, increase quality, and improve safety on the job site. These benefits come with a cost for equipment rental or contract fees, access to specialists to configure the system, and time needed for set-up and support of the machines while in use. Calculation of the net ROI (Return on Investment) requires detailed information about the geometry of the site, the volume of work to be done, and the overall project schedule, as well as data on the capabilities and past performance of available robotic systems. Assembling the required data and comparing the ROI for several options is complex and tedious. Many project managers will only consider the use of a robot in targeted applications where the benefits are obvious, resulting in low levels of adoption of automation in the construction industry. This work demonstrates how data already resident in many BIM (Building Information Model) projects can be used to automate ROI estimation for a sample set of commercially available construction robots. Calculations account for set-up and operating time along with scheduling of the support tasks required while the automated technology is in use. Configuration parameters allow for prioritization of time, cost, or safety as the primary benefit of the technology. A path toward integration and use of automatic ROI calculation with a database of available robots in a BIM platform is described.
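
A minimal sketch of the kind of ROI comparison described above, assuming work quantities have already been extracted from the BIM model; all class names, rates, and parameter values here are hypothetical illustrations, not the authors' tool.

```python
# Hypothetical sketch: BIM-derived quantities feeding a simple ROI comparison.
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    rental_per_day: float      # equipment rental or contract fee
    setup_days: float          # set-up and configuration time
    support_per_day: float     # specialist support cost while in use
    rate_units_per_day: float  # past-performance production rate

def roi(work_units: float, manual_cost: float, robot: Robot) -> float:
    """Net ROI = (baseline manual cost - robotic cost) / robotic cost."""
    days = robot.setup_days + work_units / robot.rate_units_per_day
    robot_cost = days * (robot.rental_per_day + robot.support_per_day)
    return (manual_cost - robot_cost) / robot_cost

# Example: 500 m2 of finishing work taken from a BIM quantity take-off.
bot = Robot("finishing-robot", rental_per_day=900, setup_days=2,
            support_per_day=300, rate_units_per_day=120)
print(f"ROI: {roi(work_units=500, manual_cost=14000, robot=bot):.2f}")
```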

Keywords: automation, BIM, robot, ROI.

Procedia PDF Downloads 71
1595 Electronic Structure Calculation of AsSiTeB/SiAsBTe Nanostructures Using Density Functional Theory

Authors: Ankit Kargeti, Ravikant Shrivastav, Tabish Rasheed

Abstract:

The electronic structure calculation for nanoclusters of the AsSiTeB/SiAsBTe quaternary semiconductor alloy belonging to the III-V Group elements was performed. The motivation for this research was to obtain accurate electronic and geometric data for small nanoclusters of AsSiTeB/SiAsBTe in the gaseous form. The two clusters, one in the linear form and the other in the bent form, were studied within the framework of Density Functional Theory (DFT) using the B3LYP functional and the LANL2DZ basis set with the software package Gaussian 16. We discuss the optimized energy, the frontier orbital energy gap in terms of HOMO-LUMO, the dipole moment, ionization potential, electron affinity, binding energy, embedding energy, and the density of states (DoS) spectrum for both structures. The important finding for the predicted nanostructures is that both have wide band gaps: the linear structure has a band gap energy (Eg) of 2.375 eV and the bent structure an Eg of 2.778 eV. Therefore, these structures can be utilized as wide band gap semiconductors. Both structures also have high electron affinities, 4.259 eV for the linear structure and 3.387 eV for the bent structure, showing that the electron-acceptor capability is high for both forms. A widely known application of such compounds is in light-emitting diodes, owing to their wide band gap nature.
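
A small sketch of the standard relations behind descriptors of this kind, assuming orbital and total energies (in eV) have already been extracted from a Gaussian 16 run; the Koopmans relations are approximations, and the numbers below are placeholders, not the paper's data.

```python
# Hypothetical sketch: HOMO-LUMO gap, IP, and EA from extracted energies.
def descriptors(e_homo, e_lumo, e_neutral=None, e_cation=None, e_anion=None):
    out = {"Eg (HOMO-LUMO gap)": e_lumo - e_homo,
           "IP (Koopmans approx.)": -e_homo,   # ionization potential ~ -E_HOMO
           "EA (Koopmans approx.)": -e_lumo}   # electron affinity ~ -E_LUMO
    if None not in (e_neutral, e_cation, e_anion):  # Delta-SCF definitions
        out["IP (dSCF)"] = e_cation - e_neutral
        out["EA (dSCF)"] = e_neutral - e_anion
    return out

for k, v in descriptors(e_homo=-6.1, e_lumo=-3.7).items():
    print(f"{k}: {v:.3f} eV")
```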

Keywords: density functional theory, DFT, density functional theory, nanostructures, HOMO-LUMO, density of states

Procedia PDF Downloads 97
1594 The Impact of Reducing Road Traffic Speed in London on Noise Levels: A Comparative Study of Field Measurement and Theoretical Calculation

Authors: Jessica Cecchinelli, Amer Ali

Abstract:

The continuing growth in road traffic and its impact on pollution levels and safety, especially in urban areas, have led local and national authorities to reduce traffic speed and flow in major towns and cities. Various boroughs of London have recently reduced the in-city speed limit from 30 mph to 20 mph, mainly to calm traffic, improve safety, and reduce noise and vibration. This paper reports detailed field measurements, using a noise sensor and analyser, together with the corresponding theoretical calculations and analysis of noise levels on a number of roads in the central London Borough of Camden, where the speed limit was reduced from 30 mph to 20 mph on all roads except the major routes of Transport for London (TfL). The measurements, which included the key noise levels and scales on residential streets and main roads, were conducted during normal and rush hours on weekdays and weekends. The theoretical calculations were done according to the UK procedure 'Calculation of Road Traffic Noise 1988', with conversion to the European L-day, L-evening, L-night, and L-den and other important levels. The current study also includes comparable data and analysis from previously measured noise in the Borough of Camden and other boroughs of central London. Classified traffic flow and speed on the roads concerned were observed and used in the calculation part of the study. Relevant data and a description of the weather conditions are reported. The paper also reports a field survey, in the form of face-to-face interview questionnaires, carried out in parallel with the noise measurements in order to ascertain the opinions and views of local residents and workers in the reduced-speed 20 mph zones. The main findings are that the reduction in speed reduced the noise pollution in the studied zones and that the measured and calculated noise levels for each speed zone are closely matched. The survey found that local residents and workers in the 20 mph zones supported the scheme and felt that it had improved the quality of life in their areas, giving a sense of calmness and safety, particularly for families with children and the elderly, and encouraging pedestrians and cyclists. The key conclusions are that lowering the speed limit in built-up areas would not just reduce the number of serious accidents but would also reduce noise pollution and promote clean modes of transport, particularly walking and cycling. The details of the site observations and the corresponding calculations, together with critical comparative analysis and relevant conclusions, are reported in the full version of the paper.
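
For reference, the European L-den combination mentioned above follows the standard day-evening-night formula of Directive 2002/49/EC; the sketch below applies it to illustrative levels, not the Camden measurements.

```python
# Standard L_den combination (Directive 2002/49/EC); inputs are illustrative.
import math

def l_den(l_day: float, l_evening: float, l_night: float) -> float:
    """Day-evening-night level with +5 dB evening and +10 dB night penalties."""
    s = (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10))
    return 10 * math.log10(s / 24)

print(f"L_den = {l_den(65.0, 63.0, 58.0):.1f} dB(A)")
```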

Keywords: noise calculation, noise field measurement, road traffic noise, speed limit in London, survey of people satisfaction

Procedia PDF Downloads 410
1593 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment

Authors: Isabela Moreira Queiroz

Abstract:

Slope stability analyses are largely carried out by deterministic methods and evaluated through a single factor of safety. Although it is known that geotechnical parameters can show great dispersion, such analyses treat them as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of safety factor values and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates) and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (FOSM, Monte Carlo and Rosenblueth) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and perform the consequent risk analysis, which is used to calculate the risk and to analyze mitigation and control solutions. The results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to carry out a thorough assessment of the geological-geotechnical model, incorporating uncertainty into feasibility, design, construction, operation and closure by means of risk management.
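
A minimal Monte Carlo sketch of the probabilistic workflow described above: sample uncertain soil parameters, compute a factor of safety (FS), and estimate the probability of failure P(FS < 1). The infinite-slope FS formula and all parameter values are illustrative assumptions, not the paper's hypothetical slope.

```python
# Monte Carlo probability of failure for an assumed infinite-slope model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
phi = np.radians(rng.normal(30, 3, n))      # friction angle (deg -> rad)
c = rng.normal(10.0, 2.5, n)                # cohesion (kPa)
gamma, z, beta = 18.0, 5.0, np.radians(25)  # unit weight, depth, slope angle

# Infinite slope: FS = [c + gamma*z*cos^2(beta)*tan(phi)]
#                    / [gamma*z*sin(beta)*cos(beta)]
fs = (c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)) / (
    gamma * z * np.sin(beta) * np.cos(beta))

pf = np.mean(fs < 1.0)                      # probability of failure
consequence = 2.0e6                         # assumed consequence cost
print(f"mean FS = {fs.mean():.2f}, P(failure) = {pf:.4f}, "
      f"risk = {pf * consequence:,.0f}")
```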

Keywords: probabilistic methods, risk assessment, risk management, slope stability

Procedia PDF Downloads 370
1592 Ground Deformation Module for the New Laboratory Methods

Authors: O. Giorgishvili

Abstract:

One of the important characteristics for the calculation of foundations is the modulus of deformation (E0). The main goal of calculating building foundations for deformation is to keep the settlement of the base, and the differential settlements, within limits that do not cause cracks or changes in design levels that would endanger the normal operation of the buildings and their individual structures. As is known from the literature and from practice, the modulus of deformation is determined by two basic methods: the laboratory method, a soil compression test without lateral expansion, and soil testing in field conditions. The deformation modulus determined by the field method is closer to the actual modulus of the soil, but the complexity and cost of the tests often rule out field determination, so the modulus is commonly determined by the compression method without lateral expansion. We therefore introduce a new laboratory method for determining the ground modulus of deformation that allows lateral expansion and thus reflects the actual modulus more accurately, closer to the value determined by the field method. Tests and results showed that the proposed method yields a deformation modulus closer to the results obtained in the field, and thus reflects the real behavior of foundations more accurately than the compression method.

Keywords: build, deformation modulus, foundations, ground, laboratory research

Procedia PDF Downloads 354
1591 A Hybrid Watermarking Scheme Using Discrete and Discrete Stationary Wavelet Transformation for Color Images

Authors: Bülent Kantar, Numan Ünaldı

Abstract:

This paper presents a new method for robust and invisible digital watermarking of color images, using color images as the watermark. Watermarking is performed in the frequency domain, using the discrete wavelet transform (DWT) and the discrete stationary wavelet transform (DSWT). Low, medium and high frequency coefficients are obtained by applying a two-level DWT to the original image. A one-level DSWT is then applied separately to each frequency band of the two-level DWT, and a watermark is added to the low-frequency coefficients obtained from each one-level DSWT. Since watermarks are added for all frequency bands of the two-level DWT, a total of four watermarks are embedded in the original image. To recover the watermark, the two-level DWT and one-level DSWT are applied to both the original and the watermarked image, and the watermark is obtained from the difference of the DSWT low-frequency coefficients. Four watermarks are thus recovered, one from each frequency band of the two-level DWT. The recovered watermarks are compared with the true watermark to obtain similarity scores, and the final watermark is taken from the highest similarity values. The proposed watermarking method was tested against geometric and image-processing attacks, and the results show that it is robust and invisible. Combining the contributions from all frequency bands of the two-level DWT improves recovery of the watermark from the watermarked image. The watermarks are converted to binary form before being embedded, which gives better watermark recovery under geometric and image-processing attacks.
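
A condensed sketch of the embedding idea, assuming the PyWavelets library: a two-level DWT of the host image, a one-level stationary transform (SWT) of the low-frequency band, and additive insertion of a watermark there. The wavelet choice ('haar'), the strength alpha, and the single-band embedding are illustrative simplifications of the four-band scheme.

```python
# Hypothetical sketch of DWT + SWT watermark embedding and informed extraction.
import numpy as np
import pywt

host = np.zeros((256, 256))                    # stand-in for one RGB channel
mark = np.random.default_rng(1).random((64, 64))
alpha = 0.05                                   # embedding strength

coeffs = pywt.wavedec2(host, "haar", level=2)  # [cA2, (cH2,cV2,cD2), ...]
cA2 = coeffs[0]

(cA, (cH, cV, cD)), = pywt.swt2(cA2, "haar", level=1)
cA_marked = cA + alpha * mark                  # embed in the SWT low band
coeffs[0] = pywt.iswt2([(cA_marked, (cH, cV, cD))], "haar")
watermarked = pywt.waverec2(coeffs, "haar")

# Extraction: difference of SWT low bands, rescaled by alpha.
(cA_w, _), = pywt.swt2(pywt.wavedec2(watermarked, "haar", level=2)[0],
                       "haar", level=1)
recovered = (cA_w - cA) / alpha
print("max abs recovery error:", np.abs(recovered - mark).max())
```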

Keywords: watermarking, DWT, DSWT, copyright protection, RGB

Procedia PDF Downloads 514
1590 Evolution of Approaches to Cost Calculation in the Conditions of the Modern Russian Economy

Authors: Elena Tkachenko, Vladimir Kokh, Alina Osipenko, Vladislav Surkov

Abstract:

The modern period of development of the Russian economy is fraught with a number of problems related to limitations in the use of traditional planning and financial management tools. Restrictions on the use of foreign software when performing an order of the Russian Government, on the one hand, and sanctions limiting the support of the major ERP and MRP II systems in the Russian Federation, on the other, make it necessary to return to the basics of developing budgeting and analysis systems for industrial enterprises. Cost calculation theory thus becomes the theoretical foundation for the development of industrial cost management systems. Based on the foregoing, it is fair to assume that the development of a working managerial accounting model for an industrial enterprise using an automated enterprise resource management system should rest upon the concept of the inevitability of alterations to business processes. On the other hand, optimized business processes make the architecture of financial analytics more transparent and permit the use of all the benefits of data cubes. Metrics and indicator slices provide an online assessment of the state of key business processes at a given moment in time, which considerably improves the quality of managerial decisions. Therefore, the bilateral sanctions situation has boosted the development of corporate business analytics and taken industrial companies to the next level of understanding of their business processes.

Keywords: cost calculation, ERP, OLAP, modern Russian economy

Procedia PDF Downloads 203
1589 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction of energy consumption and of greenhouse gas emissions. The paper presents a study aiming to provide a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions and a subsequent selection of measures aimed at improving energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which allows the real energy needs of the building to be simulated as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow effective building-HVAC system combinations to be identified. The second step consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which enables different system configurations to be compared from the energy, environmental and financial points of view, with an analysis of investment and of operation and maintenance costs, thus allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment of innovative, low-environmental-impact energy systems. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model RETScreen for different design options. For example, the analysis performed on the case-study building found that the most suitable plant solution, taking into account technical, economic and environmental aspects, is one based on a CCHP system (Combined Cooling, Heating, and Power) using an internal combustion engine.

Keywords: energy, system, building, cooling, electrical

Procedia PDF Downloads 561
1588 Different Views and Evaluations of IT Artifacts

Authors: Sameh Al-Natour, Izak Benbasat

Abstract:

The introduction of a multitude of new and interactive e-commerce information technology (IT) artifacts has impacted adoption research. Rather than solely functioning as productivity tools, new IT artifacts assume the roles of interaction mediators and social actors. This paper describes the varying roles assumed by IT artifacts, and proposes and distinguishes between four distinct foci of how the artifacts are evaluated. It further proposes a theoretical model that maps the different views of IT artifacts to four distinct types of evaluations.

Keywords: IT adoption, IT artifacts, similarity, social actor

Procedia PDF Downloads 374
1587 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers and hairs). Nevertheless, their use has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms stand out as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical comparisons of their performance. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The genotypes matched by ETLM showed almost no similarity with those matched by the other methods. The different clustering system and error model of ETLM seem to lead to a more selective matching, although ETLM had the longest processing time and the least friendly interface of the compared methods. The population estimators performed differently across the datasets; there was a consensus between the estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, meaning different capture rates between individuals. In these examples, tolerance of heterogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering the balance of time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 126
1586 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions

Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal

Abstract:

We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters adapted to existing conditions in Morocco and other developing countries. While several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it allows the uncertainty of the emission rate to be calculated with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It improves the accuracy of previous line source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors cancel with those of adjacent sections during the simulation of a road network. In cases where the wind is parallel to the source line, the combination of a discretized source with the analytical line source formula reduces the error remarkably. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
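
A minimal sketch of the discretized line source idea mentioned above: a finite line source is approximated by a row of Gaussian-plume point sources. The power-law dispersion coefficients and all parameter values are illustrative assumptions, not the model's calibrated values.

```python
# Hypothetical sketch: line source as a sum of Gaussian plume point sources.
import numpy as np

def gaussian_point(q, u, x, y, z, h, a=0.08, b=0.06):
    """Gaussian plume from one point source; x is the downwind distance."""
    x = np.maximum(x, 1.0)                 # avoid the singularity at x = 0
    sy, sz = a * x ** 0.9, b * x ** 0.85   # assumed power-law sigmas (m)
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2))
               + np.exp(-(z + h)**2 / (2 * sz**2))))

def line_source(q_per_m, u, receptor, start, end, n=200):
    """Sum n point sources of strength q_per_m * segment length along a line."""
    pts = np.linspace(start, end, n)
    seg = np.linalg.norm(np.asarray(end) - np.asarray(start)) / (n - 1)
    rx, ry, rz = receptor
    c = 0.0
    for px, py in pts:
        # wind assumed along +x: downwind/crosswind distances per source
        c += gaussian_point(q_per_m * seg, u, rx - px, ry - py, rz, h=0.5)
    return c

# Receptor 50 m downwind of a 200 m road emitting 1e-3 g/(m.s), wind 3 m/s.
print(line_source(1e-3, 3.0, (50.0, 0.0, 1.5), (0.0, -100.0), (0.0, 100.0)))
```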

Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport

Procedia PDF Downloads 424
1585 Identification of Analogues to EGCG for the Inhibition of HPV E7: A Fundamental Insights through Structural Dynamics Study

Authors: Murali Aarthy, Sanjeev Kumar Singh

Abstract:

High-risk human papillomaviruses are strongly associated with carcinoma of the cervix and other genital tumors. Cervical cancer develops through a multistep process in which increasingly severe premalignant dysplastic lesions, called cervical intraepithelial neoplasia, progress to invasive cancer. The oncoprotein E7 of human papillomavirus, expressed in the lower epithelial layers, drives cells into S-phase, creating an environment conducive to viral genome replication and cell proliferation. Replication of the virus occurs in the terminally differentiating epithelium and requires the activation of cellular DNA replication proteins. To date, no suitable drug molecule is available to treat HPV infection; the identification of potential drug targets and the development of novel anti-HPV chemotherapies with unique modes of action are therefore needed. Hence, our present study aimed to identify potential inhibitors analogous to EGCG, a green tea molecule considered safe for mammalian systems. A 3D similarity search, using EGCG as the query, of a natural small-molecule library from a natural product database identified 11 potential hits based on their similarity scores. Structure-based docking strategies were applied to the potential hits, and the key residues of the protein interacting with the compounds were identified through simulation studies and binding free energy calculations. The conformational changes between the apoprotein and the complex were analyzed with the simulations, and the results demonstrated that the dynamical and structural effects observed in the protein were induced by the compounds, indicating their dominance over the oncoprotein. Overall, our study provides structural insights into the identified potential hits and EGCG; the analogous compounds identified can thus be potent inhibitors of the HPV 16 E7 oncoprotein.

Keywords: EGCG, oncoprotein, molecular dynamics simulation, analogues

Procedia PDF Downloads 108
1584 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle

Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar

Abstract:

As the scale of the network becomes larger and more complex than before, the density of user devices is also increasing. The development of Unmanned Aerial Vehicle (UAV) networks is able to collect and transform data in an efficient way by using software-defined networks (SDN) technology. This paper proposed a three-layer distributed and dynamic cluster architecture to manage UAVs by using an AI-based resource allocation calculation algorithm to address the overloading network problem. Through separating services of each UAV, the UAV hierarchical cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks including data collection, communication channel organization, and data relaying. In this cluster, a head node and a vice head node UAV are selected considering the Central Processing Unit (CPU), operational (RAM), and permanent (ROM) memory of devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data in the head node. The k-means clustering algorithm is used in order to detect high load regions and form the UAV layered clusters. The whole process of detecting high load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as offloading traffic algorithm.
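
A small sketch of the high-load detection and head-node selection steps, assuming scikit-learn; the score weights, thresholds, and device data are illustrative assumptions, not the paper's algorithm parameters.

```python
# Hypothetical sketch: k-means high-load detection and head-node scoring.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
devices = np.vstack([rng.normal((2, 2), 0.3, (400, 2)),   # a hot spot
                     rng.uniform(0, 10, (200, 2))])       # background

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(devices)
load = np.bincount(km.labels_, minlength=5)               # devices per cluster
high_load = np.flatnonzero(load > load.mean() + load.std())
print("high-load cluster centers:", km.cluster_centers_[high_load])

# Head-node selection among candidate UAVs: weighted CPU/RAM/ROM/battery.
uavs = {"uav1": (0.7, 0.6, 0.9, 0.8), "uav2": (0.9, 0.8, 0.7, 0.9)}
w = np.array([0.3, 0.3, 0.1, 0.3])
head = max(uavs, key=lambda u: np.dot(w, uavs[u]))
print("head node:", head, "(runner-up acts as vice head backup)")
```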

Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles

Procedia PDF Downloads 90
1583 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE

Authors: YU Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao

Abstract:

For the impact monitoring of distributed structures, the traditional positioning methods are based on time differences; they include the four-point arc positioning method and the triangulation positioning method. In actual operation, however, both methods have errors. In this paper, the multi-agent blackboard coordination principle is used to combine the two methods. The fusion steps are: (1) the four-point arc locating agent calculates the initial point and records it in the blackboard module; (2) the triangulation agent gets its initial parameters by accessing this initial point; (3) the triangulation agent repeatedly accesses the blackboard module to update its parameters and also logs its calculated point on the blackboard; (4) when the subsequent calculated point and the initial calculated point agree within the allowable error, the whole coordination fusion process is finished. This paper presents a multi-agent collaboration method whose agent framework is JADE. The JADE platform consists of several agent containers, with agents running in each container. Thanks to the management and debugging tools of JADE, it is very convenient to deal with complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on multi-agent coordination fusion can reduce the error of the two methods.
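
A plain-Python stand-in for the blackboard fusion loop in steps (1) to (4); this is not JADE itself, and the refinement rule is a toy placeholder for the triangulation solve.

```python
# Hypothetical sketch of blackboard coordination between two locating agents.
blackboard = {}

def arc_agent():
    blackboard["point"] = (10.0, 8.0)            # step (1): initial estimate

def triangulation_agent(tol=1e-3, max_iter=100):
    x, y = blackboard["point"]                   # step (2): read parameters
    for _ in range(max_iter):
        # step (3): toy update standing in for a triangulation solve
        x_new, y_new = 0.5 * (x + 9.8), 0.5 * (y + 8.3)
        blackboard["point"] = (x_new, y_new)     # log result to blackboard
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new                  # step (4): converged
        x, y = x_new, y_new

arc_agent()
print("fused impact position:", triangulation_agent())
```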

Keywords: impact monitoring, structural health monitoring (SHM), multi-agent system (MAS), blackboard coordination, JADE

Procedia PDF Downloads 160
1582 Reducing Component Stress during Encapsulation of Electronics: A Simulative Examination of Thermoplastic Foam Injection Molding

Authors: Constantin Ott, Dietmar Drummer

Abstract:

The direct encapsulation of electronic components is an effective way of protecting them against external influences. In addition to achieving a sufficient protective effect, there are two other big challenges in satisfying the increasing demand for encapsulated circuit boards: the encapsulation process should be suitable for mass production and should impose a low load on the components. Injection molding is a method well suited to large-series production but typically associated with high component stress. In this article, two aims were pursued: first, the development of a calculation model that allows the occurring forces to be estimated from process variables and material parameters; second, the evaluation of a new approach for stress reduction by means of thermoplastic foam injection molding. For this purpose, simulation-based process data were generated with the Moldflow simulation tool, and component stresses were then calculated with the calculation model. The paper thus provides a model for estimating the forces occurring during overmolding and derives a solution method for reducing these forces. The suitability of this approach was clearly demonstrated, and a significant reduction in shear forces during overmolding was achieved. The resulting process development makes it possible to meet the two main requirements of direct encapsulation in addition to a high protective effect.

Keywords: encapsulation, stress reduction, foam injection molding, simulation

Procedia PDF Downloads 113
1581 Efficiency and Reliability Analysis of SiC-Based and Si-Based DC-DC Buck Converters in Thin-Film PV Systems

Authors: Elaid Bouchetob, Bouchra Nadji

Abstract:

This research paper compares the efficiency and reliability R(t) of SiC-based and Si-based DC-DC buck converters in thin-film PV systems with an AI-based MPPT controller. Using Simplorer/Simulink simulations, the study assesses their performance under varying conditions. The results show that the SiC-based converter outperforms the Si-based one in efficiency and cost-effectiveness, especially under high-temperature and low-irradiance conditions. It also exhibits superior reliability, particularly at high temperature and voltage. The reliability function R(t) is analyzed to assess system performance over time; the SiC-based converter demonstrates better reliability when factors such as component failure rates and system lifetime are considered. The research focuses on the buck converter's role in charging a lithium battery within the PV system. By combining the SiC-based converter and the AI-based MPPT controller, higher charging efficiency, improved reliability, and cost-effectiveness are achieved. The SiC-based converter proves superior under challenging conditions, emphasizing its potential for optimizing PV system charging. These findings contribute insights into the efficiency, reliability, and reliability calculation of SiC-based and Si-based converters in PV systems. The advantages of SiC technology, coupled with advanced control strategies, promote efficient and sustainable energy storage using lithium batteries. The research supports PV system design and optimization for reliable renewable energy utilization.
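
A minimal sketch of the exponential reliability model commonly used for such comparisons, R(t) = exp(-lambda * t) for a constant failure rate; the failure rates below are illustrative assumptions, not measured SiC/Si data.

```python
# Constant-failure-rate reliability R(t) with assumed failure rates.
import math

def reliability(failure_rate_per_hour: float, hours: float) -> float:
    return math.exp(-failure_rate_per_hour * hours)

lam_sic, lam_si = 2e-6, 5e-6     # assumed failure rates (failures/hour)
for years in (1, 5, 10):
    t = years * 8760
    print(f"{years:>2} y: R_SiC = {reliability(lam_sic, t):.3f}, "
          f"R_Si = {reliability(lam_si, t):.3f}")
```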

Keywords: efficiency, reliability, artificial intelligence, sic device, thin layer, buck converter

Procedia PDF Downloads 43
1580 Brown-Spot Needle Blight: An Emerging Threat Causing Loblolly Pine Needle Defoliation in Alabama, USA

Authors: Debit Datta, Jeffrey J. Coleman, Scott A. Enebak, Lori G. Eckhardt

Abstract:

Loblolly pine (Pinus taeda) is a leading productive timber species in the southeastern USA. Over the past three years, an emerging threat has appeared in loblolly pine plantations, expressed as successive needle defoliation followed by stunted growth and tree mortality. Given its economic significance, it has become a rising concern among landowners, forest managers, and forest health state cooperators. However, the symptoms of the disease have been somewhat confounded with root disease(s) and recurrently attributed to invasive Phytophthora species, owing to the similar nature and devastation of the diseases. This study therefore investigated the potential causal agent of this disease and characterized the fungi associated with loblolly pine needle defoliation in the southeastern USA. In addition, 70 trees were selected in seven long-term monitoring plots at Chatom, Alabama, to monitor and record the annual disease incidence and severity. Based on colony morphology and ITS-rDNA sequence data, a total of 28 species of fungi representing 17 families were recovered from diseased loblolly pine needles. The native brown-spot pathogen, Lecanosticta acicola, was the species most frequently recovered from unhealthy loblolly pine needles, in combination with other common needle cast and rust pathogen(s). Identification was confirmed by morphological similarity and amplification of the translation elongation factor 1-alpha gene region of interest. Tagged trees were consistently found chlorotic and defoliated from 2019 to 2020. The current emergence of the brown-spot pathogen causing loblolly pine mortality necessitates investigation of the role of changing climatic conditions, which might be associated with increased pathogen pressure on loblolly pines in the southeastern USA.

Keywords: brown-spot needle blight, loblolly pine, needle defoliation, plantation forestry

Procedia PDF Downloads 136
1579 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating values at unobserved locations from observed values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points; hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANN has become a popular prediction tool as it overcomes certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project, Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predictions). For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding base maps produced from a smaller number of real samples. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples, while achieving the specified level of accuracy.
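
A compact sketch of the prediction step, assuming scikit-learn: train a back-propagation feed-forward network on the sampled points, predict macronutrient values at unsampled locations, then pass the combined actual plus virtual values to the kriging step. The data, network size, and shapes are illustrative, not the study's configuration.

```python
# Hypothetical sketch: ANN-predicted "virtual" samples to augment kriging.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
xy = rng.uniform(0, 1000, (118, 2))          # sampled locations (m)
n_val = 50 + 0.02 * xy[:, 0] + rng.normal(0, 2, 118)  # e.g. nitrogen values

scaler = StandardScaler().fit(xy)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(scaler.transform(xy), n_val)

virtual_xy = rng.uniform(0, 1000, (118, 2))  # unsampled points
virtual_n = net.predict(scaler.transform(virtual_xy))

# 118 actual + 118 virtual values would then feed the kriging interpolation.
all_xy = np.vstack([xy, virtual_xy])
all_n = np.concatenate([n_val, virtual_n])
print(all_xy.shape, all_n.shape)
```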

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 54
1578 Implementation of Algorithm K-Means for Grouping District/City in Central Java Based on Macro Economic Indicators

Authors: Nur Aziza Luxfiati

Abstract:

Clustering partitions a data set into subsets or groups in such a way that elements share properties with a high level of similarity within one group and a low level of similarity between groups. The k-means algorithm is one of the most widely used clustering algorithms in scientific and industrial applications because its basic idea is very simple. This research applies k-means clustering to the problem of national development imbalances between regions in Central Java Province, based on macroeconomic indicators. The data sample is secondary data obtained from the Central Java Provincial Statistics Agency on macroeconomic indicators, part of the published 2019 National Socio-Economic Survey (Susenas) data. Outliers were screened using the z-score, and the number of clusters (k) was determined using the elbow method. After the clustering process was carried out, the result was validated using the Between-Class Variation (BCV) and Within-Class Variation (WCV) methods. Outlier detection using z-score normalization showed no outliers. In addition, the clustering test obtained a ratio value that was not high, namely 0.011%. There are two district/city clusters in Central Java Province with similar economies based on the variables used: a first cluster with a high economic level consisting of 13 districts/cities, and a second cluster with a low economic level consisting of 22 districts/cities. Within the second, low-economy cluster, the districts/cities were further grouped by their similarity on individual macroeconomic indicators: 20 districts by Gross Regional Domestic Product, 19 districts by Poverty Depth Index, 5 districts by Human Development, and 10 districts by Open Unemployment Rate.
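
A short sketch of the pipeline described above, assuming scikit-learn: z-score normalization, outlier screening, k-means, and a BCV/WCV-style validation ratio. The synthetic indicator table and the outlier threshold are purely illustrative.

```python
# Hypothetical sketch: z-score + k-means + BCV/WCV-style validation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(2, 1, (13, 4)),    # higher-economy districts
               rng.normal(-1, 1, (22, 4))])  # lower-economy districts

Z = StandardScaler().fit_transform(X)        # z-score normalization
print("outliers (|z| > 3):", int((np.abs(Z) > 3).any(axis=1).sum()))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z)

# Within-class variation: k-means inertia. Between-class variation:
# size-weighted spread of cluster centers around the grand mean.
wcv = km.inertia_
sizes = np.bincount(km.labels_)
bcv = sum(n * np.sum((c - Z.mean(axis=0)) ** 2)
          for n, c in zip(sizes, km.cluster_centers_))
print(f"cluster sizes: {sizes}, WCV/BCV ratio: {wcv / bcv:.4f}")
```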

Keywords: clustering, K-Means algorithm, macroeconomic indicators, inequality, national development

Procedia PDF Downloads 145
1577 A Sectional Control Method to Decrease the Accumulated Survey Error of Tunnel Installation Control Network

Authors: Yinggang Guo, Zongchun Li

Abstract:

In order to decrease the accumulated survey error of the tunnel installation control network of a particle accelerator, a sectional control method is proposed. Firstly, the rule by which positional error accumulates with the length of the control network is obtained by simulation calculation, according to the shape of the tunnel installation control network. Then, the RMS of the horizontal positional precision of the tunnel backbone control network is taken as the threshold: when the accumulated error exceeds the threshold, the tunnel installation control network is divided into reasonable subsections. In each segment, the middle survey station is taken as the datum for an independent adjustment calculation. Finally, by taking the backbone control points as faint datums, a weighted partial-parameter adjustment is performed with the adjustment results of each segment and the coordinates of the backbone control points; the subsections are joined and unified into the global coordinate system during the adjustment process. An installation control network of a linac with a length of 1.6 km is simulated. The RMS of the positional deviation of the proposed method is 2.583 mm, and the RMS of the difference in positional deviation between adjacent points reaches 0.035 mm. Experimental results show that the proposed sectional control method not only effectively decreases the accumulated survey error but also guarantees the relative positional precision of the installation control network, so it can be applied in the data processing of tunnel installation control networks, especially for large particle accelerators.
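
A toy simulation of the underlying idea, under the simplifying assumption that positional error accumulates like a random walk along the network and that re-datuming each section at its middle station bounds the accumulation; the magnitudes and section length are illustrative only.

```python
# Hypothetical sketch: error accumulation vs. sectional re-datuming.
import numpy as np

rng = np.random.default_rng(1)
n_sta, sigma = 200, 0.05            # stations, per-step error (mm scale)
steps = rng.normal(0, sigma, n_sta)

acc = np.cumsum(steps)              # single-datum accumulation
print(f"whole-network RMS: {np.sqrt(np.mean(acc**2)):.3f}")

section = 40                        # stations per segment
sec_err = []
for s in np.split(steps, n_sta // section):
    c = np.cumsum(s)
    sec_err.append(c - c[len(c) // 2])   # datum at the middle station
sec_err = np.concatenate(sec_err)
print(f"sectional RMS:     {np.sqrt(np.mean(sec_err**2)):.3f}")
```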

Keywords: alignment, tunnel installation control network, accumulated survey error, sectional control method, datum

Procedia PDF Downloads 171
1576 On q-Non-extensive Statistics with Non-Tsallisian Entropy

Authors: Petr Jizba, Jan Korbel

Abstract:

We combine the axiomatics of Rényi with a q-deformed version of the Khinchin axioms to obtain a measure of information (i.e., entropy) that accounts both for systems with embedded self-similarity and for non-extensivity. We show that the entropy thus obtained is uniquely determined as a one-parameter family of information measures. The ensuing maximal-entropy distribution is phrased in terms of a special function known as the Lambert W-function. We analyze the corresponding 'high' and 'low-temperature' asymptotics and reveal a non-trivial structure of the parameter space.
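
For background, the standard one-parameter entropies that the two axiomatic ingredients correspond to are shown below; these are textbook definitions only, not the hybrid measure derived in the paper.

```latex
% Background definitions (Renyi and Tsallis entropies); both reduce to the
% Shannon entropy as q -> 1.
S_q^{\mathrm{R}} = \frac{1}{1-q}\,\ln\!\Big(\sum_i p_i^{\,q}\Big),
\qquad
S_q^{\mathrm{T}} = \frac{1}{q-1}\Big(1 - \sum_i p_i^{\,q}\Big),
\qquad
q \to 1 \;\Rightarrow\; -\sum_i p_i \ln p_i .
```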

Keywords: multifractals, Rényi information entropy, THC entropy, MaxEnt, heavy-tailed distributions

Procedia PDF Downloads 427
1575 Using of the Fractal Dimensions for the Analysis of Hyperkinetic Movements in the Parkinson's Disease

Authors: Sadegh Marzban, Mohamad Sobhan Sheikh Andalibi, Farnaz Ghassemi, Farzad Towhidkhah

Abstract:

Parkinson's disease (PD), which is characterized by tremor at rest, rigidity, akinesia or bradykinesia, and postural instability, affects the quality of life of the individuals involved. The concept of a fractal is most often associated with irregular geometric objects that display self-similarity. The fractal dimension (FD) can be used to quantify the complexity and self-similarity of an object such as tremor. In this work, we propose a new method for evaluating hyperkinetic movements such as tremor, using the FD and other correlated parameters, in patients suffering from PD. In this study, we used the tremor data of PhysioNet. The database consists of fourteen participants diagnosed with PD, including six patients with high-amplitude tremor and eight with low-amplitude tremor. We extracted features from the data that can distinguish between patients before and after medication. We selected fractal dimensions, including the correlation dimension, box dimension, and information dimension. The Lilliefors test was used to test for normality. A paired t-test or a Wilcoxon signed-rank test was then used to find differences between patients before and after medication, depending on whether normality was detected or not. In addition, a two-way ANOVA was used to investigate the possible association between the therapeutic effects and the features extracted from the tremor. Just one of the extracted features showed significant differences between patients before and after medication: the correlation dimension was significantly different before and after medication (p=0.009). The two-way ANOVA demonstrated significant differences only in the medication effect (p=0.033); no significant differences were found for subject differences (p=0.34) or interaction (p=0.97). The most striking result to emerge from the data is that the correlation dimension can quantify medication treatment based on tremor. This study has provided a technique for evaluating medication with a non-linear measure, namely the correlation dimension. Furthermore, this study supports the idea that fractal dimension analysis yields additional information compared with conventional spectral measures in the detection of poor-prognosis patients.
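
A minimal Grassberger-Procaccia sketch of the correlation dimension used above: the slope of log C(r) versus log r, where C(r) is the fraction of point pairs closer than r. The input signal and the embedding parameters are synthetic illustrations, not the PhysioNet tremor data.

```python
# Correlation dimension via the Grassberger-Procaccia correlation sum.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(x, radii):
    d = pdist(x)                                  # pairwise distances
    c = np.array([(d < r).mean() for r in radii]) # correlation sums C(r)
    keep = c > 0
    slope, _ = np.polyfit(np.log(radii[keep]), np.log(c[keep]), 1)
    return slope

rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=2000))
# time-delay embedding in 3 dimensions with delay 5 (illustrative choices)
emb = np.column_stack([signal[:-10], signal[5:-5], signal[10:]])
radii = np.logspace(-0.5, 1.5, 12)
print(f"estimated correlation dimension: {correlation_dimension(emb, radii):.2f}")
```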

Keywords: correlation dimension, non-linear measure, Parkinson’s disease, tremor

Procedia PDF Downloads 225
1574 An Automatic Bayesian Classification System for File Format Selection

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach to the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just an unstructured text description that comprises the most important format features for a particular organisation. The file format identification method then employs a file format classifier and associated configurations to support digital preservation experts with an estimate of the required file format. Our goal is to make use of a format specification knowledge base, aggregated from different Web sources, in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format for the expert's institution. The proposed methods facilitate file format selection and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision-making for the preservation of digital content in libraries and archives, using domain expert knowledge and specifications of file formats. To facilitate decision-making, the aggregated information about the file formats is presented as a file format vocabulary comprising the most common terms characteristic of all the formats researched. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
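
A toy sketch of the naive Bayes idea, assuming scikit-learn: classify an unstructured format description against a vocabulary learned from format specifications. The three training snippets and labels are invented placeholders, not the paper's knowledge base.

```python
# Hypothetical sketch: naive Bayes over bag-of-words format descriptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "lossless raster image, wide tool support, open specification",
    "page layout, embedded fonts, fixed rendering, long-term archiving",
    "plain text, line oriented, universal readability",
]
train_labels = ["PNG", "PDF/A", "TXT"]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_texts,
                                                            train_labels)
query = "archival document format with fixed layout and embedded fonts"
probs = clf.predict_proba([query])[0]
for fmt, p in sorted(zip(clf.classes_, probs), key=lambda t: -t[1]):
    print(f"{fmt}: {p:.2f}")
```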

Keywords: data mining, digital libraries, digital preservation, file format

Procedia PDF Downloads 482
1573 Influences of Slope Inclination on the Storage Capacity and Stability of Municipal Solid Waste Landfills

Authors: Feten Chihi, Gabriella Varga

Abstract:

Landfilling is the world's most prevalent waste management strategy. However, it has become more difficult due to a lack of acceptable waste sites. In order to develop larger landfills and extend their lifespan, the purpose of this article is to expand the capacity of the construction by varying the slope inclination and to examine its effect on the safety factor. The change of capacity with inclination is determined mathematically. Using a new probabilistic calculation method that takes into account the heterogeneity of the waste layers, the safety factor for various slope angles is examined. To assess the effect of slope variation on the overall safety of landfills, over a hundred computations were performed for each angle. It is shown that capacity increases significantly with increasing inclination: passing from 1:3 to 2:3 and from 1:3 to 1:2 slope angles, the volume of waste that can be deposited increases by 40 percent and 25 percent of the initial volume, respectively. The safety factor results indicate that slopes of 1:3 and 1:2 are safe when the standard method (homogeneous waste) is used for the computation. Using the new approach, a slope with an inclination of 2:3 can also be deemed safe, even though the calculation does not account for the safety-enhancing effect of the daily cover layers. Based on the study reported in this paper, the multi-layered, non-homogeneous calculation technique better characterizes the safety factor. As it more closely resembles the actual state of landfills, the employed technique allows for more flexibility in the design parameters. This work represents a substantial advance toward landfills that are both safe and economical.
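
A toy geometric sketch of why capacity grows with steeper side slopes, under the simplifying assumption of a prism with trapezoidal cross-section of fixed base width and height; all dimensions are illustrative, not the paper's geometry.

```python
# Hypothetical sketch: fill volume of a mound with inward-leaning side slopes.
def capacity(base_width, height, length, slope_v_per_h):
    run = height / slope_v_per_h          # horizontal offset of each slope
    top_width = base_width - 2 * run      # steeper slope -> wider usable top
    return 0.5 * (base_width + top_width) * height * length

v13 = capacity(200, 20, 300, 1/3)         # 1:3 (V:H) side slopes
v12 = capacity(200, 20, 300, 1/2)
v23 = capacity(200, 20, 300, 2/3)
for name, v in [("1:3", v13), ("1:2", v12), ("2:3", v23)]:
    print(f"{name}: {v:,.0f} m3  (+{(v - v13) / v13:.0%} vs 1:3)")
```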

Keywords: landfill, municipal solid waste, slope inclination, capacity, safety factor

Procedia PDF Downloads 174
1572 Approach for the Mathematical Calculation of the Damping Factor of Railway Bridges with Ballasted Track

Authors: Andreas Stollwitzer, Lara Bettinelli, Josef Fink

Abstract:

The expansion of the high-speed rail network over the past decades has resulted in new challenges for engineers, including traffic-induced resonance vibrations of railway bridges. Excessive resonance-induced, speed-dependent accelerations of railway bridges during high-speed traffic can lead to negative consequences such as fatigue symptoms, distortion of the track, destabilisation of the ballast bed, and potentially even derailment. A realistic prognosis of bridge vibrations during high-speed traffic must not only rely on the right choice of an adequate calculation model for both bridge and train but, first and foremost, on the use of dynamic model parameters which reflect reality appropriately. However, comparisons between measured and calculated bridge vibrations are often characterised by considerable discrepancies, in that dynamic calculations overestimate the actual responses and therefore lead to uneconomical results. This gap between measurement and calculation constitutes a complex research issue and can be traced to several causes. One major cause is found in the dynamic properties of the ballasted track, more specifically in the persisting, substantial uncertainties regarding the consideration of the ballasted track (mechanical model and input parameters) in dynamic calculations. Furthermore, the discrepancy is particularly pronounced for the damping values of the bridge, as conservative values have to be used in the calculations due to normative specifications and lack of knowledge. The analysis of the dynamic behaviour of ballasted track using a large-scale test facility has been a major research topic at the Institute of Structural Engineering/Steel Construction at TU Wien in recent years. This highly specialised test facility is designed for isolated research of the dynamic stiffness and damping properties of the ballasted track, independent of the bearing structure. Based on the knowledge gained, several mechanical models for the ballasted track, consisting of one or more continuous spring-damper elements, were developed. These mechanical models can subsequently be integrated into bridge models for dynamic calculations. Furthermore, based on measurements at the test facility, model-dependent stiffness and damping parameters were determined for these mechanical models. As a result, realistic mechanical models of railway bridges with different levels of detail and sufficiently precise characteristic values are available to bridge engineers. This contribution also presents another practical application of such a bridge model: based on the bridge model, determination equations for the damping factor (as Lehr's damping factor) can be derived. This approach constitutes a method that, for the first time, makes the damping factor of a railway bridge calculable. A comparison of this mathematical approach with measured dynamic parameters of existing railway bridges illustrates, on the one hand, the apparent deviation between normatively prescribed and in-situ measured damping factors. On the other hand, it shows that the new approach, which makes it possible to calculate the damping factor, provides results close to reality and thus offers potential for minimising the discrepancy between measurement and calculation.
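
For reference, the standard definition of Lehr's damping factor from a free-decay measurement is given below; this is background only, not the determination equations derived in the paper.

```latex
% Lehr's damping factor \zeta from the logarithmic decrement \Lambda of a
% free-decay test with successive amplitudes a_n, a_{n+1}.
\Lambda = \ln\frac{a_n}{a_{n+1}}, \qquad
\zeta = \frac{\Lambda}{\sqrt{4\pi^2 + \Lambda^2}}
      \approx \frac{\Lambda}{2\pi} \quad (\Lambda \ll 1).
```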

Keywords: ballasted track, bridge dynamics, damping, model design, railway bridges

Procedia PDF Downloads 152
1571 Heat Transfer of an Impinging Jet on a Plane Surface

Authors: Jian-Jun Shu

Abstract:

A cold, thin film of liquid impinging on an isothermal hot, horizontal surface has been investigated. An approximate solution for the velocity and temperature distributions in the flow along the horizontal surface is developed, which exploits the hydrodynamic similarity solution for thin film flow. The approximate solution may provide a valuable basis for assessing flow and heat transfer in more complex settings.

Keywords: flux, free impinging jet, solid-surface, uniform wall temperature

Procedia PDF Downloads 461
1570 Insights Into Serotonin-Receptor Binding and Stability via Molecular Dynamics Simulations: Key Residues for Electrostatic Interactions and Signal Transduction

Authors: Arunima Verma, Padmabati Mondal

Abstract:

Serotonin-receptor binding plays a key role in several neurological and biological processes, including mood, sleep, hunger, cognition, learning, and memory. In this article, we performed molecular dynamics simulations to examine the key residues that play an essential role in the binding of serotonin to the G-protein-coupled 5-HT1B receptor (5-HT1BR) via electrostatic interactions. An end-point free energy calculation method (MM-PBSA) determines the stability of 5-HT1BR due to serotonin binding. Single-point mutation of polar or charged amino acid residues (Asp129, Thr134) at the binding site and calculation of the binding free energy validate the importance of these residues for the stability of the serotonin-receptor complex. Principal component analysis indicates that the serotonin-bound 5-HT1BR is more stabilized than the apo-receptor in terms of dynamical changes. The difference dynamic cross-correlation map shows a correlation between the transmembrane domain and mini-Go, which indicates signal transduction between mini-Go and the receptor. Allosteric communication analysis reveals the key nodes for signal transduction in 5-HT1BR. These results provide useful insights into the signal transduction pathways and into mutagenesis studies to regulate the functionality of the complex. The developed protocols can be applied to study local non-covalent interactions and long-range allosteric communication in any protein-ligand system for computer-aided drug design.
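
For reference, the end-point MM-PBSA decomposition assumed above has the standard general form shown below (the paper's specific energies are not reproduced):

```latex
% Standard MM-PBSA end-point binding free energy; angle brackets denote
% ensemble averages over the MD trajectory.
\Delta G_{\mathrm{bind}} =
  \langle G_{\mathrm{complex}}\rangle
  - \langle G_{\mathrm{receptor}}\rangle
  - \langle G_{\mathrm{ligand}}\rangle,
\qquad
G = E_{\mathrm{MM}} + G_{\mathrm{PB}} + G_{\mathrm{SA}} - T S_{\mathrm{conf}} .
```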

Keywords: allostery, CADD, MD simulations, MM-PBSA

Procedia PDF Downloads 65
1569 Interaction Evaluation of Silver Ion and Silver Nanoparticles with Dithizone Complexes Using DFT Calculations and NMR Analysis

Authors: W. Nootcharin, S. Sujittra, K. Mayuso, K. Kornphimol, M. Rawiwan

Abstract:

Silver has distinct antibacterial properties and has been used as a component of commercial products with many applications. The increasing number of such products creates risks of silver effects on humans and the environment, such as the symptoms of argyria and the release of silver into the environment. Therefore, the detection of silver in the aquatic environment is important. Colorimetric chemosensors are designed on the basis of ligand interactions with a metal ion that lead to a change of signal visible to the naked eye, which makes them a very useful method for this application. The dithizone ligand is considered one of the effective chelating reagents for metal ions owing to the high selectivity and sensitivity of its photochromic reaction with silver; moreover, the linear backbone of dithizone affords rotation into various isomeric forms. The present study is focused on the conformation and interaction of the silver ion and silver nanoparticles (AgNPs) with dithizone using density functional theory (DFT). The interaction parameters were determined in terms of the binding energies of the complexes, with geometry optimization, frequency calculation, and binding energy calculation performed with the density functional approach B3LYP and the 6-31G(d,p) basis set. Moreover, the interaction in silver-dithizone complexes was supported by UV-Vis spectroscopy, an FT-IR spectrum simulated using B3LYP/6-31G(d,p), and ¹H NMR spectra calculated using the B3LYP/6-311+G(2d,p) method, compared with the experimental data. The results showed an ion-exchange interaction between a hydrogen of dithizone and the silver atom, with minimized binding energies of the silver-dithizone interaction, whereas the AgNPs interacted with dithizone in the form of complexes. The AgNPs-dithizone complexes were further confirmed using a transmission electron microscope (TEM). Therefore, the results can provide useful information for the determination of complex interactions using computer simulation analysis.
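
For reference, the conventional supermolecular expression behind binding energies of this kind is shown below in its general form; the counterpoise (BSSE) term is optional, and this is background, not the paper's specific values.

```latex
% Supermolecular binding energy of the complex from total DFT energies.
E_{\mathrm{bind}} = E_{\mathrm{complex}}
  - E_{\mathrm{Ag}} - E_{\mathrm{dithizone}} \; (+\,\delta_{\mathrm{BSSE}}) .
```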

Keywords: silver nanoparticles, dithizone, DFT, NMR

Procedia PDF Downloads 193
1568 Improving the Global Competitiveness of SMEs by Logistics Transportation Management: Case Study Chicken Meat Supply Chain

Authors: P. Vanichkobchinda

Abstract:

Open Vehicle Routing (OVR), a logistics transportation technique, is an approach to transportation cost reduction, especially for long-distance pickup and delivery nodes. The outstanding characteristic of OVR is that the starting node and ending node of a route are not necessarily the same, as they are in typical vehicle routing problems. This advantage enables the routing to flow continuously, as the vehicle does not always return to its home base. This research aims to develop a heuristic for the open vehicle routing problem with pickup and delivery under time window and loading capacity constraints, minimizing the total distance. The proposed heuristic is based on the insertion method, a simple method suited to rapid calculation that allows new transportation requirements to be inserted along the original routes. Cost comparisons between the proposed insertion heuristic and the nearest neighbor method that companies currently use show that the insertion heuristic yields an average cost saving of 34.3 percent. Moreover, the proposed heuristic gave superior solutions in all types of test problems. In conclusion, the proposed heuristic can effectively and efficiently solve the open vehicle routing problem.
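
A simplified sketch of the insertion idea for open routes: each new stop is placed at the position that increases the total distance least, and the route does not return to the depot. Capacity is checked; time windows are omitted here for brevity, and all coordinates and demands are illustrative.

```python
# Hypothetical sketch: cheapest-insertion construction of one open route.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route, pts):
    return sum(dist(pts[u], pts[v]) for u, v in zip(route, route[1:]))

def insert_stop(route, stop, pts):
    """Try every insertion position; return the cheapest resulting route."""
    best = None
    for i in range(1, len(route) + 1):   # position 0 is the start node
        cand = route[:i] + [stop] + route[i:]
        cost = route_length(cand, pts)
        if best is None or cost < best[0]:
            best = (cost, cand)
    return best[1]

pts = {"start": (0, 0), "a": (2, 1), "b": (5, 0), "c": (3, 4), "d": (6, 3)}
demand = {"a": 2, "b": 3, "c": 2, "d": 4}
capacity = 11

route, load = ["start"], 0
for stop in sorted(demand, key=lambda s: dist(pts["start"], pts[s])):
    if load + demand[stop] <= capacity:  # loading capacity constraint
        route = insert_stop(route, stop, pts)
        load += demand[stop]

print("open route:", " -> ".join(route),
      f"| length = {route_length(route, pts):.2f}")
```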

Keywords: business competitiveness, cost reduction, SMEs, logistics transportation, VRP

Procedia PDF Downloads 671
1567 Thomas Kuhn, the Accidental Theologian: An Argument for the Similarity of Science and Religion

Authors: Dominic McGann

Abstract:

Applying Kuhn's model of paradigm shifts in science to cases of doctrinal change in religion has been a common area of study in recent years. Few authors, however, have sought an explanation for the ease with which this model of theory change in science can be applied to cases of religious change. In order to provide such an explanation of this analytic phenomenon, this paper aims to answer one central question: why can a theory that was intended for the analysis of the history of science be applied to something as disparate as the doctrinal history of religion with little to no modification? By way of answering this question, this paper begins with an explanation of Kuhn's model and its applications in the field of religious studies. Following this, Massa's recently proposed explanation for this phenomenon, and its notable flaws, will be examined by way of framing the central proposal of this article: that the operative parts of scientific and religious changes function on the same fundamental concept of a change in understanding. Focusing its argument on this key concept, this paper seeks to illustrate its operation in cases of religious conversion and in Kuhn's notion of the incommensurability of different scientific paradigms. The conjecture of this paper is that just as a Pagan-turned-Christian ceases to hear Thor's hammer in a clap of thunder, so too does a Ptolemaic-turned-Copernican astronomer cease to see the Sun orbiting the Earth when viewing a sunrise. In both cases, the agent in question has undergone a similar change in universal understanding, which provides us with a fundamental connection between changes in religion and changes in science. Following an exploration of this connection, this paper considers the implications that such a connection has for the concept of the division between religion and science. This leads, in turn, to the conclusion that religion and science are more alike than they are opposed with regard to the fundamental notion of understanding, thereby providing an answer to our central question. The major finding of this paper is that Kuhn's model can be applied to religious cases so easily because changes in science and changes in religion operate on the same type of change in understanding. Therefore, in summary, science and religion share a crucial similarity and are not as disparate as they first appear.

Keywords: Thomas Kuhn, science and religion, paradigm shifts, incommensurability, insight and understanding, philosophy of science, philosophy of religion

Procedia PDF Downloads 145