Search results for: Approaches
104 Spectral Mixture Model Applied to Cannabis Parcel Determination
Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara
Abstract:
Many research projects require accurate delineation of the different land cover types of an agricultural area. This is especially critical for identifying specific plants such as cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transition between different secondary succession stages make vegetation classification difficult when using traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees, annual crops, or grain, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixture distribution modeling approach is applied to classify pure and mixed cannabis parcels using WorldView-2 imagery in the Lakes region of Turkey. Five land use types (sunflower, maize, bare soil, and pure and mixed cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from spectral signatures of seven bands (Blue-Green-Yellow-Red-Red Edge-NIR1-NIR2) were randomly split into 80% training and 20% test data. The Gaussian mixture distribution model approach proved to be an effective and convenient way to exploit very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall classification accuracies, the Gaussian mixture distribution model was found to be very successful at image classification tasks. The approach is sensitive enough to capture illegal cannabis planting areas in large plains, and it can also be used to monitor and detect such areas from their spectral reflectance.
Keywords: Gaussian mixture discriminant analysis, spectral mixture model, WorldView-2, land parcels.
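As an illustration of the classification idea described above, the sketch below fits one Gaussian mixture per land-use class on band-ratio features and assigns test samples by maximum class likelihood. The class names, component count, and synthetic data are assumptions for demonstration, not the paper's constrained GMDA or its WorldView-2 data.

```python
# Hedged sketch of Gaussian mixture discriminant classification of spectral
# band-ratio features, in the spirit of the GMDA unmixing described above.
# Class names, component counts, and data are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["sunflower", "maize", "bare_soil", "cannabis_pure", "cannabis_mixed"]

# Synthetic stand-in for the band-ratio features (here: 10 ratios per sample).
train = {c: rng.normal(loc=i, scale=1.0, size=(80, 10)) for i, c in enumerate(classes)}
test_x = np.vstack([rng.normal(loc=i, scale=1.0, size=(20, 10)) for i in range(len(classes))])
test_y = np.repeat(np.arange(len(classes)), 20)

# Fit one Gaussian mixture per class; classify by maximum class log-likelihood.
models = {c: GaussianMixture(n_components=2, covariance_type="full",
                             random_state=0).fit(x) for c, x in train.items()}
log_lik = np.column_stack([models[c].score_samples(test_x) for c in classes])
pred = log_lik.argmax(axis=1)
print("overall accuracy:", (pred == test_y).mean())
```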
103 A Probabilistic Reinforcement-Based Approach to Conceptualization
Authors: Hadi Firouzi, Majid Nili Ahmadabadi, Babak N. Araabi
Abstract:
Conceptualization strengthens intelligent systems in generalization, effective knowledge representation, real-time inference, and management of uncertain and indefinite situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a form of abstraction by which the continuous state space is organized into entities called concepts, which are connected to the action space and thus provide a compact representation of that complex space. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. This approach exploits and extends the mirror neuron's role in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, instead of building a large body of numerical knowledge, the concepts are learned gradually from rewards through interaction with the environment. Moreover, the probabilistic formation of the concepts is employed to deal with the uncertain and dynamic nature of real problems, in addition to providing the ability to generalize. These characteristics as a whole distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show the advantages of the proposed framework in terms of convergence speed as well as generalization and asymptotic behavior, because it utilizes both successful and failed attempts through the received rewards. Experimental results, in turn, show the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as maze navigation, as well as the benefits of implementing an incremental learning scenario in artificial agents.
Keywords: Concept learning, probabilistic decision making, reinforcement learning.
102 Architectural Approaches to a Sustainable Community with Floating Housing Units Adapting to Climate Change and Sea Level Rise in Vietnam
Authors: Nguyen Thi Thu Trang
Abstract:
Climate change and sea level rise are among the greatest challenges facing human beings in the 21st century. Because of sea level rise, several low-lying coastal areas around the globe are at risk of being completely submerged. In Viet Nam in particular, the rise in sea level is predicted to result in more frequently, and even permanently, inundated coastal plains. As a result, the land reserves of coastal cities will shrink in the near future, while construction ground is becoming increasingly limited due to rapid population growth. Faced with this reality, solutions are being discussed not only in the traditional vein, in which accommodation is raised or moved to higher areas, or in terms of "living with the water", but also looking forward to "living on the water". Therefore, the concept of a sustainable floating community with floating houses, based on the long and valuable historical tradition of water dwellings in Viet Nam, would be a sustainable solution for adaptation to climate change and sea level rise in coastal areas. The sustainable floating community comprises sustainability in four components: architecture, environment, socio-economics, and living quality. This research paper focuses on sustainability in the architectural component of the floating community. Through detailed architectural analysis of current floating houses and floating communities in Viet Nam, this research not only collects the valuable features of traditional architecture that need to be preserved and developed in the proposed concept, but also identifies its weaknesses that need to be addressed for optimal design of future sustainable floating communities. Based on these studies, the research provides guidelines with appropriate architectural solutions for the concept of a sustainable floating community with floating housing units adapted to climate change and sea level rise in Viet Nam.
Keywords: Climate change, floating houses, floating community, Viet Nam.
101 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kr. Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with specific focus on infrared (IR) and visible image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal radiation (IR) that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on a Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In surveying popular image fusion methods, we observe a recurring trade-off: while the high computational cost and complex processing steps of many image fusion algorithms provide accurate fused results, they also make those algorithms hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capacity. The methods presented in this paper therefore aim to offer good results with minimum time complexity.
Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.
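To make the MST fusion idea concrete, here is a minimal sketch of Laplacian-pyramid fusion with a max-absolute coefficient rule; the paper's region-based selection and consistency verification steps are omitted, the implementation language differs from the MATLAB original, and the file names are placeholders.

```python
# A minimal sketch of multi-scale transform (MST) fusion via a Laplacian
# pyramid with a max-absolute coefficient rule. Region-based selection and
# consistency verification are omitted for brevity; paths are placeholders.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
           for i in range(levels)]
    return lap + [gauss[-1]]  # band-pass levels plus coarsest approximation

def fuse(vis, ir, levels=4):
    pv, pi = laplacian_pyramid(vis, levels), laplacian_pyramid(ir, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pv[:-1], pi[:-1])]
    fused.append(0.5 * (pv[-1] + pi[-1]))        # average the base level
    out = fused[-1]
    for band in reversed(fused[:-1]):            # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=band.shape[1::-1]) + band
    return np.clip(out, 0, 255).astype(np.uint8)

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
ir = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("fused.png", fuse(vis, ir))
```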
100 Parameters Influencing Human-Machine Interaction in Hospitals
Authors: Hind Bouami, Patrick Millot
Abstract:
Handling the complexity of life-critical systems requires appropriate technology and the right human agent functions, such as knowledge, experience, and competence in problem prevention and solving. Human agents are involved in the management and control of human-machine system performance. Documenting human agents' situation awareness is crucial to support human-machine designers' decision-making. Knowledge about risks, critical parameters, and factors that can impact and threaten automation system performance should be collected using preventive and retrospective approaches. This paper aims to document operators' situation awareness through the analysis of automated organizations' feedback. The analysis of feedback from automated hospital pharmacies helps identify and control critical parameters influencing human-machine interaction, in order to enhance system performance and security. Our human-machine system evaluation approach was deployed in the Macon hospital center's pharmacy, which has been equipped with automated drug dispensing systems since 2015. The automation specifications relate to technical aspects, human-machine interaction, and human aspects. The evaluation of drug delivery automation performance in the Macon hospital center has shown that the performance of the automated activity depends on the performance of the automated solution chosen, and also on the control of systemic factors. In fact, 80.95% of the automation specifications related to the chosen Sinteco automated solution are met. The performance of the chosen automated solution accounts for 28.38% of the automation specifications performance in the Macon hospital center. The remaining systemic parameters involved in automation specifications performance need to be controlled.
Keywords: Life-critical systems, situation awareness, human-machine interaction, decision-making.
99 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling
Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow
Abstract:
Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data analytic challenges. One of these is the increased occurrence of missingness as study length increases, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic checks. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
Keywords: Dynamic modeling, missing data, multiple imputation, physiological measures.
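As background for how MI results are combined across imputed data sets, the following sketch illustrates Rubin's pooling rules in generic form. It is a language-neutral illustration with made-up numbers, not the dynr.mi()/MICE API.

```python
# Illustrative sketch of pooling multiple-imputation results across imputed
# data sets (Rubin's rules). Generic illustration with invented numbers; this
# is not the dynr.mi() or MICE interface.
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m point estimates and their within-imputation variances."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()          # pooled point estimate
    ubar = variances.mean()          # within-imputation variance
    b = estimates.var(ddof=1)        # between-imputation variance
    t = ubar + (1 + 1 / m) * b       # total variance
    return qbar, t

est, tot_var = pool_rubin([0.42, 0.47, 0.40, 0.45, 0.44],
                          [0.010, 0.012, 0.011, 0.010, 0.013])
print(f"pooled estimate = {est:.3f}, pooled SE = {tot_var**0.5:.3f}")
```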
98 Comparison of Composite Programming and Compromise Programming for Aircraft Selection Problem Using Multiple Criteria Decision Making Analysis Method
Authors: C. Ardil
Abstract:
In this paper, the comparison of composite programming and compromise programming for the aircraft selection problem is discussed using the multiple criteria decision analysis method. The decision-making process requires the prior definition and fulfillment of certain factors, especially when it comes to complex areas such as aircraft selection. The proposed technique gives more efficient results by extending composite programming and compromise programming, which are widely used in modeling multiple criteria decisions. The proposed model is applied to a practical decision problem for evaluating and selecting aircraft. The selection of an aircraft was made based on the proposed approach developed in the field of multiple criteria decision making. The model presented is solved using composite programming and compromise programming. The importance values of the weight coefficients of the criteria are calculated using the mean weight method. The evaluation and ranking of aircraft are carried out using the composite programming and compromise programming methods. In order to determine the stability of the model and the applicability of the developed approach, the paper analyzes its sensitivity: the first part involves changing the values of the coefficients λ and q. The second part of the sensitivity analysis relates to the application of the different multiple criteria decision making methods, composite programming and compromise programming. In addition, in the third part of the sensitivity analysis, the Spearman correlation coefficient of the obtained rankings was calculated, which confirms the applicability of all the proposed approaches.
Keywords: Composite programming, compromise programming, additive weighted model, multiplicative weighted model, multiple criteria decision making analysis, MCDMA, aircraft selection.
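For readers unfamiliar with the compromise programming machinery, the sketch below scores alternatives by their weighted Lp-distance to the ideal solution and shows the sensitivity to the metric parameter p. The aircraft names, criteria, and mean weights are invented for illustration, not data from the paper.

```python
# A minimal sketch of compromise programming: each alternative is scored by
# its weighted Lp-distance to the ideal solution. Decision matrix and weights
# are illustrative assumptions.
import numpy as np

x = np.array([[7.5, 820, 0.82],    # rows: aircraft A1..A3
              [8.1, 760, 0.78],    # cols: range score, cost, reliability
              [6.9, 900, 0.85]])
benefit = np.array([True, False, True])  # cost is "smaller is better"
w = np.full(3, 1 / 3)                    # mean weight method

# Normalize distances to the ideal (best) and anti-ideal (worst) values.
best = np.where(benefit, x.max(0), x.min(0))
worst = np.where(benefit, x.min(0), x.max(0))
d = np.abs(best - x) / np.abs(best - worst)

for p in (1, 2, np.inf):                 # sensitivity to the metric parameter p
    lp = (w * d).max(1) if p is np.inf else ((w * d) ** p).sum(1) ** (1 / p)
    print(f"p={p}: ranking (best first) ->", lp.argsort() + 1)
```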
97 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks
Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton
Abstract:
Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and the availability of qualified personnel. Thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in blade condition monitoring. The analysis provides useful information on the different modes of vibration and the natural frequencies by exploring the different shapes that can be taken up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and thus to a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid assessment at low computational cost, for which these traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that an artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes. This is achieved without the need for extensive signal analysis. The approach offers the advantages of classifying mode shapes in real time, simplicity of implementation, and prediction accuracy. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
Keywords: Modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition.
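A hedged sketch of the core mapping, natural frequencies to mode-shape classes via a small neural network, is given below. The frequencies, class labels, and network size are illustrative assumptions; in the study, the training data come from finite element modal analysis.

```python
# Illustrative sketch of classifying mode shapes from natural frequencies with
# a small MLP. The synthetic frequency clusters stand in for FE modal results.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Features: first three natural frequencies (Hz); labels: mode-shape class
# (e.g., 0 = first bending, 1 = second bending, 2 = first torsion), assumed.
centers = np.array([[120.0, 480.0, 750.0],
                    [135.0, 520.0, 710.0],
                    [110.0, 450.0, 800.0]])
X = np.vstack([c + rng.normal(0, 5, size=(200, 3)) for c in centers])
y = np.repeat([0, 1, 2], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```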
96 Synthesis of PVA/γ-Fe2O3 Used in Cancer Treatment by Hyperthermia
Authors: Sajjad Seifi Mofarah, S. K. Sadrnezhaad, Shokooh Moghadam, Javad Tavakoli
Abstract:
In recent years, a new combination treatment for cancer has been developed and studied, leading to significant advancements in the field of cancer therapy. Hyperthermia is a traditional therapy that, with the help of a medically approved level of heat generated by an alternating (AC) magnetic field, results in the destruction of cancer cells. This paper details the production of the spherical nanocomposite PVA/γ-Fe2O3 for medical purposes such as tumor treatment by hyperthermia. To reach a suitable and evenly distributed temperature, nanocomposites with core-shell morphology and a spherical form within a 100 to 200 nm size range were created using phase separation emulsion, in which magnetic γ-Fe2O3 nanoparticles with an average particle size of 20 nm, at fractions of 0.2, 0.4, 0.5, and 0.6, were covered by polyvinyl alcohol. The main concern in hyperthermia and heat treatment is achieving a desirable specific absorption rate (SAR), and one of the most critical factors affecting SAR is particle size. In this project, every effort was made to reach a minimal particle size and consequently a maximum SAR. Morphological analysis of the spherical structure of the PVA/γ-Fe2O3 nanocomposite was performed by SEM, and the chemical bonds created were studied by FTIR analysis. To investigate the particle size distribution of the magnetic nanocomposite, a DLS experiment was conducted. Moreover, to determine the magnetic behavior of the γ-Fe2O3 particles and the PVA/γ-Fe2O3 nanocomposite at different concentrations, a VSM test was conducted. To sum up, creating magnetic nanocomposites with a spherical morphology suitable for drug loading opens doors to new approaches: nanocomposites that simultaneously provide efficient heating and a controlled release of drug inside the magnetic field, characteristics that could significantly improve the recovery process in patients.
Keywords: Nanocomposite, hyperthermia, cancer therapy, drug release.
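For readers unfamiliar with SAR estimation, the snippet below applies the common initial-slope method, SAR = C · (dT/dt) · (m_s/m_np), where C is the specific heat of the suspension, m_s its mass, and m_np the mass of magnetic nanoparticles. The heating curve and masses are illustrative placeholders, not measurements from this study.

```python
# Initial-slope SAR estimate from a heating curve; all numbers are invented
# placeholders for illustration, not data from the study.
import numpy as np

t = np.array([0, 30, 60, 90, 120])            # s, time under the AC field
T = np.array([25.0, 26.4, 27.7, 28.9, 30.0])  # °C, suspension temperature

C = 4.18                 # J/(g*K), ~specific heat of a dilute aqueous suspension
m_s, m_np = 1.0, 0.005   # g of suspension and of gamma-Fe2O3 (assumed)

dTdt = np.polyfit(t[:3], T[:3], 1)[0]         # initial slope, K/s
sar = C * dTdt * (m_s / m_np)                 # W per gram of nanoparticles
print(f"SAR ≈ {sar:.1f} W/g")
```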
95 A Theory-Based Analysis on Implications of Democracy in Cambodia
Authors: Puthsodary Tat
Abstract:
Democracy has been categorically accepted and used in foreign and domestic policy agendas in the hope of peace, economic growth, and prosperity for more than 25 years in Cambodia. However, the country is now in the grip of dictatorship, human rights violations, and prospective economic sanctions. This paper examines different perceptions and experiences of democratic assistance. In this study, the author employs discourse theory, idealism, and realism as a theory-based methodology for debating and assessing the implications of democratization. Discourse theory is used to establish a platform for understanding discursive formations, the body of knowledge, and the games of truth of democracy. Idealist approaches give rational arguments for adopting key tenets that work well on the ground. In contrast, realism allows for some sweeping critiques of the utopian ideal and offers particular views on why Western hegemonic missions do not work well. From idealist views, the research finds that Cambodian people still believe that democracy is a prima facie universality for peace, growth, and prosperity. From a realist perspective, democratization is on the brink of death for three reasons. Firstly, there are tensions between Western and local discourses about democratic values and norms. Secondly, democratic tenets have been undermined by ruling party-controlled courts, corruption, structural oppression, and political patronage-based institutions. The third pitfall is partly associated with foreign aid dependency and geopolitical power struggles in the region. Finally, the study offers a precise mosaic of democratic principles that may be used to avoid a future geopolitical and economic crisis.
Keywords: Corruption, democracy, democratic principles, discourse theory, discursive formations, foreign aid dependency, games of truth, geopolitical and economic crisis, geopolitical power struggle, hegemonic mission, idealism, realism, utopian ideal.
94 Critical Assessment of Scoring Schemes for Protein-Protein Docking Predictions
Authors: Dhananjay C. Joshi, Jung-Hsin Lin
Abstract:
Protein-protein interactions (PPI) play a crucial role in many biological processes such as cell signalling, transcription, translation, replication, signal transduction, and drug targeting. Structural information about protein-protein interactions is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach for understanding protein-protein interactions. However, reliable prediction of protein-protein complexes remains an open problem. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, and HEX. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions, the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Accordingly, we have assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions in which the complexes have experimentally determined binding affinity data. We performed free energy calculations based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve the binding mode prediction. We used the well-studied barnase-barstar system to validate the parameters for the free energy calculations. In addition, the nlPBE-based free energy calculations were conducted for the cases predicted badly by ZDOCK and ZRANK. We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appear to be one of the promising approaches for improving the success rate of binding pose predictions.
Keywords: Protein-protein docking, protein-protein interaction, molecular mechanics energetics, Poisson-Boltzmann calculations.
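One plausible operationalization of the RFS idea, assumed here for illustration since the paper's exact definition is not reproduced above, is to record the rank of the first near-native pose per target and summarize across targets:

```python
# Hedged sketch of a "rank of first success" style evaluation for docking
# predictions. The hit patterns are invented; this is one plausible reading
# of RFS, not necessarily the paper's exact definition.
import numpy as np

# For each target: 1 marks a near-native pose, in scoring-function order.
ranked_hits = [
    np.array([0, 0, 1, 0, 1]),   # first success at rank 3
    np.array([1, 0, 0, 0, 0]),   # first success at rank 1
    np.array([0, 0, 0, 0, 1]),   # first success at rank 5
]

first_success = [int(np.argmax(h) + 1) if h.any() else None for h in ranked_hits]
print("rank of first success per target:", first_success)

# Success rate within the top-N predictions, for increasing N.
for n in (1, 3, 5):
    rate = np.mean([fs is not None and fs <= n for fs in first_success])
    print(f"success within top-{n}: {rate:.2f}")
```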
93 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract:
In recent years, object detection has gained much attention and is a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under illumination with shadow consideration has not been well solved yet, and this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Because image capture conditions vary, algorithms need to consider a variety of possible environmental factors, as colour information, lighting, and shadows differ from image to image. Existing methods mostly fail to produce appropriate results due to variations in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, to reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
Keywords: Image processing, illumination equalization, shadow filtering, object detection, colour models, image segmentation.
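A compressed sketch of this kind of pipeline, YCrCb conversion, luma equalization, adaptive (non-fixed) thresholding, and erosion/dilation before contour extraction, is shown below; the file name and parameter values are placeholders rather than the authors' exact algorithm.

```python
# Illustrative YCrCb detection pipeline with illumination equalization and
# morphological cleanup; parameters and input path are placeholders.
import cv2

img = cv2.imread("scene.png")                       # placeholder input
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

y_eq = cv2.equalizeHist(y)                          # reduce illumination bias
mask = cv2.adaptiveThreshold(y_eq, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY_INV, 31, 5)  # no fixed threshold

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.erode(mask, kernel, iterations=1)        # remove speckle noise
mask = cv2.dilate(mask, kernel, iterations=2)       # restore object regions

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:                    # keep sizeable regions only
        x, y0, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y0), (x + w, y0 + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", img)
```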
92 Rolling Element Bearing Diagnosis by Improved Envelope Spectrum: Optimal Frequency Band Selection
Authors: Juan David Arango, Alejandro Restrepo-Martinez
Abstract:
Rolling Element Bearing (REB) vibration diagnosis is of special interest given the variety of REBs and the wide need for these elements in industrial applications. The presence of a localized fault in an REB gives rise to a vibrational response characterized by the modulation of a carrier signal. The frequency content of the carrier signal (spectral frequency, f) is mainly related to the resonance frequencies of the REB. This carrier signal is modulated by another signal, governed by the periodicity of the fault impact (cyclic frequency, α). In this sense, the REB fault vibration response gives rise to a second-order cyclostationary signal. Second-order cyclostationary signals can be represented in a bi-spectral map, where the spectral coherence (SCoh) is plotted against f and α. The Improved Envelope Spectrum (IES) is a useful approach for REB fault diagnosis; it is obtained by integrating the SCoh over a predefined bandwidth on the f axis. Approaches to selecting the f-bandwidth have recently been proposed based on a metric that evaluates the magnitude of the IES at the fault characteristic frequencies. This metric is represented in a 1/3-binary tree as a function of the frequency bandwidth and centre, and the optimal frequency band is selected from this tree. However, advantages have been observed when the metric is changed, which in fact tends to dictate a different optimal f-bandwidth and so improves the IES representation. This paper evaluates the behaviour of the IES under a different metric optimization. The metric is based on the sample correlation coefficient, rewarding high peaks at the selected frequencies while penalizing high peaks in their neighbourhoods. Preliminary results indicate an improvement in the signal-to-noise ratio (SNR) in around 86% of the samples analysed, which belong to the IMS database.
Keywords: Sample Correlation IESFOgram, cyclostationary analysis, improved envelope spectrum, IES, rolling element bearing diagnosis, spectral coherence.
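As background, the sketch below computes a plain envelope spectrum (band-pass filter, Hilbert envelope, FFT) for a simulated bearing fault; the sampling rate, resonance band, and fault frequency are assumptions. The paper's IES additionally integrates spectral coherence over an optimized band, which is not reproduced here.

```python
# Plain envelope spectrum for a simulated bearing fault signal. All signal
# parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 20_000                                   # Hz, sampling rate (assumed)
t = np.arange(0, 1.0, 1 / fs)
bpfo = 107.0                                  # Hz, assumed fault frequency
carrier = np.sin(2 * np.pi * 3_000 * t)       # resonance "carrier"
impacts = (np.sin(2 * np.pi * bpfo * t) > 0.99).astype(float)
x = impacts * carrier + 0.5 * np.random.default_rng(2).normal(size=t.size)

b, a = butter(4, [2_500, 3_500], btype="bandpass", fs=fs)
env = np.abs(hilbert(filtfilt(b, a, x)))      # demodulated envelope
env -= env.mean()

spec = np.abs(np.fft.rfft(env)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 50) & (freqs < 500)           # search near the fault frequency
peak = freqs[band][np.argmax(spec[band])]
print(f"dominant envelope frequency ≈ {peak:.1f} Hz (expected ≈ {bpfo} Hz)")
```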
91 Complex Network Approach to International Trade of Fossil Fuel
Authors: Semanur Soyyiğit Kaya, Ercan Eren
Abstract:
Energy plays a prominent role in the development of nations. Countries that have energy resources also have strategic power in the international trade of energy, since energy is essential for all stages of production in an economy. Thus, it is important for countries to analyze the weaknesses and strengths of the system. International trade, in turn, is one of the fields analyzed as a complex network via network analysis. Complex network analysis is a tool for studying complex systems with heterogeneous agents and interactions between them. A complex network consists of nodes and the interactions between these nodes. In complex systems, the aggregate properties that emerge from these interactions are (more or less) distinct from the sum of the parts. Thus, standard approaches to international trade are too superficial to analyze such systems, whereas network analysis provides a way to analyze international trade as a network. In this network, countries constitute the nodes and trade relations (exports or imports) constitute the edges. It then becomes possible to analyze the international trade network in terms of higher-level indicators specific to complex networks, such as connectivity, clustering, assortativity/disassortativity, and centrality. In this study, international trade in crude oil and coal, two types of fossil fuel, is analyzed from 2005 to 2014 via network analysis. First, the networks are characterized by topological parameters such as density, transitivity, and clustering. Afterwards, the fit to a Pareto distribution is analyzed via the Kolmogorov-Smirnov test. Finally, the weighted HITS algorithm is applied to the data as a centrality measure to determine the real prominence of countries in these trade networks. The weighted HITS algorithm is a strong tool for analyzing the network, ranking countries with regard to the prominence of their trade partners. We have calculated both an export centrality and an import centrality by applying the w-HITS algorithm to the data. As a result, the impacts of the trading countries are presented in terms of these higher-level indicators.
Keywords: Complex network approach, fossil fuel, international trade, network theory.
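To illustrate the weighted HITS computation, the sketch below runs the standard power iteration on a small invented export matrix: hub scores rank countries as exporters and authority scores as importers.

```python
# Weighted HITS by power iteration on a trade matrix W, where W[i, j] is the
# export flow from country i to country j. The three-country matrix is
# invented for illustration.
import numpy as np

W = np.array([[0.0, 5.0, 2.0],    # exports from A to (A, B, C)
              [1.0, 0.0, 7.0],    # exports from B
              [3.0, 0.5, 0.0]])   # exports from C

h = np.ones(len(W))               # hub (export-side) scores
for _ in range(100):              # power iteration until convergence
    a = W.T @ h                   # authorities: weighted sum of in-flows
    a /= np.linalg.norm(a)
    h = W @ a                     # hubs: weighted sum of authority targets
    h /= np.linalg.norm(h)

for name, sh, sa in zip("ABC", h, a):
    print(f"{name}: export (hub) = {sh:.3f}, import (authority) = {sa:.3f}")
```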
90 Spatial Planning and Tourism Development with Sustainability Model of the Territorial Tourist with Land Use Approach
Authors: Mehrangiz Rezaee, Zabih Charrahi
Abstract:
In the last decade, with the increase in tourism destinations and tourism growth, we are witnessing the widespread impacts of tourism on the economy, environment, and society. Tourism and its related economy are now undergoing a transformation, and as one of the key pillars of business economics, tourism plays a vital role in the world economy. Activities related to tourism, and the provision of services appropriate to it in an area, like many economic sectors, require suitable conditions at their origin. Given the importance of the tourism industry and the tourism potential of Yazd province in Iran, it is necessary to use a proper procedure for prioritizing different areas for sound and efficient planning. One of the most important goals of planning is foresight and the creation of balanced development across different geographical areas. This process requires an accurate study of the areas, of their potential and actual capacities, as well as evaluation and understanding of the relationships between the indicators affecting the development of the region. At the global and regional level, the development of tourist resorts and the proper distribution of tourism destinations are needed to counter environmental impacts and risks. The main objective of this study is the sustainable development of suitable tourism areas. Given that tourism activities in different territorial areas require operational zoning, this study deals with the evaluation of territorial tourism using concepts such as land use, suitability, and sustainable development. It is essential to understand the structure of tourism development and the spatial development of tourism using land use patterns, spatial planning, and sustainable development. Tourism spatial planning implements different approaches. However, the development of tourism, as well as its spatial development, is complex, since tourist activities can be carried out in different areas with different purposes. Multipurpose areas are of great importance for tourism because they determine tourism flows. Therefore, by studying the determination of tourism suitability in relation to spatial development, this paper develops a model that describes the characteristics of tourism and thereby makes it possible to plan tourism spatial development. The results of this research determine the suitability of multi-functional territorial tourism development in line with the spatial planning of tourism.
Keywords: Land use change, spatial planning, sustainability, territorial tourist, Yazd.
89 Determination of Post-Failure Characteristic Behaviour of Rocks under Conventional Method Based on the Mechanism of Rock Deformation Process
Authors: Victor Abioye Akinbinu
Abstract:
This work studies the post-failure characteristic behaviour of rocks and techniques for controlling the post-failure regime, based on the mechanism of the rock deformation process. It is impossible to determine the post-failure regime of rocks using conventional laboratory testing equipment, because most testing machines are soft and therefore no information can be obtained after the peak load. Stress-strain deformation tests were conducted using both the conventional method and an unconventional method (a closed-loop servo-controlled testing machine) in accordance with the ISRM standard. Normalised pre-failure curves were constructed to show the stages in the deformation process. The first type exhibits Class I behaviour and progresses to Class II for low-strength, soft, brittle rocks. The second type shows entirely Class II characteristic behaviour. The third type is extremely brittle under axial loading and results in explosive failure, so its class could not be determined. The difficulty in obtaining the post-failure curves increases as the total volumetric strain approaches a positive value. The use of normalised pre-failure curves enables identification of an additional type of deformation process with a very brittle response under axial loading; testing this third type without confinement could cause equipment damage. Identifying the deformation process and rock class using conventional tests could therefore guide personnel conducting tests on closed-loop servo-controlled systems, so that rocks with the third type of deformation process are tested safely and without equipment damage. This has also improved our understanding of total specimen failure and of the brittleness of rocks (e.g., brittle for Class II and less brittle or ductile for Class I).
Keywords: Closed-loop servo-controlled system, conventional testing equipment, deformation process, post-failure, pre-failure normalised curves, rock classes.
88 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm
Authors: A. El Harraj, N. Raissouni
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. Many background subtraction approaches have been suggested, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and focus mainly on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: Video surveillance, background subtraction, Contrast Limited Adaptive Histogram Equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes.
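A condensed sketch of such a pipeline using standard OpenCV building blocks is shown below; OpenCV's MOG2 subtractor stands in for the per-channel K=5 GMM, and the video path and parameters are placeholders rather than the authors' implementation.

```python
# Illustrative CLAHE + GMM background subtraction loop with morphological
# cleanup. Parameters and input path are placeholders.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")          # placeholder input
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_eq = clahe.apply(gray)                    # limit per-pixel intensity gain
    fg = bg.apply(frame_eq)                         # GMM foreground mask
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow label
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)        # erosion then dilation
    cv2.imshow("foreground", fg)
    if cv2.waitKey(1) == 27:                        # Esc to quit
        break
cap.release()
```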
87 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network
Authors: Abdulaziz Alsadhan, Naveed Khan
Abstract:
In recent years, intrusions on computer networks have become a major security threat. Hence, it is important to impede such intrusions. The hindrance of such intrusions relies entirely on their detection, which is the primary concern of any security tool such as an intrusion detection system (IDS). Therefore, it is imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance, which can be improved by increasing the accurate detection rate and reducing false positives. Existing intrusion detection techniques are limited by their use of the raw dataset for classification: the classifier may become confused by redundancy, resulting in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform raw features into a principal feature space and select features based on their sensitivity, with eigenvalues used to determine sensitivity. To refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification, Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used, due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) Cup dataset was considered as the benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms the existing approaches and has the capability to minimize the number of features and maximize the detection rates.
Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP).
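The feature-reduction-plus-classification stage can be sketched as below with PCA followed by an SVM; the PSO/greedy feature search and the actual KDD'99 loading are omitted, and synthetic data stands in for the real dataset.

```python
# Bare-bones PCA -> SVM pipeline for intrusion classification. Synthetic data
# is a stand-in for KDD'99; the feature-search stage is omitted.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=41, n_informative=12,
                           random_state=0)        # 41 features, like KDD'99
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=12),       # keep most sensitive components
                      SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("detection accuracy:", model.score(X_te, y_te))
```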
86 Emotion Regulation: An Exploratory Cross-Sectional Study on the Change and Grow Therapeutic Model
Authors: Eduardo da Silva, Tânia Caetano, Jessica B. Lopes
Abstract:
Emotion dysregulation has been linked to psychopathology in general and, in particular, to substance abuse and other addiction-related disorders, such as eating disorders, impulse control disorders, and gambling. It has been proposed that lessening difficulties in emotion regulation can have a significant positive impact on the treatment of these disorders. The present study explores the association between progress in the Change & Grow® therapeutic model (five stages of treatment) and the decrease in difficulties related to emotion regulation. The Change & Grow® model has five stages of treatment corresponding to the model's five principles (Truth, Acceptance, Gratitude, Love, and Responsibility) and incorporates different therapeutic approaches, such as positive psychology, cognitive and behavioral therapy, and third-generation therapies. The main objective is to understand the impact of the presented therapeutic model on difficulties in emotion regulation in patients with addiction-related disorders. The exploratory study has a cross-sectional design. Participants were 44 Portuguese patients (15 women and 29 men) in the residential Villa Ramadas International Treatment Centre. The instrument used was the Portuguese version of the Difficulties in Emotion Regulation Scale (DERS), which measures six dimensions of emotion regulation (Strategies, Non-acceptance, Awareness, Impulse, Goals, and Clarity). The mean rank scores for both the DERS total score and the Impulse subscale showed statistically significant differences according to Stage of Treatment/Principles. Furthermore, Stage of Treatment/Principles was negatively correlated with the scores of the Non-acceptance and Impulse subscales, as well as the DERS total score. The results indicate that the Change & Grow® model seems to lessen patients' difficulties in emotion regulation. The Impulse dimension showed the greatest effect, which supports the well-known relevance of impulse control, or related difficulties, in addiction-related disorders.
Keywords: Addiction, Change & Grow®, emotion regulation, psychopathology.
85 Crossover Memories and Code-Switching in the Narratives of Arabic-Hebrew and Hebrew-English Bilingual Adults in Israel
Authors: Amani Jaber-Awida
Abstract:
This study examines two bilingual phenomena in the narratives of Arabic-Hebrew and Hebrew-English bilingual adults in Israel: crossover (CO) memories and code-switching (CS). The study examined these phenomena in the context of autobiographical memory, using a cue word technique. Student experimenters held two sessions in the homes of the participants. In separate language sessions, the participant was asked to look at each of 16 cue words and then state a concrete memory. After stating the memory, participants reported whether the memory was in the same language as the experiment session or in a different one. Memories were classified as 'crossovers' (CO) or 'same language' (SL) according to participants' self-reports. Participants were also required to elaborate on the setting, the interlocutors, and other languages involved in the specific memory. Beyond replicating the cuing-technique procedure, one memory from a specific lifespan period was chosen per participant, and the participant was required to provide further details about it. For these more detailed memories, a CS count was conducted. Both bilingual groups confirmed the reminiscence bump phenomenon, retrieving more memories from the 10-30 age period. CO memories prevailed in second language (L2) sessions, while same language memories were more abundant in first language (L1) sessions. Higher CS frequency was found in L2 sessions. Finally, as predicted, 'individual' CS was prevalent in L2 sessions, but 'community-based' CS was not higher in L1 sessions. The two bilingual measures in this study, crossovers and CS, come from different research traditions: the former from an experimental paradigm in the psychology of autobiographical memory based on self-reported judgments, the latter a behavioral measure from linguistics. This merger of approaches offers new insight into the field of bilingual autobiographical memory. In addition, the study attempts to shed light on the motivations for CS, beginning with Walters' SPPL model and concluding with a distinction between 'community-based' and individual motivations.
Keywords: Autobiographical memory, code-switching, crossover memories, reminiscence bump.
84 Thermodynamic Evaluation of Coupling APR1400 with a Thermal Desalination Plant
Authors: M. Gomaa Abdoelatef, Robert M. Field, Yong-Kwan Lee
Abstract:
The growing human population has placed increased demands on water supplies and spurred heightened interest in desalination infrastructure. Key elements of the economics of desalination projects are their thermal and electrical inputs. With growing concerns over the use of fossil fuels to (indirectly) supply these inputs, coupling desalination with nuclear power production represents a significant opportunity. Individually, nuclear and desalination technologies have long histories and are relatively mature. Among desalination technologies, Reverse Osmosis (RO) has the lowest energy inputs. However, the economically driven output quality of the water produced by RO, which uses only electrical inputs, is lower than the output water quality of thermal desalination plants. Therefore, modern desalination projects consider coupling RO with thermal desalination technologies (MSF, MED, or MED-TVC), with their attendant steam inputs, to permit blending to produce various qualities of water. A large nuclear facility is well positioned to dispatch large quantities of both electrical and thermal power. This paper considers the supply of thermal energy to a large desalination facility and examines the heat balance impact on the nuclear steam cycle. The APR1400 nuclear plant is selected as prototypical from both a capacity and a turbine cycle heat balance perspective to examine the steam supply and the impact on electrical output. Extraction points and quantities of steam are considered parametrically, along with various types of thermal desalination technologies, to form the basis for further evaluations of economically optimal approaches to interfacing nuclear power production with desalination projects. In our study, the thermodynamic evaluation is executed with DE-TOP, an IAEA-sponsored program. DE-TOP can analyze power generation systems coupled to desalination plants through various steam extraction positions, taking into consideration the isolation loop between the nuclear and thermal desalination facilities (i.e., for radiological isolation).
Keywords: APR1400, cogeneration, desalination, DE-TOP, IAEA, MED, MED-TVC, MSF, RO.
83 Juxtaposing South Africa's Private Sector and Its Public Service Regarding Innovation Diffusion, to Explore the Obstacles to E-Governance
Authors: Petronella Jonck, Freda van der Walt
Abstract:
Despite the benefits of innovation diffusion in the South African public service, implementation thereof seems to be problematic, particularly with regard to e-governance, which would enhance the quality of service delivery, especially its accessibility, choice, and mode of operation. This paper reports on differences between the public service and the private sector in terms of innovation diffusion. Innovation diffusion is investigated to explore identified obstacles that are hindering successful implementation of e-governance. The research inquiry is underpinned by the diffusion of innovation theory, which is premised on the assumption that innovation has a distinct channel, time, and mode of adoption within an organisation. A comparative thematic document analysis was conducted to investigate organisational differences with regard to innovation diffusion. A similar approach has been followed in other countries, where the same conceptual framework has been used to guide document analysis in studies in both the private and the public sectors. As per the recommended conceptual framework, three organisational characteristics were emphasised, namely the external characteristics of the organisation, the organisational structure, and the inherent characteristics of the leadership. The results indicated that the main difference in the external characteristics lies in the focus and clientele of the private sector. With regard to organisational structure, private organisations have veto power, which is not the case in the public service. Regarding leadership, similarities were observed in social and environmental responsibility and employees' attitudes towards immediate supervision. Differences identified included risk taking, the adequacy of leadership development, organisational approaches to motivation and involvement in decision making, and leadership style. Due to the organisational differences observed, it is recommended that differentiated strategies be employed to ensure effective innovation diffusion, and ultimately e-governance. It is further recommended that the results of this research be used to stimulate discussion on ways to improve collaboration between the mentioned sectors, to capitalise on the benefits of each sector.
Keywords: E-governance, ICT, innovation diffusion, comparative analysis.
82 Time Temperature Dependence of Long Fiber Reinforced Polypropylene Manufactured by Direct Long Fiber Thermoplastic Process
Authors: K. A. Weidenmann, M. Grigo, B. Brylka, P. Elsner, T. Böhlke
Abstract:
In order to reduce fuel consumption, the weight of automobiles has to be reduced. Fiber reinforced polymers offer the potential to reach this aim because of their high stiffness-to-weight ratio. Additionally, the use of fiber reinforced polymers in automotive applications has to allow for economical large-scale production. In this regard, long fiber reinforced thermoplastics made by direct processing offer both mechanical performance and processability in injection moulding and compression moulding. The work presented in this contribution deals with long glass fiber reinforced polypropylene directly processed in compression moulding (D-LFT). For use in automotive applications, both the temperature and the time dependency of the material properties have to be investigated to fulfill performance requirements during a crash and to meet the demands of service temperatures ranging from -40 °C to 80 °C. To consider the influence of both temperature and time, quasistatic tensile tests were carried out at different temperatures. These tests were complemented by high-speed tensile tests at different strain rates. As expected, an increase in strain rate results in an increase of the elastic modulus, which correlates with an increase of stiffness with decreasing service temperature. The results are in good accordance with results determined by dynamic mechanical analysis within the range of 0.1 to 100 Hz. The experimental results from the different testing methods were grouped and interpreted using different time-temperature shift approaches. In this regard, the Williams-Landel-Ferry approach and the kinetics-based Arrhenius approach were used. As the theoretical shift factor follows an arctan function, an empirical approach was also taken into consideration. It could be shown that this empirical approach best describes the time-temperature superposition for glass fiber reinforced polypropylene manufactured by D-LFT processing.
Keywords: Composite, long fiber reinforced thermoplastics, mechanical properties, dynamic mechanical analysis, time temperature superposition.
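For reference, the snippet below evaluates WLF and Arrhenius shift factors; the reference temperature, WLF constants, and activation energy are illustrative placeholders, not fitted values from the study (and the WLF form is normally applied only above the glass transition).

```python
# Time-temperature shift factors via the WLF and Arrhenius equations.
# Constants are illustrative placeholders, not fitted values.
import numpy as np

T_ref = 23.0                 # °C, reference temperature (assumed)
C1, C2 = 17.4, 51.6          # "universal" WLF constants, illustrative
Ea, R = 80e3, 8.314          # J/mol activation energy (assumed), gas constant

def log_aT_wlf(T):
    # log10 aT = -C1 (T - Tref) / (C2 + T - Tref); valid above Tg only
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def log_aT_arrhenius(T):
    # log10 aT = Ea / (ln(10) R) * (1/T - 1/Tref), temperatures in kelvin
    return (Ea / (np.log(10) * R)) * (1 / (T + 273.15) - 1 / (T_ref + 273.15))

for T in (0.0, 23.0, 50.0, 80.0):
    print(f"T={T:5.1f} °C: log aT (WLF) = {log_aT_wlf(T):7.2f}, "
          f"log aT (Arrhenius) = {log_aT_arrhenius(T):6.2f}")
```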
81 Potential of Native Microorganisms in Tagus Estuary
Authors: Ana C. Sousa, Beatriz C. Santos, Fátima N. Serralha
Abstract:
The Tagus estuary is heavily affected by industrial and urban activities, making bioremediation studies crucial for environmental preservation. Fuel contamination in the area can arise from various anthropogenic sources, such as oil spills from shipping, fuel storage and transfer operations, and industrial discharges. These pollutants can cause severe harm to the ecosystem and to the organisms, including humans, that inhabit it. Nonetheless, there are always natural organisms with the ability to resist these pollutants and transform them into non-toxic or harmless substances, which defines the process of bioremediation. Exploring the microbial communities existing in soil and their capacity to break down hydrocarbons has the potential to enhance the development of more efficient bioremediation approaches. The aim of this investigation was to explore the existence of hydrocarbonoclastic microorganisms at six locations within the Tagus estuary, three on the north bank (Trancão River, Praia Fluvial do Cais das Colinas, and Praia de Algés) and three on the south bank (Praia Fluvial de Alcochete, Praia Fluvial de Alburrica, and Praia da Trafaria). At all studied locations, native microorganisms of the genus Pseudomonas were identified. The bioremediation rate of common hydrocarbons like gasoline, hexane, and toluene was assessed using the redox indicator 2,6-dichlorophenolindophenol (DCPIP). Effective hydrocarbon-degrading bacterial strains were identified in all analyzed areas, despite adverse environmental conditions. The highest bioremediation rates were achieved for gasoline (68%) in Alburrica, hexane (65%) in Algés, and toluene (79%) in Algés. In general, the bacteria efficiently degraded the hydrocarbons added to the culture medium, with aerobic biodegradation proceeding at higher rates. These findings underscore the need for further in situ studies to better understand the relationship between native microbial communities and the potential for pollutant degradation in soil.
Keywords: Biodegradability rate, hydrocarbonoclastic microorganisms, soil bioremediation, Tagus estuary.
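The DCPIP assay quantifies biodegradation through decolourization of the indicator; a tiny sketch using the common fractional form rate = (A0 - At)/A0 × 100 is given below, with absorbance values invented to reproduce rates of the reported magnitude.

```python
# Biodegradation rate from DCPIP absorbance decolourization. Absorbance
# readings are illustrative, not measurements from this study.
a0 = 0.62                 # initial DCPIP absorbance (~600 nm), abiotic control
readings = {"gasoline": 0.20, "hexane": 0.22, "toluene": 0.13}

for fuel, at in readings.items():
    rate = (a0 - at) / a0 * 100   # fractional decolourization in percent
    print(f"{fuel}: biodegradation ≈ {rate:.0f}%")
```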
80 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory
Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock
Abstract:
Detecting subjectively biased statements is a vital task. This kind of bias, when present in text or other forms of information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks, such as sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text would significantly boost research in the above-mentioned areas. It would also come in handy for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. Machine learning can solve complex AI problems of this kind, making it a good fit for subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor as well as an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), which was compiled from Wikipedia edits that removed various biased instances from sentences, as a benchmark dataset, on which we also compare our model with existing approaches. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
Keywords: Subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing.
79 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories
Authors: Haj Najafi Leila, Tehranizadeh Mohsen
Abstract:
Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized to be an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering two distinct limiting states, independent or perfectly dependent, for component damage states; however, to the best of our knowledge, no procedure is available to account for loss dependencies at the story level. This paper presents a method, termed the "modal cost superposition method," for decoupling story damage costs under earthquake ground motions. The method is formulated through closed-form differential equations relating damage cost to engineering demand parameters, which are solved as a coupled system over all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness." Costs are treated as probabilistic variables with specified medians and standard deviations and an assumed probability distribution. To supplement the proposed procedure and to demonstrate the straightforwardness of its application, a benchmark study has been conducted. Acceptable agreement is demonstrated between the damage costs estimated by the proposed modal approach and by the frequently used stochastic approach for the entire building; at the story level, however, employing a single modification factor to incorporate occurrence-probability dependencies between stories proves insufficient, because the degree of dependency between the damage costs of different stories varies considerably. A larger dependency contribution to the occurrence probability of loss can also be inferred from the closer agreement of loss results in the upper stories than in the lower ones, while truncating the set of cost modes retains an acceptable level of accuracy and avoids the time-consuming calculations that arise when a large number of cost modes is included.
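The paper's substituted matrices are derived from the story cost equations and are not reproduced in the abstract; purely to illustrate the mechanics of modal superposition with mode truncation that the method relies on, here is a generic numerical sketch with placeholder matrices (all values are invented for illustration):

```python
# Generic mechanics of modal superposition with mode truncation,
# using placeholder "substituted" mass and stiffness matrices M and K.
# The paper derives these from the story cost equations; the matrices
# below are illustrative only.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 2.0, 1.5])                    # placeholder "mass"
K = np.array([[ 900., -400.,    0.],
              [-400.,  700., -300.],
              [   0., -300.,  300.]])           # placeholder "stiffness"

# Generalized eigenproblem K @ phi = lam * M @ phi.
lam, phi = eigh(K, M)                           # modes sorted ascending

n_modes = 2                                     # truncate to the first modes
f = np.array([1.0, 0.5, 0.25])                  # placeholder load vector
response = np.zeros(3)
for j in range(n_modes):
    # Modal participation of mode j, then superpose its contribution.
    gamma = phi[:, j] @ f / (phi[:, j] @ M @ phi[:, j])
    response += gamma / lam[j] * phi[:, j]      # static modal contribution

print(response)   # truncated-mode approximation of the full solution
```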
Keywords: Dependency, story-cost, cost modes, engineering demand parameter.
78 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
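As an illustration of the chart construction, the following sketch computes HPD-style control limits from a beta-binomial posterior predictive distribution, assuming a Beta posterior for the CI rate obtained from past counts under a flat prior; the counts, coverage level, and monitoring sample size are illustrative, not those of the study.

```python
import numpy as np
from scipy.stats import betabinom

def hpd_limits(n: int, a: float, b: float, coverage: float = 0.99):
    """Highest-density set of the BetaBinomial(n, a, b) predictive pmf:
    accumulate the most probable outcomes until the target coverage
    is reached, then report the smallest and largest retained counts."""
    k = np.arange(n + 1)
    pmf = betabinom.pmf(k, n, a, b)
    order = np.argsort(pmf)[::-1]                      # most probable first
    keep = order[np.cumsum(pmf[order]) <= coverage]
    keep = np.sort(np.append(keep, order[len(keep)]))  # reach coverage
    return keep.min(), keep.max()

# Posterior from, e.g., 40 events in 500 past cases with a flat prior:
a, b = 1 + 40, 1 + 460
lcl, ucl = hpd_limits(n=100, a=a, b=b)   # limits for the next 100 cases
print(f"signal if events in the next 100 cases fall outside [{lcl}, {ucl}]")
```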
Keywords: Average run length, Bernoulli CUSUM chart, beta-binomial posterior predictive distribution, clinical indicator, health care organization, highest posterior density interval.
77 Development of Entrepreneurship in Industry on the Basis of Regulation of Transnational Production Chains in the Russian Arctic
Authors: E. N. Vetrova, L.V. Lapochkina, N. V. Nikulina
Abstract:
In the national economy, entrepreneurship plays the role of a buffer between the economy and policy, as it contributes to improving budget effectiveness and decreasing the economy's dependence on the state. Entrepreneurship in industry makes it possible to increase the added value formed in production chains and to decrease dependence on imports. Under the current circumstances, with sanctions being imposed, this is especially relevant for Russia and for the realization of projects in the Russian Arctic. However, the development of entrepreneurship in industry requires an enlightened state policy. The purpose of the research is to elaborate recommendations for improving the economic effectiveness of Arctic projects on the basis of conceptual proposals for the development of entrepreneurship in industry. The paper examines the role of the extractive industry in the Russian economy and demonstrates its raw-material character. An analysis of industrial production chains based on the concept of global value chains showed that the added value formed by Russian companies is low. A study of changes in the structure of the economy, based on systemic, statistical, and comparative analyses, revealed no positive structural changes over the period under consideration, which manifests the ineffectiveness of Russian industrial policy in general and within the Arctic region in particular. The authors identified problems in the formation and implementation of state industrial policy in the Arctic region and in the development of national entrepreneurship, and analyzed the shortcomings of current state policy toward Russian industry. On the basis of these studies, the authors formulated conceptual approaches to changing state policy in the Arctic. The basic idea is to substantiate focusing state regulation on the development of entrepreneurship in industry in the course of Russian Arctic exploration. At the same time, another problem is addressed: the development of the manufacturing industry in the southern regions of northwestern Russia. The criterion of effectiveness in this case is economic effectiveness.
Keywords: Entrepreneurship in industry, global value chains, government regulation, industrial policies, production chains in the Arctic region, economic effectiveness.
76 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is considerable. Impairment reflects the reduction of future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used for the delivery of free-of-charge services; consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of Non-Cash-Generating Assets, and IPSAS 26, Impairment of Cash-Generating Assets, have been designed with this specificity in mind. When measuring the impairment of assets, it is important to select the relevant methods. For measuring the impairment of non-cash-generating assets, IPSAS 21 recommends three methods: the Depreciated Replacement Cost Approach, the Restoration Cost Approach, and the Service Units Approach. Value in use of cash-generating assets (according to IPSAS 26) is measured as the discounted value of the cash flows to be received in the future. The article classifies public sector assets as non-cash-generating and cash-generating, and also deals with the factors that should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is placed on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how these methods are selected. The traditional and the expected cash flow approaches for calculating the discounted value are reviewed. The article also discusses the recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever measurement method is used, the presentation of realistic information regarding the value of assets should be ensured in financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis, and synthesis were used, and the research was carried out with a systemic approach. The research draws on international accounting standards and on theoretical research and publications by Georgian and foreign scientists.
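To illustrate the distinction between the two discounting approaches reviewed, here is a small numerical sketch: the traditional approach discounts a single best-estimate cash-flow series at a risk-adjusted rate, while the expected cash flow approach first probability-weights alternative scenarios and then discounts the expected series. All figures and rates are invented for illustration.

```python
def present_value(cash_flows, rate):
    """Discount a series of end-of-year cash flows at a single rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))

# Traditional approach: one best-estimate series, risk carried in the rate.
best_estimate = [100.0, 100.0, 100.0]
value_in_use_traditional = present_value(best_estimate, rate=0.08)

# Expected cash flow approach: probability-weighted scenarios, discounted
# at a rate that excludes the scenario risk already reflected in the flows.
scenarios = [([120.0] * 3, 0.3), ([100.0] * 3, 0.5), ([60.0] * 3, 0.2)]
expected = [sum(p * cfs[t] for cfs, p in scenarios) for t in range(3)]
value_in_use_expected = present_value(expected, rate=0.05)

print(round(value_in_use_traditional, 2), round(value_in_use_expected, 2))
```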
Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.
75 Demonstration of Land Use Changes Simulation Using Urban Climate Model
Authors: Barbara Vojvodikova, Katerina Jupova, Iva Ticha
Abstract:
Throughout their historical evolution, cities have always adapted their internal structure to the needs of society (for example, protective city walls of the classicist era lost their defensive function, became unnecessary, were demolished, and gave space to new features such as roads, museums, or parks). Today it is necessary to modify the internal structure of the city in order to minimize the impact of climate change on the population's environment. This article discusses results obtained with the Urban Climate model owned by VITO, produced as part of the European Union's Horizon project, grant agreement No 730004, Pan-European Urban Climate Services Climate-Fit city. The model was applied to changes in land use and land cover in cities in relation to urban heat islands (UHI), with the task of evaluating possible land use change scenarios against city requirements and ideas. Two pilot areas in the Czech Republic were selected: Ostrava and Hodonín. The paper demonstrates the application of the model to various possible future development scenarios and assesses their suitability or unsuitability depending on the associated temperature increase. Cities preparing to reconstruct public space are interested in eliminating proposals that would increase temperature stress as early as the assignment phase; if a type of design has already been evaluated as unsuitable, it can be excluded from the proposal phase. Therefore, especially when applying the model at the local level, at 1 m spatial resolution, it was necessary to show which types of proposals would create a significant temperature island if implemented; such proposals are considered unsuitable. The model shows that a building itself can create shaded space and thus contribute to reducing the UHI; if the protection of existing greenery is approached sensitively, new construction may not pose a significant problem, whereas more massive interventions that reduce existing greenery create new heat island space.
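As a toy illustration of the screening logic described, the following sketch compares a hypothetical modeled temperature grid for a proposed design with a baseline grid and flags the proposal when warming exceeds a threshold over a meaningful share of cells; the grids, threshold, and screening rule are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
baseline = 24.0 + rng.normal(0.0, 0.5, size=(100, 100))  # deg C, 1 m cells
scenario = baseline.copy()
scenario[40:60, 40:60] += 2.5        # e.g. paved area replacing greenery

delta = scenario - baseline
hot_cells = (delta > 1.5).mean()     # share of cells warming by >1.5 deg C
print(f"mean warming {delta.mean():.2f} deg C; "
      f"{hot_cells:.1%} of cells exceed the 1.5 deg C threshold")
if hot_cells > 0.01:                 # assumed screening rule
    print("proposal would create a significant heat island -> unsuitable")
```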
Keywords: Heat islands, land use, urban climate model.