Search results for: Agile Method
1202 Improving Spatiotemporal Change Detection: A High Level Fusion Approach for Discovering Uncertain Knowledge from Satellite Image Database
Authors: Wadii Boulila, Imed Riadh Farah, Karim Saheb Ettabaa, Basel Solaiman, Henda Ben Ghezala
Abstract:
This paper investigates the problem of tracking spatiotemporal changes in a satellite image through the use of Knowledge Discovery in Databases (KDD). The purpose of this study is to help a given user effectively discover interesting knowledge and then build prediction and decision models. Unfortunately, the KDD process for spatiotemporal data is always marked by several types of imperfections. In our paper, we take these imperfections into consideration in order to provide more accurate decisions. To achieve this objective, different KDD methods are used to discover knowledge in satellite image databases. Each method presents a different point of view on the spatiotemporal evolution of a query model (which represents an object extracted from a satellite image). In order to combine these methods, we use evidence fusion theory, which considerably improves the spatiotemporal knowledge discovery process and increases our belief in the spatiotemporal model change. Experimental results on satellite images representing the region of Auckland in New Zealand show an improvement in overall change detection as compared to classical methods.
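The evidence fusion the abstract refers to is commonly grounded in Dempster-Shafer theory. A minimal sketch of Dempster's rule of combination for two sources voting on a "change" vs. "no-change" frame might look like the following; the mass values are illustrative, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to masses)
    with Dempster's rule, normalizing out the conflict mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

C, N = frozenset({"change"}), frozenset({"no-change"})
CN = C | N  # the whole frame (ignorance)

# Two KDD methods' beliefs about the evolution of a query model
m1 = {C: 0.6, CN: 0.4}
m2 = {C: 0.5, N: 0.2, CN: 0.3}
fused = dempster_combine(m1, m2)
```

Combining the two sources concentrates mass on the "change" hypothesis beyond what either source assigns alone, which is the sense in which fusion "increases belief" in the model change.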
Keywords: Knowledge discovery in satellite databases, knowledge fusion, data imperfection, data mining, spatiotemporal change detection.
1201 Inverse Heat Conduction Analysis of Cooling on Run Out Tables
Authors: M. S. Gadala, Khaled Ahmed, Elasadig Mahdi
Abstract:
In this paper, we introduced a gradient-based inverse solver to obtain the missing boundary conditions based on the readings of internal thermocouples. The results show that the method is very sensitive to measurement errors, and becomes unstable when small time steps are used. The artificial neural networks are shown to be capable of capturing the whole thermal history on the run-out table, but are not very effective in restoring the detailed behavior of the boundary conditions. Also, they behave poorly in nonlinear cases and where the boundary condition profile is different. GA and PSO are more effective in finding a detailed representation of the time-varying boundary conditions, as well as in nonlinear cases. However, their convergence takes longer. A variation of the basic PSO, called CRPSO, showed the best performance among the three versions. Also, PSO proved to be effective in handling noisy data, especially when its performance parameters were tuned. An increase in the self-confidence parameter was also found to be effective, as it increased the global search capabilities of the algorithm. RPSO was the most effective variation in dealing with noise, closely followed by CRPSO. The latter variation is recommended for inverse heat conduction problems, as it combines the efficiency and effectiveness required by these problems.
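The particle swarm optimization the abstract evaluates can be illustrated with a basic global-best PSO minimizing a simple test function. This is the textbook variant, not the CRPSO/RPSO modifications the authors study, and all parameter values here are illustrative:

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best PSO minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (self-confidence) + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso(sphere)
```

The self-confidence parameter the abstract mentions corresponds to the cognitive coefficient `c1`: increasing it weights each particle's own best position more heavily, broadening the search.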
Keywords: Inverse Analysis, Function Specification, Neural Networks, Particle Swarm, Run Out Table.
1200 A Numerical Study of Seismic Response of Shallow Square Tunnels in Two-Layered Ground
Authors: Mahmoud Hassanlourad, Mehran Naghizadehrokni, Vahid Molaei
Abstract:
In this study, the seismic behavior of a shallow tunnel with a square cross section is investigated in a two-layered, elastic, heterogeneous environment using a numerical method. To do so, the FLAC finite difference software was used. The behavioral model of the ground and tunnel structure was assumed linear elastic. Dynamic load was applied to the model for 0.2 seconds from the bottom in the form of a square pulse with a maximum acceleration of 1 m/s². The interface between the two layers was considered at three different levels: the crest, middle, and bottom of the tunnel. The stiffness of the upper and lower layers was varied from 10 MPa to 1000 MPa. Deformation of the tunnel cross section due to dynamic load propagation, as well as the values of axial force and bending moment created in the tunnel structure, were examined in the three states mentioned above. The results of the analyses show that the heterogeneity of the environment, its stratification, the position of the interface of the two layers with respect to tunnel height, and the stiffness ratio of the two layers have significant effects on the value of bending moment, axial force, and distortion of the tunnel cross section.
Keywords: Dynamic analysis, shallow-buried tunnel, two-layered ground.
1199 Design and Testing of Nanotechnology Based Sequential Circuits Using MX-CQCA Logic in VHDL
Authors: K. Maria Agnes, J. Joshua Bapu
Abstract:
This paper presents the design and testing of nanotechnology-based sequential circuits using multiplexer conservative QCA (MX-CQCA) logic gates, which are easily testable using only two vectors. This method has great potential in the design of sequential circuits based on reversible conservative logic gates and also outperforms sequential circuits implemented in traditional gates in terms of testability. Reversible circuits are similar to usual logic circuits except that they are built from reversible gates. Designs of multiplexer conservative QCA logic based two-vector testable double edge triggered (DET) sequential circuits in the VHDL language are also presented here, which will also reduce complexity on the testing side. Other types of sequential circuits, such as D, SR, and JK latches, are also designed using this MX-CQCA logic gate. The objective behind the proposed design methodologies is to amalgamate arithmetic and logic functional units while optimizing key metrics such as garbage outputs, delay, area, and power. The proposed MX-CQCA gate outperforms other reversible gates in terms of complexity and delay.
Keywords: Conservative logic, Double edge triggered (DET) flip flop, majority voters, MX-CQCA gate, reversible logic, Quantum dot Cellular automata.
1198 Machine Vision System for Automatic Weeding Strategy in Oil Palm Plantation using Image Filtering Technique
Authors: Kamarul Hawari Ghazali, Mohd. Marzuki Mustafa, Aini Hussain
Abstract:
Machine vision is an application of computer vision to automate conventional work in industry, manufacturing, or any other field. Nowadays, people in the agriculture industry have embarked on research into the implementation of engineering technology in their farming activities. One of the precision farming activities that involves a machine vision system is automatic weeding strategy. An automatic weeding strategy in oil palm plantations could minimize the volume of herbicides sprayed on the fields. This paper discusses an automatic weeding strategy in oil palm plantations using a machine vision system for the detection and differential spraying of weeds. The implementation of the vision system involved the use of image processing techniques to analyze weed images in order to recognize and distinguish their types. An image filtering technique was used to process the images, along with a feature extraction method to classify the types of weed images. As a result, the image processing technique yields promising classification results for implementation in a machine vision system for an automated weeding strategy.
Keywords: Machine vision, automatic weeding strategy, filter, feature extraction.
1197 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Keywords: Active contour, Bayesian, echocardiographic image, feature vector.
1196 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption
Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif
Abstract:
Maintaining the factory default battery endurance rate over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While delivering on customers' unlimited expectations, developers are barely aware of efficient use of energy by the application itself. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present a few software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications in the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect data on mobile application power consumption, which was then analyzed against the 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code, and method lines have a direct relationship with the power consumption of a mobile application.
Keywords: Battery endurance, software metrics, mobile application, power consumption.
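Of the metrics found influential, McCabe cyclomatic complexity is easy to approximate statically. A rough sketch for Python source, counting branch points plus one (a simplification of the full control-flow-graph definition, and Python rather than the Android/Java setting of the study), might be:

```python
import ast

# AST node types that open an extra execution path
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity as number of decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    # each extra operand of an 'and'/'or' chain adds a path as well
    decisions += sum(len(node.values) - 1
                     for node in ast.walk(tree) if isinstance(node, ast.BoolOp))
    return decisions + 1

snippet = """
def classify(x):
    if x < 0 and x != -1:
        return "neg"
    for i in range(3):
        if i == x:
            return "small"
    return "other"
"""
```

For `snippet` this counts two `if` statements, one `for`, and one `and`, giving a complexity of 5.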
1195 Perceptions of Greenhouse Vegetable Growers Regarding Use of Biological Control Practices: A Case Study in Jiroft County, Iran
Authors: Hossein Shabanali Fami, Omid Sharifi, Javad Ghasemi, Mahtab Pouratashi, Mona Sadat Moghadasian
Abstract:
The main purpose of this study was to investigate the perception of greenhouse vegetable growers regarding the use of biological control practices during the growing season. The statistical population of the study included greenhouse vegetable growers in Jiroft county (N=1862). A sample of 137 vegetable growers was selected using a random sampling method. Data were collected via a questionnaire. The validity of the instrument was confirmed by the faculty members of the Department of Agricultural Development and Management at the University of Tehran. Cronbach's alpha was applied to estimate the reliability, which was high for the instrument. Data were analyzed using SPSS for Windows 13.5. The results revealed that greenhouse vegetable growers had a moderate level of perception regarding biological control practices. Levels of vegetable growers' perceptions regarding biological control practices differed on the basis of their academic qualifications as well as educational level and job. In addition, the results indicated that about 54.1% of the variation in vegetable growers' perceptions could be explained by variables such as awareness of biological control practices, knowledge of pests, annual production, and age.
Keywords: Greenhouse, biological control, biological agents, perception, vegetable grower.
1194 Role of Organic Wastewater Constituents in Iron Redox Cycling for Ferric Sludge Reuse in the Fenton-Based Treatment
Authors: J. Bolobajev, M. Trapido, A. Goi
Abstract:
The practical application of the Fenton-based treatment method for purification of water contaminated with organic compounds is limited mainly because of the large amount of ferric sludge formed during the treatment, where ferrous iron (Fe(II)) is used as the activator of the hydrogen peroxide oxidation processes. Reuse of ferric sludge collected from clarifiers to substitute for Fe(II) salts allows the total cost of Fenton-type treatment technologies to be reduced and the accumulation of hazardous ferric waste to be minimized. Dissolution of ferric iron (Fe(III)) from the sludge to the liquid phase at acidic pH and autocatalytic transformation of Fe(III) to Fe(II) by phenolic compounds (tannic acid, lignin, phenol, catechol, pyrogallol, and hydroquinone), added or present as water/wastewater constituents, were found to be essentially involved in the Fenton-based oxidation mechanism. The observed enhanced formation of highly reactive species (hydroxyl radicals) resulted in a substantial increase in organic contaminant degradation. Sludge reuse at acidic pH and in the presence of ferric iron reductants is a novel strategy in the application of Fenton-based treatment for purification of water contaminated with organic compounds.
Keywords: Ferric sludge reuse, ferric iron reductant, water treatment, organic pollutant.
1193 Lean Environmental Management Integration System (LEMIS) Framework Development
Authors: Puvanasvaran, A. P., Suresh V., N. Norazlin
Abstract:
The Lean Environmental Management Integration System (LEMIS) framework development integrates lean core elements with ISO 14001. Curiosity about the relationship between continuous improvement and the sustainability of lean implementation motivated this study of LEMIS. Characteristics of the ISO 14001 standard clauses and the core elements of lean principles are explored from past studies and literature reviews. A survey was carried out among ISO 14001 certified companies to examine continual improvement through implementation of the ISO 14001 standard. The study found that there is a significant and positive relationship between the lean principles of value, value stream, flow, pull, and perfection and the ISO 14001 requirements. LEMIS is significant in supporting continuous improvement and sustainability. The integration system can be implemented in any manufacturing company. It raises awareness of why organizations need to sustain their environmental management systems. In the meantime, the lean principles can be adapted to streamline the daily activities of the company. Throughout the study, it was shown that there is no sacrifice or trade-off between lean principles and ISO 14001 requirements. The framework developed in the study can be further simplified in the future, especially the method of crossing each sub-requirement of the ISO 14001 standard with the core elements of lean principles.
Keywords: LEMIS, ISO 14001, integration, framework.
1192 Modeling and Simulating Reaction-Diffusion Systems with State-Dependent Diffusion Coefficients
Authors: Paola Lecca, Lorenzo Dematte, Corrado Priami
Abstract:
The present models and simulation algorithms of intracellular stochastic kinetics are usually based on the premise that diffusion is so fast that the concentrations of all the involved species are homogeneous in space. However, recent experimental measurements of intracellular diffusion constants indicate that the assumption of a homogeneous well-stirred cytosol is not necessarily valid even for small prokaryotic cells. In this work a mathematical treatment of diffusion that can be incorporated in a stochastic algorithm simulating the dynamics of a reaction-diffusion system is presented. The movement of a molecule A from a region i to a region j of the space is represented as a first-order reaction A_i → A_j, where the rate constant k depends on the diffusion coefficient. The diffusion coefficients are modeled as functions of the local concentration of the solutes, their intrinsic viscosities, their frictional coefficients, and the temperature of the system. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the intrinsic reaction kinetics and diffusion dynamics. To demonstrate the method, simulation results for the reaction-diffusion system of chaperone-assisted protein folding in the cytoplasm are shown.
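The event selection described above follows the same logic as Gillespie's stochastic simulation algorithm, with diffusion hops treated as first-order reactions. A minimal two-compartment sketch, with an illustrative constant rate and no chemical reactions included, might look like this:

```python
import random

def simulate_diffusion(n_left, n_right, k, t_end, seed=0):
    """Gillespie-style simulation of molecules hopping between two
    regions, each hop modeled as a first-order reaction with rate k."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        # propensities of the two diffusion 'reactions'
        a_lr = k * n_left   # left -> right
        a_rl = k * n_right  # right -> left
        a_total = a_lr + a_rl
        if a_total == 0:
            break
        # waiting time to the next event is exponentially distributed
        t += rng.expovariate(a_total)
        if t > t_end:
            break
        # pick which event fires, proportionally to its propensity
        if rng.random() * a_total < a_lr:
            n_left, n_right = n_left - 1, n_right + 1
        else:
            n_left, n_right = n_left + 1, n_right - 1
    return n_left, n_right

left, right = simulate_diffusion(100, 0, k=1.0, t_end=5.0)
```

In the paper's setting k would itself be a function of local concentration, viscosity, friction, and temperature rather than a constant, and reaction events would compete with the diffusion events in the same waiting-time distribution.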
Keywords: Reaction-diffusion systems, diffusion coefficient, stochastic simulation algorithm.
1191 Building Information Modeling-Based Approach for Automatic Quantity Take-off and Cost Estimation
Authors: Lo Kar Yin, Law Ka Mei
Abstract:
Architectural, engineering, construction and operations (AECO) industry practitioners have been adapting well to the dynamic construction market through the fundamental training of their disciplines. As further triggered by the pandemic since 2019, great steps have been taken in virtual environments, and the best collaboration is strived for with project teams without boundaries. Using a Building Information Modeling-based approach and qualitative analysis, this paper reviews the quantity take-off (QTO) and cost estimation process through modeling techniques, in liaison with suppliers, fabricators, subcontractors, contractors, designers, consultants, and service providers in the construction industry value chain. The aim is automatic project cost budgeting, project cost control, and cost evaluation of design options for in-situ reinforced-concrete construction and Modular Integrated Construction (MiC) at the design stage, and of variation of works and cash flow/spending analysis at the construction stage as far as practicable, with a view to sharing the findings for enhancing mutual trust and co-operation among AECO industry practitioners. It is intended to foster development through a common prototype of the design and build project delivery method under NEC4 Engineering and Construction Contract (ECC) Options A and C.
Keywords: Building Information Modeling, cost estimation, quantity take-off, modeling techniques.
1190 A Simulation Method to Find the Optimal Design of Photovoltaic Home System in Malaysia, Case Study: A Building Integrated Photovoltaic in Putra Jaya
Authors: Riza Muhida, Maisarah Ali, Puteri Shireen Jahn Kassim, Muhammad Abu Eusuf, Agus G.E. Sutjipto, Afzeri
Abstract:
Over recent years, the number of building integrated photovoltaic (BIPV) installations for home systems has been increasing in Malaysia. The paper concerns an analysis, as part of current Research and Development (R&D) efforts, to integrate photovoltaics as an architectural feature of a detached house in the new satellite township of Putrajaya, Malaysia. The analysis was undertaken using calculation and simulation tools to optimize the performance of a BIPV home system. In this study, the simulation analysis was undertaken for selected bungalow units based on long-term recorded weather data for the city of Kuala Lumpur. The simulation and calculation were done with consideration of the PV panels' tilt and direction, shading effects, and economic considerations. A simulation of the performance of a grid-connected BIPV house in Kuala Lumpur was undertaken. This case study uses 60 PV modules with a power output of 2.7 kW, giving an average PV electricity output of 255 kWh/month.
Keywords: Building integrated photovoltaic, Malaysia, Simulation, panels' tilt and direction.
1189 The Effect of Board Composition and Ownership Concentration on Earnings Management: Evidence from IRAN
Authors: F. Rahnamay Roodposhti, S. A. Nabavi Chashmi
Abstract:
The role of corporate governance is to reduce the divergence of interests between shareholders and managers. Corporate governance is more useful when managers have an incentive to deviate from shareholders' interests. One example of management's deviation from shareholders' interests is the management of earnings through the use of accounting accruals. This paper examines the association between the corporate governance internal mechanisms of ownership concentration, board independence, and the existence of CEO-Chairman duality, and earnings management. Firm size and leverage are control variables. The population used in this study comprises firms listed on the Tehran Stock Exchange (TSE) between 2004 and 2008; the sample comprises 196 firms. The panel data method is employed as the technique to estimate the model. We find that there is a negative significant association between ownership concentration and board independence with earnings management, and a negative significant association between the existence of CEO-Chairman duality and earnings management. This study also found a positive significant association between the control variables (firm size and leverage) and earnings management.
Keywords: Earnings management, board independence, ownership concentration, corporate governance.
1188 Characterization and Geochemical Modeling of Cu and Zn Sorption Using Mixed Mineral Systems Injected with Iron Sulfide under Sulfidic-Anoxic Conditions I: Case Study of Cwmheidol Mine Waste Water, Wales, United Kingdom
Authors: D. E. Egirani, J. E. Andrews, A. R. Baker
Abstract:
This study investigates the sorption of Cu and Zn contained in natural mine wastewater, using mixed mineral systems under sulfidic-anoxic conditions. The mine wastewater was obtained from disused mine workings at Cwmheidol in Wales, United Kingdom. These contaminants flow into watercourses, including the River Rheidol, in which fishing activities exist. In an attempt to reduce the Cu-Zn levels of fish intake in the watercourses, single mineral systems and 1:1 mixed mineral systems of clay and goethite were tested with the mine wastewater for copper and zinc removal at variable pH. Modeling of hydroxyl complexes was carried out using the PHREEQC method. Reactions using a batch mode technique were conducted at room temperature. There were significant differences in the behavior of copper and zinc removal using mixed mineral systems when compared to single mineral systems. All mixed mineral systems sorb more Cu than Zn when tested with mine wastewater.
Keywords: Cu-Zn, hydroxyl complexes, kinetics, mixed mineral systems, reactivity.
1187 Effect of Rubber Treatment on Compressive Strength and Modulus of Elasticity of Self-Compacting Rubberized Concrete
Authors: I. Miličević, M. Hadzima Nyarko, R. Bušić, J. Simonović Radosavljević, M. Prokopijević, K. Vojisavljević
Abstract:
This paper investigates the effects of different treatment methods of rubber aggregates for self-compacting concrete (SCC) on compressive strength and modulus of elasticity. SCC mixtures with 10% replacement of fine aggregate with crumb rubber by total aggregate volume, and with different aggregate treatment methods, were investigated. The rubber aggregate was treated in three different ways: a dry process, water soaking, and NaOH treatment plus water soaking. Properties of SCC in the fresh and hardened states were tested and evaluated. Scanning electron microscope (SEM) analyses of three different SCC mixtures were made and discussed. It was observed that applying the proposed NaOH plus water soaking method resulted in the improvement of fresh and hardened concrete properties. It resulted in a more uniform distribution of rubber particles in the cement matrix, a better bond between rubber particles and the cement matrix, and higher compressive strength of SCC rubberized concrete.
Keywords: Compressive strength, modulus of elasticity, NaOH treatment, rubber aggregate, self-compacting rubberized concrete, scanning electron microscope analysis.
1186 Strongly Coupled Finite Element Formulation of Electromechanical Systems with Integrated Mesh Morphing using Radial Basis Functions
Authors: D. Kriebel, J. E. Mehner
Abstract:
The paper introduces a method to efficiently simulate the nonlinearly changing electrostatic fields occurring in micro-electromechanical systems (MEMS). Large deflections of the capacitor electrodes usually introduce nonlinear electromechanical forces on the mechanical system. Traditional finite element methods require a time-consuming remeshing process to capture exact results for this physical domain interaction. In order to accelerate the simulation process and eliminate the remeshing process, a formulation of a strongly coupled electromechanical transducer element is introduced which combines finite elements with an advanced mesh morphing technique using radial basis functions (RBF). The RBF allow large geometrical changes of the electric field domain while retaining high element quality of the deformed mesh. Coupling effects between the mechanical and electrical domains are directly included within the element formulation. Fringing field effects are described accurately by using traditional arbitrary shape functions.
Keywords: electromechanical, electric field, transducer, simulation, modeling, finite-element, mesh morphing, radial basis function
1185 Investigating Daylight Quality in Malaysian Government Office Buildings Through Daylight Factor and Surface Luminance
Authors: Mohd Zin Kandar, Mohd Sabere Sulaiman, Yong Razidah Rashid, Dilshan Remaz Ossen, Aminatuzuhariah M. Abdullah, Lim Yaik Wah, Mansour Nikpour
Abstract:
In recent years, there has been increasing interest in using daylight to save energy in buildings. In tropical regions, daylighting is always an energy saver. On the other hand, daylight provides visual comfort. According to the standards, many criteria should be taken into consideration in order to achieve daylight utilization and visual comfort. The current standard in Malaysia, MS 1525, does not provide sufficient guidelines. Hence, more research is needed on daylight performance. If architects do not consider daylight design, it not only causes inconvenience in working spaces but also causes more energy consumption as well as environmental pollution. This research surveyed daylight performance in five selected office buildings from different areas of Malaysia through an experimental method. Several parameters of daylight quality, such as daylight factor, surface luminance, and surface luminance ratio, were measured in different rooms in each building. The results of this research demonstrate that most of the buildings were not designed for daylight utilization. Therefore, it is very important that architects follow the daylight design recommendations to reduce the consumption of electric power for artificial lighting while a sufficient quality of daylight is available.
Keywords: Daylight factor, Field measurement, Daylighting quality, Tropical
1184 Contribution of Electrochemical Treatment in Treating Textile Dye Wastewater
Authors: Usha N. Murthy, Rekha H. B., Mahaveer Devoor
Abstract:
The introduction of more stringent pollution regulations, together with financial and social pressures for sustainable development, has pushed toward limiting the volumes of industrial and domestic effluents discharged into the environment, as well as toward increasing the efforts within research and development of new or more efficient wastewater treatment technologies. Considering both discharge volume and effluent composition, wastewater generated by the textile industry is rated as the most polluting among all industrial sectors. The pollution load is mainly due to spent dye baths, which are composed of unreacted dyes, dispersing agents, surfactants, salts, and organics. In the present investigation, the textile dye wastewater was characterized by high color, chemical oxygen demand (COD), total dissolved solids (TDS), and pH. The electrochemical oxidation process with four plate electrodes was carried out at five different current intensities, of which 0.14 A achieved the maximum percentage removal of COD (75%) and color (83%). The COD removal rate in kg COD/h/m² decreases with an increase in the current intensity, while the energy consumption increases. Hence, textile dye wastewater can be effectively pretreated by the electrochemical oxidation method, where the process limits objectionable color while leaving the COD associated with organics for natural degradation, thus causing a sustainable reduction in pollution load.
Keywords: Electrochemical treatment, COD, color.
1183 Implementation-Oriented Discussion for Historical and Cultural Villages’ Conservation Planning
Authors: Xing Zhang
Abstract:
Since the State Council of China issued the Regulations on the Conservation of Historical Cultural Towns and Villages in 2008, the formulation of conservation planning has been carried out in national, provincial, and municipal historical and cultural villages for protection needs, which provides a legal basis for the inheritance of historical culture and the protection of historical resources. Although the quantity and content of conservation planning are continually increasing, its implementation and application are still ambiguous. To solve the aforementioned problems, this paper explores methods to enhance the implementation of conservation planning from the perspective of planning formulation. Specifically, the technical framework of "overall objectives planning - sub-objectives planning - zoning guidelines - implementation by stages" is proposed to implement the planning objectives in different classifications and stages. Then, combined with details of the Qiqiao historical and cultural village conservation planning project in Ningbo, five sub-objectives are set, which are implemented through the village zoning guidelines. At the same time, the key points and specific projects in the near-term, medium-term, and long-term work are clarified, and the spatial planning is transformed into an action plan with a time scale. The proposed framework and method provide a reference for the implementation and management of the conservation planning of historical and cultural villages in the future.
Keywords: Conservation planning, planning by stages, planning implementation, zoning guidelines.
1182 Optimizing Dialogue Strategy Learning Using Learning Automata
Authors: G. Kumaravelan, R. Sivakumar
Abstract:
Modeling the behavior of dialogue management in the design of a spoken dialogue system using statistical methodologies is currently a growing research area. This paper presents work on developing an adaptive learning approach to optimize dialogue strategy. At the core of our system is a method formalizing dialogue management as sequential decision making under uncertainty whose underlying probabilistic structure is a Markov chain. Researchers have mostly focused on model-free algorithms for automating the design of dialogue management using machine learning techniques such as reinforcement learning. But in model-free algorithms there exists a dilemma in balancing exploration versus exploitation. Hence we present a model-based online policy learning algorithm using interconnected learning automata for optimizing dialogue strategy. The proposed algorithm is capable of deriving an optimal policy that prescribes what action should be taken in various states of the conversation so as to maximize the expected total reward to attain the goal, and it incorporates good exploration and exploitation in its updates to improve the naturalness of human-computer interaction. We test the proposed approach using the most sophisticated evaluation framework, PARADISE, for access to a railway information system.
Keywords: Dialogue management, learning automata, reinforcement learning, spoken dialogue system.
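A single learning automaton of the kind the abstract interconnects can be sketched with the classic linear reward-inaction (L_R-I) update: the probability of the chosen action grows after a reward and is left unchanged otherwise. The environment below is a toy stationary one with made-up reward probabilities, not the dialogue simulation of the paper:

```python
import random

def l_ri(reward_probs, alpha=0.01, steps=20000, seed=0):
    """Linear reward-inaction automaton over len(reward_probs) actions."""
    rng = random.Random(seed)
    n = len(reward_probs)
    p = [1.0 / n] * n  # start with a uniform action-probability vector
    for _ in range(steps):
        # sample an action according to the current probabilities
        a = rng.choices(range(n), weights=p)[0]
        if rng.random() < reward_probs[a]:
            # reward: move probability mass toward the chosen action
            p = [pi * (1 - alpha) for pi in p]
            p[a] += alpha
        # penalty: no update (the 'inaction' half of the scheme)
    return p

# action 0 is rewarded far more often, so the automaton should favor it
probs = l_ri([0.9, 0.2])
```

In the paper's setting one such automaton sits at each dialogue state, the "action" is the next system move, and the reward comes from the dialogue outcome rather than a fixed probability.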
Downloads 1611
1181 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic
Authors: Diogen Babuc
Abstract:
The motivation for this work is the question of how many devote themselves to discovery in a world of science where much has been discerned and revealed, yet much remains unknown. The algorithm draws its insight from the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between a and b = a + 3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. According to the given key, the string is divided into groups of substrings, each of length k characters. The next step encodes each substring in the list. Encoding is based on the Caesar algorithm, i.e., shifting by k characters. However, k is incremented by 1 when moving to the next substring in the list; when the value of k becomes greater than b + 1, it returns to its initial value. The algorithm proceeds in this way until the last substring in the list is traversed. Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The character x is used as a filler when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it remains an open question whether it outperforms other methods in execution time and storage space.
Keywords: Ciphering and deciphering, authentic algorithm, polyalphabetic cipher, random key, methods comparison.
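The chunk-and-shift procedure described in the abstract can be sketched in Python. The function names, the restriction to a lowercase a-z alphabet, and the use of x as padding are illustrative assumptions, not the authors' implementation:

```python
import random

def latin_djokovic_encrypt(text, a=3):
    """Sketch of the Latin Djokovic cipher as described in the abstract:
    draw key k0 at random from [a, b] with b = a + 3, split the string into
    substrings of k0 characters (padded with 'x'), and Caesar-shift each
    substring by the current k, incrementing k per substring and resetting
    it to k0 once it exceeds b + 1. Assumes lowercase a-z input."""
    b = a + 3
    k0 = random.randint(a, b)            # constant key used for splitting
    if len(text) % k0:                   # pad so length is a multiple of k0
        text += "x" * (k0 - len(text) % k0)
    chunks = [text[i:i + k0] for i in range(0, len(text), k0)]
    k, out = k0, []
    for chunk in chunks:
        out.append("".join(chr((ord(c) - ord("a") + k) % 26 + ord("a"))
                           for c in chunk))
        k += 1
        if k > b + 1:                    # wrap the shift back to its start
            k = k0
    return k0, "".join(out)

def latin_djokovic_decrypt(cipher, k0, a=3):
    """Inverse transform: same chunking and key sequence, shifts reversed."""
    b = a + 3
    chunks = [cipher[i:i + k0] for i in range(0, len(cipher), k0)]
    k, out = k0, []
    for chunk in chunks:
        out.append("".join(chr((ord(c) - ord("a") - k) % 26 + ord("a"))
                           for c in chunk))
        k += 1
        if k > b + 1:
            k = k0
    return "".join(out)
```

A round trip recovers the plaintext (plus any x padding), which is the polyalphabetic property the abstract claims.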
Downloads 196
1180 Investigating the Effective Parameters in Determining the Type of Traffic Congestion Pricing Schemes in Urban Streets
Authors: Saeed Sayyad Hagh Shomar
Abstract:
Traffic congestion pricing – a travel demand management strategy for reducing traffic congestion, air pollution and noise pollution in urban areas – has attracted considerable attention. Despite its promising results, problems remain in determining the most suitable congestion pricing scheme for a given situation, and these problems can lead to further complications and even the failure of the scheme. Proper knowledge of the significance of congestion pricing schemes and of the factors that affect their selection can therefore lead to the success of this strategy. In this study, a variety of traffic congestion pricing schemes and their components are first introduced; then, their functional usage is discussed. Next, by analyzing and comparing their barriers, limitations and advantages, the selection criteria for pricing schemes are described. The results show that the selection of the best scheme depends on various parameters. Finally, based on an examination of the effective parameters, it is concluded that area-based schemes (cordon and zonal) have been more successful in avoiding traffic diversion. Considering the topology of cities and the fact that congestion often arises in city centers, area-based schemes are notably functional and appropriate.
Keywords: Congestion pricing, demand management, flat toll, variable toll.
Downloads 609
1179 Early Diagnosis of Alzheimer's Disease Using a Combination of Images Processing and Brain Signals
Authors: E. Irankhah, M. Zarif, E. Mazrooei Rad, K. Ghandehari
Abstract:
Alzheimer's prevalence is on the rise, and the disease comes with problems such as cessation of treatment, high treatment cost, and the lack of early detection methods. The pathology of this disease causes the formation of protein deposits in the brain of patients, called amyloid plaques. Generally, the disease is diagnosed through tests such as cerebrospinal fluid analysis, CT scans, MRI, mental status tests, and eye-tracking tests. In this paper, we use the Medial Temporal Atrophy (MTA) method and the Leave One Out (LOO) cycle to extract statistical properties of the Fz, Pz, and Cz channels of ERP signals for early diagnosis of this disease. In processing the CT scan images, the accuracy of the results is 81% for the healthy subject and 88% for the severe patient. After processing the ERP signals, the accuracy for a healthy subject is 81% in the delta band of the Cz channel and 90% in the alpha band of the Pz channel. For the severe patient, the signal processing results are 89% in the delta band of the Cz channel and 92% in the alpha band of the Pz channel.
Keywords: Alzheimer's disease, image and signal processing, medial temporal atrophy, LOO cycle.
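The LOO cycle can be sketched generically: each subject is held out once, a model is fit on the remaining subjects, and the held-out subject is classified. The nearest-centroid classifier and the toy feature vectors below are illustrative stand-ins, not the paper's actual features or classifier:

```python
def loo_accuracy(features, labels, classify):
    """Leave-one-out (LOO) cycle: hold each sample out once, fit on the
    rest, classify the held-out sample, and report the hit rate."""
    hits = 0
    for i in range(len(features)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        hits += classify(train_x, train_y, features[i]) == labels[i]
    return hits / len(features)

def nearest_centroid(train_x, train_y, x):
    # hypothetical stand-in classifier: assign to the class whose mean
    # feature vector is closest in squared Euclidean distance
    centroids = {}
    for label in set(train_y):
        pts = [p for p, l in zip(train_x, train_y) if l == label]
        centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
    return min(centroids,
               key=lambda l: sum((a - b) ** 2
                                 for a, b in zip(x, centroids[l])))

# toy ERP-like feature vectors (e.g. band power in Fz, Pz, Cz) per subject
healthy = [[0.9, 1.1, 1.0], [1.0, 0.9, 1.1], [1.1, 1.0, 0.9]]
patient = [[2.0, 2.2, 2.1], [2.1, 1.9, 2.0], [1.9, 2.0, 2.2]]
acc = loo_accuracy(healthy + patient, [0] * 3 + [1] * 3, nearest_centroid)
```

LOO is attractive for small clinical cohorts like this one because every subject contributes to both training and testing.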
Downloads 2050
1178 Optimization of Hemp Fiber Reinforced Concrete for Mix Design Method
Authors: Zoe Chang, Max Williams, Gautham Das
Abstract:
The purpose of this study is to evaluate the incorporation of hemp fibers (HF) in concrete. Hemp fiber reinforced concrete (HFRC) is becoming more popular as an alternative to regular mix designs. This study evaluates the compressive strength of HFRC with respect to the mixing procedure. HF were obtained from the manufacturer and hand processed to ensure uniformity in width and length. The fibers were added to the concrete as both a wet and a dry mix to investigate and optimize the mix design process. Results indicated that the dry mix had a compressive strength of 1157 psi compared to 985 psi for the wet mix. The dry-mix compressive strength was within range of the standard mix compressive strength of 1533 psi. The statistical analysis revealed that the mix design process needs further optimization and uniformity concerning the addition of HF. Regression analysis gave the standard mix design a coefficient of 0.9 compared with 0.375 for the dry mix, indicating variation in the mixing process. In the dry mix, adding plain HF caused the fibers to intertwine, creating lumps and inconsistency. During the wet mixing process, however, combining water and HF before incorporation allowed the fibers to disperse uniformly within the mix; hence the regression analysis indicated a better coefficient of 0.55. This study concludes that HFRC is a viable alternative to regular mixes, although more research on its characteristics needs to be conducted.
Keywords: Hemp fibers, hemp reinforced concrete, wet and dry mix, freeze-thaw testing, compressive strength.
Downloads 557
1177 Blind Image Deconvolution by Neural Recursive Function Approximation
Authors: Jiann-Ming Wu, Hsiao-Chang Chen, Chun-Chang Wu, Pei-Hsun Hsu
Abstract:
This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within a blurred image, extract the non-deterministic component of the original source image, and use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show the proposed method to be effective for faithful estimation of an unknown blurring matrix and restoration of an original source image.
Keywords: Blind image deconvolution, linear shift-invariant (LSI), linear image degradation model, radial basis functions (RBF), recursive function, annealed Hopfield neural networks.
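The assumed degradation model, in which the observed image is the linear convolution of the source with an LSI blurring kernel, can be sketched directly. This is a toy valid-mode implementation, not the paper's code; the kernel here is symmetric, so correlation and convolution coincide:

```python
def lsi_degrade(image, kernel):
    """Linear shift-invariant degradation model: the observed image is the
    (valid-mode) 2-D linear convolution of the source image with a blurring
    kernel, given as nested lists of floats."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            out[y][x] = sum(image[y + i][x + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

# a 3x3 box blur applied to a toy 4x4 ramp image
img = [[float(4 * r + c) for c in range(4)] for r in range(4)]
blur = [[1 / 9] * 3 for _ in range(3)]
degraded = lsi_degrade(img, blur)
```

Blind deconvolution is the inverse problem: given only `degraded`, estimate both `blur` and `img`, which is what the RBF and annealed Hopfield networks above are used for.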
Downloads 2061
1176 Modelling and Simulation of Cascaded H-Bridge Multilevel Single Source Inverter Using PSIM
Authors: Gaddafi S. Shehu, T. Yalcinoz, Abdullahi B. Kunya
Abstract:
Multilevel inverters such as flying capacitor, diode-clamped, and cascaded H-bridge inverters are very popular, particularly in medium- and high-power applications. This paper focuses on a cascaded H-bridge module using a single direct current (DC) source to generate an 11-level output voltage. The novel approach reduces the number of switches and gate drivers in comparison with the conventional method. The proposed topology produces more accurate results with an isolation transformer at high switching frequency. Different modulation techniques can be used for the multilevel inverter, but this work features the modulation technique known as selective harmonic elimination (SHE). This modulation approach reduces the number of carriers, reduces switching losses and total harmonic distortion (THD), and thereby increases power quality (PQ). The simulation results indicate that SHE can eliminate selected harmonics while retaining the fundamental output component. The performance of the proposed cascaded multilevel inverter is evaluated using the PSIM simulation package, and a THD of 0.94% is obtained.
Keywords: Cascaded H-bridge multilevel inverter, power quality, selective harmonic elimination.
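For a quarter-wave-symmetric staircase output, the n-th odd-harmonic amplitude is b_n = (4E/nπ) Σᵢ cos(nθᵢ), and SHE chooses the switching angles θᵢ so that b_n vanishes for the selected harmonics. A minimal sketch of this formula follows; the five angle values are illustrative placeholders, not a solved SHE set for the paper's inverter:

```python
import math

def harmonic_amplitude(angles, n, E=1.0):
    """Amplitude of the n-th odd harmonic of a quarter-wave-symmetric
    staircase with one step of height E at each switching angle (radians):
        b_n = (4E / (n * pi)) * sum(cos(n * theta_i))
    SHE solves for angles that make b_n = 0 for the selected harmonics."""
    return 4.0 * E / (n * math.pi) * sum(math.cos(n * t) for t in angles)

# five illustrative switching angles for an 11-level waveform (degrees);
# a real SHE design would solve for angles nulling e.g. the 5th and 7th
angles = [math.radians(d) for d in (10, 20, 30, 45, 60)]
fundamental = harmonic_amplitude(angles, 1)
fifth = harmonic_amplitude(angles, 5)
```

With five angles (one per H-bridge step of the 11-level waveform), up to four low-order harmonics can be eliminated while the fundamental is controlled.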
Downloads 5096
1175 Attention Based Fully Convolutional Neural Network for Simultaneous Detection and Segmentation of Optic Disc in Retinal Fundus Images
Authors: Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, Goutam Kumar Ghorai, Gautam Sarkar, Ashis K. Dhara
Abstract:
Accurate segmentation of the optic disc is very important for computer-aided diagnosis of several ocular diseases such as glaucoma, diabetic retinopathy, and hypertensive retinopathy. This paper presents an accurate and fast optic disc detection and segmentation method using an attention-based fully convolutional network. The network is trained from scratch on fundus images of the extended MESSIDOR database, and the trained model is used for segmentation of the optic disc. False positives are removed based on morphological operations and shape features. The result is evaluated using three-fold cross-validation on six public fundus image databases: DIARETDB0, DIARETDB1, DRIVE, AV-INSPIRE, CHASE DB1 and MESSIDOR. The attention-based fully convolutional network is robust and effective for detection and segmentation of the optic disc in images affected by diabetic retinopathy, and it outperforms existing techniques.
Keywords: Ocular diseases, retinal fundus image, optic disc detection and segmentation, fully convolutional network, overlap measure.
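The overlap measure named in the keywords is typically the Jaccard index (IoU) or Dice coefficient between the predicted and ground-truth binary masks. A minimal sketch on flattened masks, illustrative rather than the paper's evaluation code:

```python
def overlap_measures(pred, truth):
    """Overlap between a predicted and a ground-truth binary mask (flattened
    to 0/1 lists): returns the Jaccard index (IoU) and Dice coefficient,
    the usual evaluation measures for optic disc segmentation."""
    tp = sum(p and t for p, t in zip(pred, truth))          # both foreground
    fp = sum(p and not t for p, t in zip(pred, truth))      # predicted only
    fn = sum(not p and t for p, t in zip(pred, truth))      # missed truth
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return iou, dice

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
iou, dice = overlap_measures(pred, truth)
```

Dice is always at least as large as IoU for the same masks, so papers usually state which of the two they report.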
Downloads 780
1174 Use of Fuzzy Logic in the Corporate Reputation Assessment: Stock Market Investors' Perspective
Authors: Tomasz L. Nawrocki, Danuta Szwajca
Abstract:
The growing importance of reputation in building enterprise value and achieving long-term competitive advantage creates the need for its measurement and evaluation for management purposes (effective management of reputation and its risk). This paper presents a practical application of a self-developed corporate reputation assessment model from the viewpoint of stock market investors. The model is pioneering in character, and the example analysis performed for a selected industry serves as a specific test of the tool. The proposed solution considers three aspects: informational, financial and development, and social. It is also assumed that the individual sub-criteria are based on public sources of information and that fuzzy logic serves as the calculation apparatus capable of producing a synthetic final assessment. The main reason for developing this model was to fill the gap in synthetic measures of corporate reputation that provide a higher degree of objectivity by relying on "hard" (non-survey) and publicly available data. Results obtained with the proposed corporate reputation assessment method also enable various internal as well as inter-branch comparisons and analysis of the impact of corporate reputation.
Keywords: Corporate reputation, fuzzy logic, fuzzy model, stock market investors.
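As a rough illustration of such a fuzzy calculation apparatus, the sketch below fuzzifies three normalized sub-criteria with triangular memberships, fires three simple min/max rules, and defuzzifies by a weighted average. The membership shapes, rules, and function names are assumptions for illustration, not the authors' model:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def reputation_score(financial, informational, social):
    """Hypothetical fuzzy assessment sketch: each sub-criterion (normalized
    to [0, 1]) is fuzzified into low/medium/high, three rules are fired,
    and the result is defuzzified as a weighted average of the crisp rule
    outputs (0 = poor, 0.5 = average, 1 = good reputation)."""
    def fuzzify(x):
        return {"low": tri(x, -0.5, 0.0, 0.5),
                "medium": tri(x, 0.0, 0.5, 1.0),
                "high": tri(x, 0.5, 1.0, 1.5)}
    f, i, s = fuzzify(financial), fuzzify(informational), fuzzify(social)
    rules = [  # (rule firing strength, crisp output of that rule)
        (min(f["high"], i["high"], s["high"]), 1.0),
        (min(f["medium"], i["medium"], s["medium"]), 0.5),
        (max(f["low"], i["low"], s["low"]), 0.0),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5
```

The min/max operators here are the standard fuzzy AND/OR; a Mamdani-style system with richer rule bases and centroid defuzzification would follow the same pattern.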
Downloads 1371
1173 Model Canvas and Process for Educational Game Design in Outcome-Based Education
Authors: Ratima Damkham, Natasha Dejdumrong, Priyakorn Pusawiro
Abstract:
This paper explores a game design solution that helps designers create educational games using the digital educational game model canvas (DEGMC) and the digital educational game form (DEGF), based on an Outcome-based Education program. DEGMC and DEGF help designers develop an overview of a game while designing and planning it, and provide a way to clearly assess players' abilities from learning outcomes and to support game learning design. Using the strategies of the Business Model Canvas, designers can balance educational content and entertainment, and by referring to Constructive Alignment they can design the gameplay and the assessment of players' abilities from the learning outcomes they need. Furthermore, they can use the design plan from this research to write their Game Design Document (GDD). The success of the research was evaluated from the perspectives of four experts in the education and computer fields. In the experiments, the canvas and form helped game designers model their games according to the learning outcomes and analyze their own game elements. This method can serve as a path for future research on educational game design.
Keywords: Constructive alignment, constructivist theory, educational game, outcome-based education.
Downloads 851