Search results for: Semantic technologies Sensor networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3578

158 Adaptive Design of Large Prefabricated Concrete Panels Collective Housing

Authors: Daniel M. Muntean, Viorel Ungureanu

Abstract:

More than half of the urban population in Romania lives today in residential buildings made of large prefabricated reinforced concrete panels. Since their initial design dates from the 1960s, these housing units are now technically and functionally outdated, consuming large amounts of energy for heating, cooling, ventilation and lighting, while failing to meet the needs of the contemporary lifestyle. Due to their widespread use, the design of a system that improves their energy efficiency would have a real impact, not only on the energy consumption of the residential sector, but also on the quality of life it offers. Furthermore, with the transition of today’s existing power grid to a “smart grid”, buildings could become an active element of future electricity networks by contributing to micro-generation and energy storage. One of the most pressing issues today is to find locally adapted strategies that can be applied in line with the 20-20-20 EU policy criteria and to offer sustainable and innovative solutions for the cost-optimal energy performance of buildings, adapted to the existing local market. This paper presents a possible adaptive design scenario for the sustainable retrofitting of these housing units. The apartments are transformed in order to meet current living requirements, and additional extensions are placed on top of the building, replacing the unused roof space and acting not only as housing units but as active solar energy collection systems. An adaptive building envelope ensures overall air-tightness, and an elevator system is introduced to facilitate access to the upper levels.

Keywords: Adaptive building, energy efficiency, retrofitting, residential buildings, smart grid.

157 Hydrological Modelling of Geological Behaviours in Environmental Planning for Urban Areas

Authors: Sheetal Sharma

Abstract:

Runoff, falling water levels and declining recharge in urban areas have become a complex issue nowadays, with defective urban design and increasing population commonly cited as the causes. Very little has been discussed or analysed regarding water-sensitive urban master plans or local area plans. Land use planning deals with the transformation of land from natural areas into developed ones, which leads to changes in the natural environment. Detailed knowledge of the relationship between existing land use-land cover patterns and recharge, with respect to the prevailing soil below, lags behind the speed of development. The points of incompatibility between urban functions and the functions of the natural environment are multiplying. Changes in land patterns due to built-up areas, pavements, roads and similar land cover seriously affect surface water flow. They also change the permeability and absorption characteristics of the soil. Urban planners need to understand natural processes along with the modern means and best technologies available, as there is a huge gap between basic knowledge of natural processes and what is required for balanced development planning with minimum impact on water recharge. The present paper analyzes the variations in land use-land cover and their impacts on surface flows and sub-surface recharge in the study area. The methodology adopted was to analyse the changes in land use and land cover using GIS and Civil 3D AutoCAD. These variations were used in computer modelling with the Storm Water Management Model to determine the runoff for various soil groups and the resulting recharge, observing water levels in POW data for the last 40 years in the study area. The results were analysed again to find the best correlations for sustainable recharge in urban areas.
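
To make the dependence of runoff on land cover and soil group concrete, here is a minimal sketch using the SCS curve-number relation. This is an illustration only, not the SWMM workflow used in the study, and the rainfall depth and curve numbers below are assumed placeholder values.

```python
# Minimal SCS curve-number runoff sketch (illustrative only; not the SWMM
# model used in the study). Curve numbers and rainfall depth are assumptions.
def scs_runoff_inch(rainfall_in, curve_number):
    """Direct runoff depth (inches) from the SCS-CN relation."""
    s = 1000.0 / curve_number - 10.0          # potential maximum retention
    ia = 0.2 * s                              # initial abstraction
    if rainfall_in <= ia:
        return 0.0
    return (rainfall_in - ia) ** 2 / (rainfall_in + 0.8 * s)

# Hypothetical land covers on the same soil group, for a 3-inch storm:
for cover, cn in [("open/natural", 61), ("residential", 75), ("paved", 98)]:
    print(cover, round(scs_runoff_inch(3.0, cn), 2), "in of runoff")
```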

Keywords: Geology, runoff, urban planning, land use-land cover.

156 Bio-Surfactant Production and Its Application in Microbial EOR

Authors: A. Rajesh Kanna, G. Suresh Kumar, Sathyanaryana N. Gummadi

Abstract:

There are various sources of energy available worldwide and, among them, crude oil plays a vital role. Oil recovery is achieved using conventional primary and secondary recovery methods. In order to recover the remaining residual oil, technologies such as Enhanced Oil Recovery (EOR), also known as tertiary recovery, are utilized. Among EOR methods, Microbial Enhanced Oil Recovery (MEOR) is a technique that improves oil recovery through the injection of bio-surfactant produced by microorganisms. Bio-surfactant can retrieve otherwise unrecoverable oil from the cap rock, where it is held by high capillary forces. A bio-surfactant is a surface-active agent that reduces interfacial tension and the viscosity of oil, so that oil can be recovered to the surface as its mobility is increased. Research in this area has shown promising results; besides, the method is eco-friendly and cost-effective compared with other EOR techniques. In our research, we produced bio-surfactant at laboratory scale using the strain Pseudomonas putida (MTCC 2467) and injected it into a simple sand-packed column designed to resemble an actual petroleum reservoir. The experiment was conducted in order to determine the efficiency of the produced bio-surfactant in oil recovery. The column was made of plastic, 10 cm in length and 2.5 cm in diameter, and was packed with fine sand. The sand was saturated with brine initially, followed by oil saturation. Water flooding followed by bio-surfactant injection was carried out to determine the amount of oil recovered. Further, the injected bio-surfactant volume was varied to check how effectively oil recovery could be achieved. A comparative study was also done by injecting Triton X-100, a chemical surfactant. Since the bio-surfactant reduced surface and interfacial tension, oil could be easily recovered from the porous sand-packed column.
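
For orientation, a back-of-the-envelope recovery-factor bookkeeping of the kind implied by such a column experiment is sketched below; the porosity and saturation values are hypothetical placeholders, not the measurements of this study.

```python
import math

# Hypothetical sand-pack recovery bookkeeping (values are placeholders,
# not the experimental data of the study).
length_cm, diameter_cm = 10.0, 2.5
bulk_volume = math.pi * (diameter_cm / 2) ** 2 * length_cm   # cm^3
porosity = 0.35                                              # assumed
pore_volume = bulk_volume * porosity

oil_initially_in_place = 0.8 * pore_volume   # assumed initial oil saturation
oil_after_waterflood = 0.45 * pore_volume    # assumed residual after water flooding
oil_after_surfactant = 0.30 * pore_volume    # assumed residual after surfactant slug

waterflood_rf = (oil_initially_in_place - oil_after_waterflood) / oil_initially_in_place
additional_rf = (oil_after_waterflood - oil_after_surfactant) / oil_initially_in_place
print(f"water flooding recovery factor: {waterflood_rf:.0%}")
print(f"additional recovery from bio-surfactant: {additional_rf:.0%}")
```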

Keywords: Bio-surfactant, Bacteria, Interfacial tension, Sand column.

155 Using Artificial Neural Network and Leudeking-Piret Model in the Kinetic Modeling of Microbial Production of Poly-β-Hydroxybutyrate

Authors: A. Qaderi, A. Heydarinasab, M. Ardjmand

Abstract:

Poly-β-hydroxybutyrate (PHB) is one of the best-known biopolymers and has various applications in the production of biodegradable carriers. The most important strategy for enhancing the efficiency of the production process and reducing the price of PHB is the accurate expression of the kinetic model of product formation and of the parameters that affect it, such as Dry Cell Weight (DCW) and substrate consumption. Considering the high capability of artificial neural networks in modeling and simulating non-linear, multivariable systems such as those of the biological and chemical industries, a three-layer perceptron neural network model was used in this study for the kinetic modeling of microbial PHB production, a complex and non-linear biological process. The artificial neural network trains itself and finds the hidden laws behind the data by mapping the experimental data, with dry cell weight and substrate concentration as inputs and PHB concentration as output. For training the network, a series of experimental data for PHB production from Hydrogenophaga pseudoflava on a glucose carbon source was used. After training the network, two other experimental data sets that had not been used in training, containing dry cell concentration and substrate concentration, were applied as inputs to the network, and the PHB concentration was predicted by the network. Comparison of the data predicted by the network with the experimental data indicated high prediction precision for both fructose and whey carbon sources. Also, in the present study, for a better understanding of the ability of neural networks to model biological processes, the microbial production kinetics of PHB were modeled with the Leudeking-Piret empirical equation. The observed results indicated that the artificial neural network predicted the PHB concentration more accurately than the Leudeking-Piret model.
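
For reference, the Leudeking-Piret relation links the product formation rate to a growth-associated and a non-growth-associated term; the form below is the standard textbook expression, and the authors' exact parameterization may differ in detail.

```latex
\frac{dP}{dt} = \alpha\,\frac{dX}{dt} + \beta X
```

where P is the PHB concentration, X the biomass (dry cell weight), α the growth-associated coefficient and β the non-growth-associated coefficient.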

Keywords: Kinetic Modeling, Poly-β-Hydroxybutyrate (PHB), Hydrogenophaga Pseudoflava, Artificial Neural Network, Leudeking-Piret

154 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR), and the methods of DoA estimation therefore rely heavily on generalization over a large number of training data sets. Hence, we have comparatively implemented two different optimization models for DoA estimation: (1) a decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and the method may therefore fail to deliver high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation in order to overcome the limitations of non-parametric and data-driven methods with respect to array imperfection and generalization. The numerical results of implementing the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Finally, we have comparatively evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).
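
As a minimal illustration of the radial-basis-function mapping underlying a DNN-RBF model, the sketch below shows a generic Gaussian RBF layer feeding a linear output; the centres, width and weights are placeholder values, not the trained parameters of the authors' model.

```python
import numpy as np

# Generic Gaussian RBF layer: phi_j(x) = exp(-||x - c_j||^2 / (2*sigma^2)).
# Centres, width and output weights are illustrative placeholders only.
def rbf_layer(x, centres, sigma):
    d2 = np.sum((centres - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def predict_doa(snapshot_features, centres, sigma, out_weights):
    """Map one array-output feature vector to a DoA estimate (degrees)."""
    phi = rbf_layer(snapshot_features, centres, sigma)
    return float(out_weights @ phi)

rng = np.random.default_rng(0)
centres = rng.normal(size=(16, 8))       # 16 hidden units, 8-dim input
out_weights = rng.normal(size=16)
x = rng.normal(size=8)
print(predict_doa(x, centres, sigma=1.0, out_weights=out_weights))
```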

Keywords: DoA estimation, adaptive antenna array, Deep Neural Network, LS-SVM optimization model, radial basis function, MSE.

153 Decision-Making Strategies on Smart Dairy Farms: A Review

Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, G. Corkery, E. Broderick, J. Walsh

Abstract:

Farm management and operations will drastically change due to access to real-time data, real-time forecasting and tracking of physical items, in combination with Internet of Things (IoT) developments that further automate farm operations. Dairy farms have embraced technological innovations and procured vast amounts of permanent data streams during the past decade; however, the integration of this information to improve the whole-farm decision-making process does not yet exist. It is now imperative to develop a system that can collect, integrate, manage, and analyze on-farm and off-farm data in real time for practical and relevant environmental and economic actions. The developed systems, based on machine learning and artificial intelligence, need to be connected for useful output, a better understanding of the whole farming issue and of the environmental impact. Evolutionary Computing (EC) can be very effective in finding the optimal combination of sets of objects and, ultimately, in strategy determination. The system of the future should be able to manage the dairy farm as well as an experienced dairy farm manager with a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming, as well as improving and maintaining good animal welfare and the quality of dairy products. This review aims to provide an insight into the state of the art of big data applications and EC in relation to smart dairy farming and to identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management, and its uptake has become a continuing trend.

Keywords: Big data, evolutionary computing, cloud, precision technologies

152 Artificial Intelligence in the Optimization of Steel Moment Frame Structures: A Review

Authors: Mohsen Soori, Fooad Karimi Ghaleh Jough

Abstract:

The integration of Artificial Intelligence (AI) techniques in the optimization of steel moment frame structures represents a transformative approach to enhancing the design, analysis, and performance of these critical engineering systems. The review encompasses a wide spectrum of AI methods, including machine learning algorithms, evolutionary algorithms, neural networks, and optimization techniques, applied to address various challenges in the field. The synthesis of research findings highlights the interdisciplinary nature of AI applications in structural engineering, emphasizing the synergy between domain expertise and advanced computational methodologies. This synthesis aims to serve as a valuable resource for researchers, practitioners, and policymakers seeking a comprehensive understanding of the state of the art in AI-driven optimization for steel moment frame structures. The paper commences with an overview of the fundamental principles governing steel moment frame structures and identifies the key optimization objectives, such as structural efficiency. Subsequently, it delves into the application of AI in the conceptual design phase, where algorithms aid in generating innovative structural configurations and optimizing material utilization. The review also explores the use of AI for real-time structural health monitoring and predictive maintenance, contributing to the long-term sustainability and reliability of steel moment frame structures. Furthermore, the paper investigates how AI-driven algorithms facilitate the calibration of structural models, enabling accurate prediction of dynamic responses and seismic performance. Thus, by reviewing and analyzing the recent achievements in the application of artificial intelligence to the optimization of steel moment frame structures, the design, analysis, and performance of these structures can be assessed and improved.

Keywords: Artificial intelligence, optimization process, steel moment frame, structural engineering.

151 Deregulation of Turkish State Railways Based on Public-Private Partnership Approaches

Authors: S. Shakibaei, P. Alpkokin

Abstract:

The railway network is one of the major components of a country's transportation system and may be an indicator of the country's level of economic development. Since the 2000s, the revival of national railways and the development of High Speed Rail (HSR) lines have been among the most remarkable policies of the Turkish government in the railway sector. Within this trend, the railway age is set to be revived, and the coming decades will be a golden opportunity. Indubitably, major infrastructures such as road and railway networks require sizeable investment capital and precise maintenance and repair. Traditionally, governments are held responsible for funding, operating and maintaining these infrastructures. However, a lack or shortage of financial resources, risk responsibilities (particularly cost and time overruns), and in some cases inefficiency in the construction, operational and management phases persuade governments to find alternative options. The financial power, experience and background of the private sector are the factors convincing governments to collaborate with private parties to develop infrastructure. Public-Private Partnerships (PPP, 3P or P3) and the related regulatory issues were born out of these collaborations. In Turkey, PPP approaches have attracted attention particularly during the last decade, and these types of investment have been accelerated by the government to overcome budget limitations and cope with the inefficiency of the public sector in improving the transportation network and its operation. This study mainly aims to present a comprehensive overview of the PPP concept, to evaluate the regulatory procedure in Europe and to propose a general framework for Turkish State Railways (TCDD) as an outlook on privatization, liberalization and deregulation of the railway network.

Keywords: Deregulation, high-speed rail, liberalization, privatization, public-private partnership.

150 Lexical Based Method for Opinion Detection on Tripadvisor Collection

Authors: Faiza Belbachir, Thibault Schienhinski

Abstract:

The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinions, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommendation systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data. The difficulty of this task is to determine an approach which returns opinionated documents. Generally, two approaches are used for opinion detection, i.e., lexical-based approaches and machine-learning-based approaches. In lexical-based approaches, a dictionary of sentiment words is used, in which words are associated with weights. The opinion score of a document is derived from the occurrence of words from this dictionary. In machine learning approaches, a classifier is usually trained using a set of annotated documents containing sentiment, and features such as word n-grams, part-of-speech tags, and logical forms. The majority of these works determine the opinion score from the document text, but do not take into account whether these texts are really trustworthy. Thus, it is interesting to exploit other information to improve opinion detection. In our work, we develop a new way to consider the opinion score. We introduce the notion of trust score. We determine opinionated documents, but also whether these opinions are really trustworthy information in relation to the topics. For that, we use the SentiWordNet lexicon to calculate opinion and trust scores, and we compute different features about users (number of comments, number of useful comments, average useful reviews). After that, we combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TripAdvisor collection. Our experimental results show that the combination of the opinion score and the trust score improves opinion detection.
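
The sketch below illustrates the kind of score combination described: a SentiWordNet-style opinion score blended with a trust score built from user features. The normalizations and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative combination of a document opinion score with a user trust score.
# The weighting and the trust-score normalizations are assumptions.
def trust_score(n_comments, n_useful_comments, avg_useful_reviews,
                max_comments=100, max_avg_useful=50):
    useful_ratio = n_useful_comments / n_comments if n_comments else 0.0
    activity = min(n_comments / max_comments, 1.0)
    usefulness = min(avg_useful_reviews / max_avg_useful, 1.0)
    return (useful_ratio + activity + usefulness) / 3.0

def final_score(opinion_score, trust, weight=0.5):
    """Blend a document-level opinion score (0..1) with the author's trust."""
    return weight * opinion_score + (1.0 - weight) * trust

t = trust_score(n_comments=42, n_useful_comments=30, avg_useful_reviews=12)
print(final_score(opinion_score=0.73, trust=t))
```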

Keywords: Tripadvisor, Opinion detection, SentiWordNet, trust score.

149 Integrated Design in Additive Manufacturing Based on Design for Manufacturing

Authors: E. Asadollahi-Yazdi, J. Gardan, P. Lafon

Abstract:

Nowadays, manufacturers are faced with producing different versions of products due to quality, cost and time constraints. On the other hand, Additive Manufacturing (AM), as a production method based on a CAD model, disrupts the design and manufacturing cycle with new parameters. To address these issues, researchers have utilized the Design For Manufacturing (DFM) approach for AM, but until now there has been no integrated approach for the design and manufacturing of a product through AM. So, this paper aims to provide a general methodology for managing the different production issues, as well as supporting interoperability with the AM process and different Product Life Cycle Management tools. The problem is that the Systems Engineering models used for managing complex systems cannot support product evolution and its impact on the product life cycle. Therefore, it seems necessary to provide a general methodology for managing the product diversity that is created by using AM. This methodology must consider manufacture and assembly during product design as early as possible in the design stage. The latest approach to DFM, as a methodology to analyze the system comprehensively, integrates manufacturing constraints into the numerical model upstream. So, DFM for AM is used to import the characteristics of AM into the design and manufacturing process of a hybrid product in order to manage the criteria coming from AM. The research also presents an integrated design method that takes into account the knowledge of layer manufacturing technologies. For this purpose, an interface model based on the skin and skeleton concepts is provided; the usage and manufacturing skins are used to represent the functional surfaces of the product, while the material flow and the links between the skins are demonstrated by usage and manufacturing skeletons. Therefore, this integrated approach is a helpful methodology for designers and manufacturers in decisions such as material and process selection, as well as the evaluation of product manufacturability.

Keywords: Additive manufacturing, 3D printing, design for manufacturing, integrated design, interoperability.

148 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronics world demand novel, compact, simple-in-design, less costly and effective heat transfer devices. A Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluids and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature in terms of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to be in agreement with the experimental data, with a MARD within ±1.81%.
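
For reference, MARD is commonly defined as the mean of the absolute relative deviations between predicted and experimental values; the expression below is that common form, and the authors' implementation may differ in detail (here R_th denotes thermal resistance).

```latex
\mathrm{MARD} = \frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{R_{th,i}^{\,pred} - R_{th,i}^{\,exp}}{R_{th,i}^{\,exp}}\right|
```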

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant.

147 Predictions and Comparisons of Thermohydrodynamic State for Single and Three Pads Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer-Aided Engineering Simulations

Authors: Tai Yuan Yu, Pei-Jen Wang

Abstract:

Oil-free turbomachinery is considered one of the critical technologies for the rotor machinery of future green power generation systems. Oil-free technology allows clean, compact, and maintenance-free operation, and gas foil bearings (GFBs) are central to this technology. Since the first applications in auxiliary power units and air cycle machines in the 1970s, considerable improvement has been made to the computational models for dynamic rotor behavior. However, many technical issues are still poorly understood or remain unsolved, among them thermal management and the pattern of pressure distribution in the bearing clearance. This paper presents a three-dimensional (3D) fluid-structure interaction model of single-pad and three-pad gas foil bearings to predict bearing working behavior, so that researchers can compare the characteristics of the two. The coupled analysis model applies the dynamic working characteristics to both the gas film and the mechanical structures. Therefore, the elastic deformation of the foil structure and the hydrodynamic pressure of the gas film can both be calculated by a finite element method program. As a result, the temperature distribution pattern can also be iteratively solved through the coupled analysis. In conclusion, the working fluid state in the gas film and the working characteristics of the two pad configurations at constant rotational speed can be solved and compared with the experimental results.

Keywords: Fluid structure interaction multi-physics simulations, gas foil bearing, oil-free, transient thermohydrodynamic.

146 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland

Authors: Alireza Ansariyar, Safieh Laaly

Abstract:

Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electrical technologies in vehicle design, with the primary hope of improving mobility and road safety while reducing transportation's environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates the fuel consumption and air pollutants of Connected and Autonomous Vehicles (CAVs), including Carbon Monoxide (CO), Particulate Matter (PM), and Nitrogen Oxides (NOx), and utilizes meaningful linear regression models to predict CAVs' environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. The amounts of pollutants and fuel consumption were obtained from the macro simulation for the time interval 2010 to 2021. Eventually, four linear regression models were proposed to predict the amounts of CO, NOx and PM pollutants, and fuel consumption in the future. The results highlighted that CAVs' pollutants and fuel consumption have a significant correlation with the income, age, and race of the CAV customers. Furthermore, the reliability of the four statistical models was compared with the reliability of the macro simulation model outputs for the year 2030. The error for the three pollutants and fuel consumption obtained by the statistical models in SPSS was less than 9%. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in MD.
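
As a minimal sketch of the kind of linear regression involved here (an ordinary least-squares fit; the predictor names and the data are hypothetical placeholders, not the Maryland survey data):

```python
import numpy as np

# Ordinary least-squares sketch: pollutant amount as a linear function of
# hypothetical demographic predictors (placeholder data, not the study's).
rng = np.random.default_rng(1)
n = 200
income = rng.normal(70, 15, n)        # thousand USD
age = rng.normal(45, 12, n)           # years
race_share = rng.uniform(0, 1, n)     # share of a demographic group
co = 5.0 - 0.02 * income + 0.01 * age + 0.5 * race_share + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), income, age, race_share])
coef, *_ = np.linalg.lstsq(X, co, rcond=None)
print("intercept and coefficients:", np.round(coef, 3))
```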

Keywords: Connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models.

145 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov

Abstract:

The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on clusters containing multiple Central Processing Units (multi-CPU clusters) or multiple Graphics Processing Units (multi-GPU clusters). For example, most attempts to implement the classical conjugate gradient method at best maintained the same solution time as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration, instead of two for standard CG (Conjugate Gradient). The standard and pipelined CG methods need the vector entries generated by the current GPU and by other GPUs for the matrix-vector product, so the communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between the parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, together with the possibility of asynchronous calculations and communications and load balancing between the CPU and GPU, allows scalability in solving large linear systems. The algorithm is implemented with the combined use of MPI, OpenMP and CUDA technologies. We show that an almost optimal speedup on 8 CPUs/2 GPUs may be reached (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
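
A serial sketch of the pipelined CG recurrences (unpreconditioned, single-reduction form in the spirit of Ghysels and Vanroose) is given below; it shows why only one synchronization point per iteration is needed, but it is not the authors' MPI/OpenMP/CUDA implementation.

```python
import numpy as np

def pipelined_cg(A, b, x0=None, tol=1e-8, maxiter=1000):
    """Unpreconditioned pipelined CG: the two dot products per iteration can
    be fused into a single (non-blocking) reduction, and the extra mat-vec
    n = A @ w can be overlapped with that reduction on a parallel machine."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    w = A @ r
    z = s = p = None
    gamma_old = alpha = None
    for i in range(maxiter):
        gamma = r @ r                 # these two reductions form the single
        delta = w @ r                 # synchronization point per iteration
        n = A @ w                     # mat-vec that overlaps the reduction
        if i == 0:
            alpha = gamma / delta
            z, s, p = n.copy(), w.copy(), r.copy()
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
            z = n + beta * z
            s = w + beta * s
            p = r + beta * p
        x += alpha * p
        r -= alpha * s
        w -= alpha * z
        gamma_old = gamma
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pipelined_cg(A, b))   # approaches the exact solution [1/11, 7/11]
```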

Keywords: Conjugate Gradient, GPU, parallel programming, pipelined algorithm.

144 Combination of Different Classifiers for Cardiac Arrhythmia Recognition

Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari

Abstract:

This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS-complex geometrical feature extraction as well as a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images, and each of them is divided into eight polar sectors. Then, the curve length of each excerpted segment is calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) and three Multi-Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies, was designed and implemented. The newly proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis), and the discrimination power of the classifier in isolating the different beat types of each record was assessed; as a result, an average accuracy of Acc = 98.51% was obtained. Also, the proposed method was applied to six arrhythmia types (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), and an average accuracy of Acc = 95.6% was achieved. To evaluate the performance quality of the newly proposed hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
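
As a minimal illustration of the curve-length feature mentioned above (the common discrete curve-length definition; the paper's polar-sector partitioning of the QRS "virtual images" is more elaborate than this sketch):

```python
import numpy as np

# Discrete curve length of a signal segment: sum of Euclidean distances
# between consecutive samples. A common definition; the paper extracts it
# from segments excerpted from QRS "virtual images" and their DWT.
def curve_length(segment, dt=1.0):
    dy = np.diff(np.asarray(segment, dtype=float))
    return float(np.sum(np.sqrt(dt ** 2 + dy ** 2)))

t = np.linspace(0, 1, 100)
qrs_like = np.exp(-((t - 0.5) ** 2) / 0.002)   # a crude spike, not real ECG
print(curve_length(qrs_like, dt=t[1] - t[0]))
```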

Keywords: Feature Extraction, Curve Length Method, Support Vector Machine, Learning Vector Quantization, Multi-Layer Perceptron, Fusion (Hybrid) Classification, Arrhythmia Classification, Supervised Learning Machine.

143 Authentic Learning for Computer Network with Mobile Device-Based Hands-On Labware

Authors: Kai Qian, Ming Yang, Minzhe Guo, Prabir Bhattacharya, Lixin Tao

Abstract:

Computer network courses are an essential part of the college computer science curriculum, and hands-on networking experience is well recognized as an effective approach to help students better understand network concepts, the layered architecture of network protocols, and the dynamics of networks. However, existing networking labs are usually server-based and relatively cumbersome, requiring a certain level of specialty and resources to set up and maintain the lab environment. Many universities and colleges lack the resources and infrastructure in this field and have difficulty providing students with hands-on practice labs. A new, affordable and easily adoptable approach to networking labs is desirable to enhance network teaching and learning. In addition, current network labs fall short in providing hands-on practice for modern wireless and mobile network learning. With the prevalence of smart mobile devices, wireless and mobile networks are permeating various aspects of our information society. Emerging and modern mobile technology provides computer science students with more authentic learning opportunities, especially in network learning. A mobile device-based hands-on labware can provide an excellent ‘real world’ authentic learning environment for computer networking, especially for wireless network study. In this paper, we present our mobile device-based hands-on labware (a series of lab modules) for computer network learning, which is guided by authentic learning principles to immerse students in a real-world, relevant learning environment. We have been using this labware in teaching computer network, mobile security, and wireless network classes. The student feedback shows that students can learn more when they have a hands-on authentic learning experience.

Keywords: Mobile computing, android, network, labware.

142 A Settlement Strategy for Health Facilities in Emerging Countries: A Case Study in Brazil

Authors: Domenico Chizzoniti, Monica Moscatelli, Letizia Cattani, Piero Favino, Luca Preis

Abstract:

A settlement strategy aims to anticipate and respond to the needs of existing and future communities through the provision of primary health care facilities in marginalized areas. Access to a health care network, often lacking in developing countries, is important for improving healthcare coverage. The study shows that a good health care system strategy for rural contexts brings advantages to an existing settlement: improved transport, communication, water and social facilities. The objective of this paper is to define a possible methodology to implement primary health care facilities in disadvantaged areas of emerging countries. In this research, we analyze the case study of Lauro de Freitas, a municipality in the Brazilian state of Bahia, part of the Metropolitan Region of Salvador, with an area of 57.662 km² and 194,641 inhabitants. The health localization system in Lauro de Freitas is an integrated process that involves not only geographical aspects, but also a set of factors: population density, epidemiological data, allocation of services, road networks, and more. Data were also collected using semi-structured interviews and questionnaires administered to the local population. The synthesized data suggest that, moving away from the coast where the concentration of population and services is greatest, a network of primary health care facilities is able to improve the living conditions of small, dispersed communities. Based on the health service needs of the populations, we have developed a methodological approach that is particularly useful in rural and remote contexts in emerging countries.

Keywords: Primary health care, developing countries, policy health planning, settlement strategy.

141 The Potential Use of Nanofilters to Supply Potable Water in Persian Gulf and Oman Sea Watershed Basin

Authors: Sara Zamani, Mojtaba Fazeli, Abdollah Rashidi Mehrabadi

Abstract:

In a world worried about water resources, with the shadow of drought and famine looming all around, the quality of water is as important as its quantity. The source of all these concerns is the constant reduction in the per capita quality water available for different uses. With an average annual precipitation of 250 mm, compared to the world average of 800 mm, Iran is considered a water-scarce country, and the disparity in rainfall distribution, the limitations of renewable resources and the concentration of population at the margins of deserts and water-scarce areas have intensified the problem. The shortage of per capita renewable freshwater and its poor quality in large areas of the country, which have saline, brackish or hard water resources, together with the profusion of natural and artificial pollutants, have caused the deterioration of water quality. Among the methods for treating and using these waters, one can refer to the application of membrane technologies, which have come into focus in recent years due to their great advantages. This process is quite efficient in eliminating multivalent ions, and due to the possibility of production at different capacities, its application as a treatment process at points of use, and its lower energy requirement in comparison to Reverse Osmosis processes, it can revolutionize the water and wastewater sector in the years to come. The article studies the different capacities of water resources in the Persian Gulf and Oman Sea watershed basins, and assesses the possibility of using the nanofiltration process to treat brackish and non-conventional waters in these basins.

Keywords: Membrane processes, saline waters, brackish waters, hard waters, zoning water quality in the Persian Gulf and the Oman Sea Watershed area, nanofiltration.

140 Metal Inert Gas Welding-Based-Shaped Metal Deposition in Additive Layered Manufacturing: A Review

Authors: Adnan A. Ugla, Hassan J. Khaudair, Ahmed R. J. Almusawi

Abstract:

Shaped Metal Deposition (SMD), an additive layered manufacturing technique, is a promising alternative to traditional manufacturing for producing large, expensive metal components with complex geometry, as well as free-form structures built up layer by layer. The present paper is a comprehensive review of the literature and of the latest rapid manufacturing technologies related to the SMD technique. The aim of this paper is to comprehensively review the most prominent issues that researchers have dealt with in SMD techniques, especially those associated with cold wire feed. The study reviews the literature on metal deposition processes and their classifications, including the SMD process using Wire + Arc Additive Manufacturing (WAAM), which divides into wire + Tungsten Inert Gas (TIG), Metal Inert Gas (MIG), or plasma. The review covers extensive details on bead geometry, process parameters and the heat input or arc energy resulting from the deposition process for both MIG and Tandem-MIG in the SMD process. Furthermore, SMD may be performed using Single-Wire MIG (SW-MIG) welding or Double-Wire MIG (DW-MIG) welding. The present review shows that metal deposition using the DW-MIG process can be considered a distinctive and low-cost method of producing large metal components, due to its high deposition rates as well as its reduced heat input during deposition and reduced distortion. However, the accuracy and surface finish of MIG-SMD are lower than those of electron-beam and laser-beam processes.

Keywords: Shaped metal deposition, additive manufacturing, double-wire feed, cold feed wire.

139 Persuasive Communication on Social Egg Freezing in California from a Framing Theory Perspective

Authors: Leila Mohammadi

Abstract:

This paper presents the impact of persuasive communication implemented on fertility clinics' websites, and how this information influences women in their decision making about undertaking this procedure. The factors influencing women's decisions to undertake social egg freezing (SEF) are analyzed from a framing theory perspective, with a specific focus on the impact of persuasive information on women's decision making. This study follows a quantitative approach. A two-phase survey was conducted to examine the level of interest in undertaking SEF. In the first phase, a questionnaire was made available to women for one month (May 2015), asking whether or not they had enough information about this process; a total of 230 answers were collected. The second phase took place in the last two weeks of July 2015. All respondents were invited to a seminar called ‘All about egg freezing’ and were afterwards requested to answer the second questionnaire. After the seminar, in which they were given an extensive amount of information about egg freezing, a total of 115 women replied to the questionnaire. The data collected during this process were analyzed using descriptive statistics. Most of the respondents changed their opinion in the second questionnaire, which followed the information they received. Although in the first questionnaire their self-evaluation of their knowledge about this process and the technologies involved was very high, they realized that they still needed to access more information from different sources in order to be able to make a decision. The study reached the conclusion that persuasive and framed information from clinics affects the decisions of these women. Whatever the reasons and motivations women have for egg freezing, providing them with the necessary information and unbiased data about this process (such as its positive and negative aspects, requirements, assumptions, possibilities and consequences) would help them to make a more precise and reasonable decision about what they are buying.

Keywords: Decision making, fertility clinics, framing theory, persuasive information, social egg freezing.

138 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, as well as fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Most of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain a better result in detecting false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is the feature selection method. The aim of this technique is to choose a feature subset from the original set in order to improve the classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several specific benchmark datasets, and the outcome showed a better classification of false information for our work. The detection performance was improved in two aspects. On the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of the datasets' dimensions.
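
A minimal sketch of the four-step pipeline described above is given below, assuming that features are grouped by clustering their profiles across samples, that one representative feature is kept per cluster, and that an SVM is trained on the reduced set; the cluster count and the representative-selection rule are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def select_features_and_classify(X_train, y_train, X_test, n_clusters=20):
    # Steps 1-2: group similar features by clustering the feature columns.
    feature_profiles = X_train.T                 # one row per feature
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(feature_profiles)

    # Step 3: keep the feature closest to each cluster centroid.
    selected = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue
        dists = np.linalg.norm(feature_profiles[idx] - km.cluster_centers_[c], axis=1)
        selected.append(idx[np.argmin(dists)])
    selected = np.array(sorted(selected))

    # Step 4: classify fake vs. real news on the reduced feature subset.
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X_train[:, selected], y_train)
    return clf.predict(X_test[:, selected]), selected
```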

Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.

137 Identification of Factors Influencing Company's Competitiveness

Authors: D. Ščeulovs, E. Gaile-Sarkane

Abstract:

Fast development of technologies, economic globalization and many other external circumstances stimulate company competitiveness. One of the major trends in today's business is the shift to the exploitation of the Internet and the electronic environment for entrepreneurial needs. The latest research confirms that the e-environment provides a range of possibilities and opportunities for companies, especially for micro-, small- and medium-sized companies, which have limited resources. The usage of e-tools raises the effectiveness and the profitability of an organization, as well as its competitiveness. In the electronic market, as in the classic one, there are factors, such as globalization, the development of new technology, price-sensitive consumers, the Internet, and new distribution and communication channels, that influence entrepreneurship. As a result of e-environment development, e-commerce and e-marketing grow as well.

Objective of the paper: To describe and identify factors influencing company’s competitiveness in e-environment.

Research methodology: The authors employ well-established quantitative and qualitative methods of research: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc. The theoretical and methodological background of the research is formed using scientific research and publications, such as those from the mass media and professional literature, statistical information from legal institutions, as well as information collected by the authors during the surveying process. Research result: The authors detected and classified the factors influencing competitiveness in the e-environment.

In this paper, the authors presented their findings based on theoretical, scientific, and field research. Authors have conducted a research on e-environment utilization among Latvian enterprises. 

Keywords: Competitiveness, e-environment, factors, factor analysis.

136 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We realize that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach to compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on the unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with their parameter counts and mean average error rates, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it has proven to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
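
A minimal sketch of the kind of loss-function/optimizer sweep described above is shown below, using a toy Keras CNN rather than AlexNet/VGG/ResNet; Center Loss is omitted because it requires a custom implementation, the data are random placeholders rather than fingerprint images, and the loss/label conventions are purely illustrative.

```python
import numpy as np
import tensorflow as tf

# Toy sweep over loss functions and optimizers (illustrative only; not the
# paper's AlexNet/VGG/ResNet models or LivDet data).
losses = ["categorical_crossentropy", "hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

def build_toy_cnn(input_shape=(128, 128, 1), n_classes=2):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Placeholder live/spoof data (random noise standing in for fingerprint images).
x_train = np.random.rand(64, 128, 128, 1).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, 64), 2)
x_val, y_val = x_train[:16], y_train[:16]

results = {}
for loss in losses:
    for opt in optimizers:
        model = build_toy_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        history = model.fit(x_train, y_train, epochs=2, batch_size=16,
                            validation_data=(x_val, y_val), verbose=0)
        results[(loss, opt)] = max(history.history["val_accuracy"])

best = max(results, key=results.get)
print("best (loss, optimizer):", best, "val accuracy:", results[best])
```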

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

135 Quantifying the UK’s Future Thermal Electricity Generation Water Use: Regional Analysis

Authors: Daniel Murrant, Andrew Quinn, Lee Chapman

Abstract:

A growing population has led to increasing global water and energy demand. This demand, combined with the effects of climate change and an increasing need to maintain and protect the natural environment, represents a potentially severe threat to many national infrastructure systems. This has resulted in a considerable quantity of published material on the interdependencies that exist between the supply of water and the thermal generation of electricity, often known as the water-energy nexus. Focusing specifically on the UK, there is a growing concern that the future availability of water may at times constrain thermal electricity generation, and therefore hinder the UK in meeting its increasing demand for a secure and affordable supply of low-carbon electricity. To provide further information on the threat the water-energy nexus may pose to the UK's energy system, this paper models the regional water demand of UK thermal electricity generation in 2030 and 2050. It uses the strategically important Energy Systems Modelling Environment model developed by the Energy Technologies Institute. Unlike previous research, this paper was able to use abstraction and consumption factors specific to UK power stations. It finds that by 2050 the South East, Yorkshire and the Humber, the West Midlands and North West regions are those with the greatest freshwater demand and are therefore most likely to suffer from a lack of resource. However, it finds that by 2050 it is the East, South West and East Midlands regions that have the greatest total water (fresh, estuarine and seawater) demand and are the most likely to be constrained by environmental standards.

Keywords: Water-energy nexus, water resources, abstraction, climate change, power station cooling.

134 Enhancing Warehousing Operations in Cold Supply Chain through the Use of IoT and LiFi Technologies

Authors: S. El-Gamal, P. Hossam, A. Abd El Aziz, R. Mahmoud, A. Hassan, D. Hilal, E. Ayman, H. Haytham, O. Khamis

Abstract:

Several concerns fall upon the supply chain, especially cold supply chains. These concerns arise mainly in the distribution and storage phases. This research focuses on the storage area, which contains several activities, such as the picking activity, that face many obstacles and challenges. The implementation of IoT solutions enables businesses to monitor the temperature of food items, which is perhaps the most critical parameter in cold chains. Therefore, the research at hand proposes a practical solution that would help to eliminate the problems related to ineffective picking of products, especially fish and seafood products, by using IoT technology, most notably LiFi technology; thus guaranteeing efficient picking, reducing waste, and consequently lowering costs. A prototype was specially designed and examined. This research is a single case study. Two methods of data collection were used: observation and semi-structured interviews. Semi-structured interviews were conducted with managers and a decision maker at one of the biggest retail stores, Carrefour in Alexandria, Egypt, to validate the problem and the proposed practical solution using IoT and LiFi technology. A total of three interviews were conducted. As a result, a SWOT analysis was carried out in order to highlight all the strengths and weaknesses of using the recommended LiFi solution in the picking process. According to the investigations, it was found that the use of IoT and LiFi technology is cost-effective and efficient, reduces human errors, and minimizes the percentage of product waste, thus saving money and cost. Therefore, increased customer satisfaction and profits could be achieved.

Keywords: Cold supply chain, IoT, LiFi, warehousing operation, picking process.

133 An Overview of Technology Availability to Support Remote Decentralized Clinical Trials

Authors: S. Huber, B. Schnalzer, B. Alcalde, S. Hanke, L. Mpaltadoros, T. G. Stavropoulos, S. Nikolopoulos, I. Kompatsiaris, L. Pérez-Breva, V. Rodrigo-Casares, J. Fons-Martínez, J. de Bruin

Abstract:

Developing new medicines and health solutions and improving patient health currently rely on the successful execution of clinical trials, which generate relevant safety and efficacy data. For their success, recruitment and retention of participants are among the most challenging aspects of protocol adherence. The main barriers include: i) lack of awareness of clinical trials; ii) long distance from the clinical site; iii) the burden on participants, including the duration and number of clinical visits; and iv) high dropout rates. Most of these aspects could be addressed with a new paradigm, namely Remote Decentralized Clinical Trials (RDCTs). Furthermore, the COVID-19 pandemic has highlighted additional advantages and challenges for RDCTs in practice, such as allowing participants to join trials from home without depending on site visits. Nevertheless, RDCTs should follow the processes and the quality assurance of conventional clinical trials, which involve several processes. For each part of the trial, the building blocks, existing software and technologies were assessed through a systematic search. The technology needed to perform RDCTs is widely available and validated, but is still segmented and developed in silos, as different software solutions address different parts of the trial and at various levels. The current paper analyzes the availability of technology to perform RDCTs, identifies gaps and provides an overview of the basic building blocks and functionalities that need to be covered to support the described processes.

Keywords: Architectures and frameworks for health informatics systems, clinical trials, information and communications technology, remote decentralized clinical trials, technology availability.

132 Corrosion Analysis and Interfacial Characterization of Al – Steel Metal Inert Gas Weld - Braze Dissimilar Joints by Micro Area X-Ray Diffraction Technique

Authors: S. S. Sravanthi, Swati Ghosh Acharyya

Abstract:

Automotive lightweighting is of major prominence in the current times due to its contribution to improved fuel economy and reduced environmental pollution. Various arc welding technologies are being employed in the production of automobile components with reduced weight. The present study is of practical importance since it involves the preferential substitution of zinc-coated mild steel with a lightweight alloy such as 6061 aluminium by means of the Gas Metal Arc Welding (GMAW)-brazing technique at different processing parameters. However, the fabricated joints showed the generation of an Al-Fe layer at the interfacial regions, which was confirmed by Scanning Electron Microscopy and Energy Dispersive Spectroscopy. These Al-Fe compounds not only affect the mechanical strength but also predominantly deteriorate the corrosion resistance of the joints. Hence, it is essential to understand the phases formed in this layer and their crystal structure. The micro-area X-ray diffraction technique has been used exclusively for this study. Moreover, the crevice corrosion analysis at the joint interfaces was carried out by exposing the joints to a 5 wt.% FeCl3 solution at regular time intervals as per ASTM G48-03. The joints showed decreased crevice corrosion resistance with increased heat intensity. The inner surfaces of the welds showed severe oxide cracking and remarkable weight loss when exposed to concentrated FeCl3. The weight loss was enhanced with decreased filler wire feed rate and increased heat intensity.

Keywords: Automobiles, welding, corrosion, lap joints, Micro XRD.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 638
131 Exergy Based Performance Analysis of a Gas Turbine Unit at Various Ambient Conditions

Authors: Idris A. Elfeituri

Abstract:

This paper studies the effect of ambient conditions on the performance of a 285 MW gas turbine unit using the exergy concept. Based on the exergy balance models developed, a computer program was constructed to investigate the performance of the power plant under varying ambient temperature and relative humidity conditions. The ambient temperature is varied from zero to 50 ºC and the relative humidity from zero to 100%, while the unit load is kept constant at 100% of the design load. The exergy destruction ratio and exergy efficiency are determined for each component and for the entire plant. The results show a moderate increase in the total exergy destruction ratio of the plant from 62.05% to 65.20%, while the overall exergy efficiency decreases from 38.2% to 34.8%, as the ambient temperature increases from zero to 50 ºC at all relative humidity values. Furthermore, an increase of 1 ºC in ambient temperature leads to a 0.063% increase in the total exergy destruction ratio and a 0.07% decrease in the overall exergy efficiency. The relative humidity has a remarkable influence at higher ambient temperatures on the exergy destruction ratio of the combustion chamber and on the exergy loss ratio of the exhaust gas, but almost no effect on the total exergy destruction ratio and the overall exergy efficiency. At 50 ºC ambient temperature, the exergy destruction ratio of the combustion chamber increases from 30% to 52%, while the exergy loss ratio of the exhaust gas decreases from 28% to 8%, as the relative humidity increases from zero to 100%. In addition, the exergy analysis reveals that the combustion chamber and the exhaust gas are the main sources of irreversibility in the gas turbine unit. It is also identified that the exergy efficiency and exergy destruction ratio depend considerably on variations in the ambient air temperature and relative humidity. Therefore, retrofitting the existing gas turbine plant with inlet air cooling and humidifier technologies should be seriously considered.
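
A short numerical sketch of the reported plant-level sensitivities is given below. It uses only the end-point values quoted in the abstract (0 and 50 ºC at constant 100% load) and a linear interpolation between them; this interpolation is an approximation for illustration, not the authors' full exergy-balance model.

```python
# Sketch: linear sensitivity of the plant-level exergy figures to ambient
# temperature, using the end-point values quoted in the abstract
# (0 and 50 degC, constant 100% load). The linear fit is an approximation,
# not the authors' exergy-balance program.

DESTRUCTION_RATIO_0C, DESTRUCTION_RATIO_50C = 62.05, 65.20   # percent
EXERGY_EFF_0C, EXERGY_EFF_50C = 38.2, 34.8                   # percent

def destruction_ratio(t_amb_c):
    """Total exergy destruction ratio [%] at ambient temperature t_amb_c [degC]."""
    return DESTRUCTION_RATIO_0C + (DESTRUCTION_RATIO_50C - DESTRUCTION_RATIO_0C) * t_amb_c / 50.0

def exergy_efficiency(t_amb_c):
    """Overall exergy efficiency [%] at ambient temperature t_amb_c [degC]."""
    return EXERGY_EFF_0C + (EXERGY_EFF_50C - EXERGY_EFF_0C) * t_amb_c / 50.0

# Per-degree sensitivities, matching the ~+0.063 %/degC and ~-0.07 %/degC quoted in the text.
print(f"d(destruction ratio)/dT = {(DESTRUCTION_RATIO_50C - DESTRUCTION_RATIO_0C) / 50.0:+.3f} %/degC")
print(f"d(exergy efficiency)/dT = {(EXERGY_EFF_50C - EXERGY_EFF_0C) / 50.0:+.3f} %/degC")
print(f"At 30 degC: destruction ratio = {destruction_ratio(30):.2f} %, efficiency = {exergy_efficiency(30):.2f} %")
```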

Keywords: Destruction, exergy, gas turbine, irreversibility, performance.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 898
130 Improving Health Care and Patient Safety at the ICU by Using Innovative Medical Devices and ICT Tools: Examples from Bangladesh

Authors: Mannan Mridha, Mohammad S. Islam

Abstract:

Innovative medical technologies offer more effective medical care, with less risk to patients and healthcare personnel. Medical technologies and devices, when properly used, provide better data, precise monitoring, and less invasive treatments, and can be more targeted and often less costly. The Intensive Care Unit (ICU), equipped with patient monitoring, respiratory and cardiac support, pain management, emergency resuscitation, and life support devices, is particularly prone to medical errors for various reasons. Many people in developing countries now wonder whether their visit to a hospital might harm rather than help them. This is because clinicians in developing countries are required to manage an increasing workload with limited resources and in the absence of a well-functioning safety system. A team of experts in medicine, biomedical engineering, and clinical engineering from Sweden and Bangladesh worked together to study incidents and adverse events at ICUs in Bangladesh. The study included both public and private hospitals, in order to better understand the physical structure, organization, and practice of the operating processes of care, as well as the occurrence of adverse outcomes and of errors, risks, and accidents related to medical devices at the ICU, and to develop an ICT-based support system to reduce hazards and errors and thus improve the quality of performance, care, and cost effectiveness at the ICU. Concrete recommendations and guidelines have been made for preparing appropriate ICT-related tools and methods for improving the routine use of medical devices and the reporting and analysis of incidents at the ICU, in order to reduce the number of undetected and unresolved incidents and thus improve patient safety.
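
As a minimal, hypothetical sketch of what an incident record in such an ICT-based ICU reporting system could look like, the structure below captures device, severity, and follow-up status so that unresolved incidents can be surfaced for analysis; the field names, categories, and example entry are assumptions, not the system developed in the study.

```python
# Minimal, hypothetical sketch of an incident record for an ICT-based ICU
# reporting system. Field names and categories are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class DeviceIncident:
    reported_at: datetime
    hospital_type: str          # e.g. "public" or "private"
    device: str                 # e.g. "ventilator", "infusion pump"
    description: str
    severity: str               # e.g. "near miss", "adverse event"
    contributing_factors: List[str] = field(default_factory=list)
    resolved: bool = False

def unresolved(incidents: List[DeviceIncident]) -> List[DeviceIncident]:
    """Incidents still open, i.e. candidates for follow-up analysis."""
    return [i for i in incidents if not i.resolved]

if __name__ == "__main__":
    log = [
        DeviceIncident(datetime(2020, 5, 3, 14, 20), "public", "infusion pump",
                       "Occlusion alarm ignored during shift change", "near miss",
                       ["staff workload", "alarm fatigue"]),
    ]
    print(f"{len(unresolved(log))} unresolved incident(s)")
```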

Keywords: Accident reporting system, patient care and safety, safe medical devices.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 806
129 Mnemotopic Perspectives: Communication Design as Stabilizer for the Memory of Places

Authors: C. Galasso

Abstract:

The ancestral relationship between humans and the geographical environment has long been at the center of an interdisciplinary dialogue, one of whose main research nodes is the relationship between memory and places. Given its deep complexity, this symbiotic connection still lacks a settled definition, one that appears increasingly negotiated among different disciplines. Numerous fields of knowledge are involved, from anthropology to the semiotics of space, from photography to architecture, up to subjects traditionally far from these discussions. This is the case of the Design of Communication, a young discipline, now confident in itself and its objectives, aimed at finding and investigating original forms of visualization and representation, between sedimented knowledge and new technologies. In particular, Design of Communication for the Territory offers an alternative perspective on the debate, encouraging the reactivation and reconstruction of the memory of places. Recognizing mnemotopes as cultural objects for a vertical interpretation of the memory-place relationship, design can become a real mediator of the territorial fixation of memories, making them increasingly accessible and perceptible and contributing to building a topography of memory. According to a mnemotopic vision, Communication Design can support the passage from a memory in which the observer participates only as an individual to a collective form of memory. A mnemotopic form of Communication Design can, through geolocation and content map-based systems, turn chronology into a topography rooted in the territory and open to practice; it can also help in understanding how the perception of the memory of places changes over time and how to insert these memories into the contemporary world. Mnemotopes can be materialized in different formats of translation, editing, and narration and then involved in complex systems of communication. The memory of places, therefore, if stabilized by the tools offered by Communication Design, can make ruins and territorial stratifications visible, illuminating them with new communicative interests that can be shared and participated in.
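
A hypothetical sketch of how a geolocated, map-based system could anchor mnemotopes to places and order them chronologically is given below; all record names, coordinates, and media types are invented for illustration and do not reproduce any system described in the paper.

```python
# Hypothetical sketch of a map-based mnemotope archive: memory contents anchored
# to geographic coordinates and ordered chronologically, so that a chronology
# becomes a topography that can be browsed on a map. All data are invented.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Mnemotope:
    title: str
    year: int                  # moment in the place's memory the content refers to
    latitude: float
    longitude: float
    media: str                 # e.g. "photograph", "oral history", "archival map"

def topography_of_memory(items: List[Mnemotope]) -> List[Tuple[int, str, Tuple[float, float]]]:
    """Chronologically ordered list of (year, title, coordinates) for map display."""
    return [(m.year, m.title, (m.latitude, m.longitude))
            for m in sorted(items, key=lambda m: m.year)]

if __name__ == "__main__":
    archive = [
        Mnemotope("Demolished textile mill", 1921, 45.4642, 9.1900, "photograph"),
        Mnemotope("Post-war reconstruction site", 1950, 45.4655, 9.1865, "oral history"),
    ]
    for year, title, coords in topography_of_memory(archive):
        print(year, title, coords)
```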

Keywords: Memory of places, design of communication, territory, mnemotope, topography of memory.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 809