Search results for: wireless mesh network (WMN)
1647 To Know the Way to the Unknown: A Semi-Experimental Study on the Implication of Skills and Knowledge for Creative Processes in Higher Education
Authors: Mikkel Snorre Wilms Boysen
Abstract:
From a theoretical perspective, expertise is generally considered a precondition for creativity. The assumption is that an individual needs to master the common and accepted rules and techniques within a certain knowledge domain in order to create something new and valuable. However, real-life cases, and a limited number of empirical studies, demonstrate that this assumption may be overly simple. In this article, this question is explored through a number of semi-experimental case studies conducted within the fields of music, technology, and youth culture. The studies indicate that, in various ways, expertise plays an important part in creative processes. However, the case studies also indicate that expertise sometimes leads to an entrenched perspective, in the sense that knowledge and experience may work as a path into the well-known rather than into the unknown. In this article, these issues are explored with reference to different theoretical approaches to creativity and learning, including actor-network theory, the theory of blind variation and selective retention, and Csikszentmihalyi’s system model. Finally, some educational aspects and implications of this are discussed.
Keywords: creativity, expertise, education, technology
Procedia PDF Downloads 323
1646 Impact Evaluation of Discriminant Analysis on Epidemic Protocol in Warship Scenarios
Authors: Davi Marinho de Araujo Falcão, Ronaldo Moreira Salles, Paulo Henrique Maranhão
Abstract:
Disruption Tolerant Networks (DTN) are an evolution of Mobile Ad hoc Networks (MANET) and work well in scenarios where nodes are sparsely distributed, with low density and intermittent connections, and where an end-to-end infrastructure cannot be guaranteed. Therefore, DTNs are recommended for high-latency applications that can last from hours to days. The maritime scenario has mobility characteristics that favor a DTN network approach, but concern with data security is also a relevant aspect in such scenarios. Continuing previous work, which evaluated the performance of several DTN protocols (Epidemic, Spray and Wait, and Direct Delivery) in three warship scenarios and proposed the application of discriminant analysis as a classification technique for secure connections in the Epidemic protocol, the current article proposes a new analysis of the directional discriminant function with opening angles smaller than 90 degrees, demonstrating that increased directivity leads the directional discriminant Epidemic protocol to select a greater number of secure connections.
Keywords: DTN, discriminant function, epidemic protocol, security, tactical messages, warship scenario
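The abstract does not spell out the discriminant function itself, so the following is only a rough sketch of the directional idea it describes: a connection is treated as acceptable only if the neighbour lies within a given opening angle of the node's heading. The function name and the acceptance rule are illustrative assumptions, not the authors' actual classifier.

```python
import math

def within_opening_angle(heading, neighbor_dir, opening_deg):
    # Angle between the node's heading and the direction to the neighbour,
    # via the dot product; accept if it falls inside half the opening angle.
    dot = heading[0] * neighbor_dir[0] + heading[1] * neighbor_dir[1]
    norm = math.hypot(*heading) * math.hypot(*neighbor_dir)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= opening_deg / 2
```

With a 90-degree opening, a neighbour 30 degrees off the heading would be accepted, while one 60 degrees off would not; narrowing the opening below 90 degrees, as in the study, makes the acceptance test stricter.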
Procedia PDF Downloads 193
1645 Recurrent Neural Networks with Deep Hierarchical Mixed Structures for Chinese Document Classification
Authors: Zhaoxin Luo, Michael Zhu
Abstract:
In natural languages, there are always complex semantic hierarchies. Obtaining feature representations based on these complex semantic hierarchies is the key to the success of the model. Several RNN models have recently been proposed that use latent indicators to obtain the hierarchical structure of documents. However, a model that only uses a single-layer latent indicator cannot capture the true hierarchical structure of the language, especially a complex language like Chinese. In this paper, we propose a deep layered model that stacks arbitrarily many RNN layers equipped with latent indicators. By using EM and training the model hierarchically, we solve the computational problem of stacking RNN layers and make it possible to stack arbitrarily many of them. Our deep hierarchical model not only achieves results comparable to large pre-trained models on the Chinese short-text classification problem but also achieves state-of-the-art results on the Chinese long-text classification problem.
Keywords: natural language processing, recurrent neural network, hierarchical structure, document classification, Chinese
Procedia PDF Downloads 70
1644 Accurate Energy Assessment Technique for Mine-Water District Heat Network
Authors: B. Philip, J. Littlewood, R. Radford, N. Evans, T. Whyman, D. P. Jones
Abstract:
UK buildings and energy infrastructures are heavily dependent on natural gas, a large proportion of which is used for domestic space heating. However, approximately half of the gas consumed in the UK is imported. Improving energy security and reducing carbon emissions are major government drivers for reducing gas dependency. In order to do so, there needs to be a wholesale shift in the energy provision to householders without impacting thermal comfort levels, convenience, or cost of supply to the end user. Heat pumps are seen as a potential alternative in modern, well-insulated homes; however, can the same be said of older homes? A large proportion of the housing stock in Britain was built prior to 1919. The age of the buildings bears testimony to the quality of construction; however, their thermal performance falls far below the minimum currently set by UK building standards. In recent years significant sums of money have been invested to improve energy efficiency and combat fuel poverty in some of the most deprived areas of Wales. Increasing the energy efficiency of older properties remains a significant challenge, which cannot be achieved through insulation and air-tightness interventions alone, particularly when alterations to historically important architectural features of the building are not permitted. This paper investigates the energy demand of pre-1919 dwellings in a former Welsh mining village, the feasibility of meeting that demand using water from the disused mine workings to supply a district heat network, and potential barriers to the success of the scheme. The use of renewable solar energy generation and storage technologies, both thermal and electrical, to reduce the load and offset increased electricity demand is considered. A holistic surveying approach to provide a more accurate assessment of total household heat demand is proposed.
Several surveying techniques, including condition surveys, air permeability tests, heat loss calculations, and thermography, were employed to provide a clear picture of energy demand. Additional insulation can bring unforeseen consequences which are detrimental to the fabric of the building, potentially leading to accelerated dilapidation of the asset being ‘protected’. Increasing ventilation should be considered in parallel, to compensate for the associated reduction in uncontrolled infiltration. The effectiveness of thermal performance improvements is demonstrated, and the detrimental effects of incorrect material choice and poor installation are highlighted. The findings show estimated heat demand to be in close correlation with household energy bills. Major areas of heat loss were identified so that improvements to building thermal performance could be targeted. The findings demonstrate that the use of heat pumps in older buildings is viable, provided sufficient improvement to thermal performance is possible. The addition of passive solar thermal and photovoltaic generation can help reduce the load and running cost for the householder. The results were used to predict future heat demand following energy efficiency improvements, thereby informing the size of heat pumps required.
Keywords: heat demand, heat pump, renewable energy, retrofit
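The heat loss calculations referred to above are, in their simplest form, fabric and ventilation loss sums; the sketch below shows that form with made-up U-values and areas (not data from the surveyed dwellings).

```python
def fabric_heat_loss(elements, delta_t):
    # Steady-state fabric loss in watts: Q = sum(U * A) * dT,
    # where each element is a (U-value in W/m2K, area in m2) pair.
    return sum(u * a for u, a in elements) * delta_t

def ventilation_heat_loss(volume_m3, ach, delta_t):
    # Ventilation loss using the standard 0.33 Wh/(m3*K) factor for air:
    # Q = 0.33 * air changes per hour * volume * dT.
    return 0.33 * ach * volume_m3 * delta_t
```

For example, 80 m² of solid wall at U = 2.1 plus 40 m² of roof at U = 1.9 over a 20 K temperature difference gives a fabric loss of about 4.88 kW, which illustrates why uninsulated pre-1919 fabric dominates the heat demand.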
Procedia PDF Downloads 95
1643 A Recognition Method of Ancient Yi Script Based on Deep Learning
Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma
Abstract:
The Yi are an ethnic group mainly living in mainland China, with spoken and written language systems of their own that developed over thousands of years. Ancient Yi is one of the six ancient languages in the world, which keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into an electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers performed the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with the highest-probability features were recognized. Tests show that the method achieves higher precision than the traditional CNN model for handwriting recognition of ancient Yi.
Keywords: recognition, CNN, Yi character, divergence
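The final recognition step described here (obtain a probability distribution at the softmax layer, then pick the character with the highest probability) can be sketched as follows; the logit values and character labels in the test are hypothetical placeholders, not outputs of the authors' network.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def recognize(logits, characters):
    # Return the character whose class received the highest probability.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return characters[best], probs[best]
```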
Procedia PDF Downloads 167
1642 A Comparative Study of k-NN and MLP-NN Classifiers Using GA-kNN Based Feature Selection Method for Wood Recognition System
Authors: Uswah Khairuddin, Rubiyah Yusof, Nenny Ruthfalydia Rosli
Abstract:
This paper presents a comparative study between k-Nearest Neighbour (k-NN) and Multi-Layer Perceptron Neural Network (MLP-NN) classifiers using a Genetic Algorithm (GA) as the feature selector for a wood recognition system. The features have been extracted from the images using the Grey Level Co-Occurrence Matrix (GLCM). GA-based feature selection is used mainly to ensure that the database used for training the wood species pattern classifier consists of only optimized features. The feature selection process aims at selecting only the most discriminating features of the wood species to reduce confusion for the pattern classifier. This feature selection approach maintains the ‘good’ features that maximize the inter-class distance and minimize the intra-class distance. A wrapper GA is used with the k-NN classifier as the fitness evaluator (GA-kNN). The results show that k-NN is the better choice of classifier because it uses a very simple distance calculation algorithm and classification tasks can be done in a short time with good classification accuracy.
Keywords: feature selection, genetic algorithm, optimization, wood recognition system
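The wrapper idea described above (a GA evolving feature subsets whose fitness is the accuracy of a k-NN classifier) can be sketched as follows. The toy data, the 1-NN fitness, and all parameter values are illustrative assumptions; the actual study used GLCM texture features of wood species.

```python
import random

# Toy labelled data: feature 0 separates the classes, feature 1 is noise.
DATA = [((0.1, 0.9), 0), ((0.2, 0.1), 0), ((0.15, 0.5), 0),
        ((0.8, 0.4), 1), ((0.9, 0.8), 1), ((0.85, 0.2), 1)]

def one_nn_accuracy(mask):
    # Leave-one-out 1-NN accuracy using only the features enabled in mask.
    if not any(mask):
        return 0.0
    correct = 0
    for i, (x, y) in enumerate(DATA):
        best = min((j for j in range(len(DATA)) if j != i),
                   key=lambda j: sum((x[k] - DATA[j][0][k]) ** 2
                                     for k in range(len(mask)) if mask[k]))
        correct += DATA[best][1] == y
    return correct / len(DATA)

def ga_select(n_features=2, pop_size=8, generations=10, seed=0):
    # Wrapper GA: evolve binary feature masks scored by the 1-NN classifier.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_nn_accuracy, reverse=True)
        elite = pop[:pop_size // 2]           # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features) if n_features > 1 else 0
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.2:            # bit-flip mutation
                k = rng.randrange(n_features)
                child[k] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=one_nn_accuracy)
```

On this toy data the GA should converge to a mask that keeps the discriminating feature 0.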
Procedia PDF Downloads 548
1641 Development and Analysis of Multigeneration System by Using Combined Solar and Geothermal Energy Resources
Authors: Muhammad Umar Khan, Mahesh Kumar, Faraz Neakakhtar
Abstract:
Although industrialization contributes to the economy of a country, it also increases pollution and the temperature of the environment. The world is now shifting towards green energy because the utilization of fossil fuels is resulting in global warming. So we need to develop systems that can operate on renewable energy resources and have low heat losses. A combined solar and geothermal multigeneration system can solve this issue. Rather than making the Rankine cycle purely solar-driven, solar heat is used to drive a vapour absorption cycle and is reheated using a geothermal reservoir to generate power. The results are obtained using Engineering Equation Solver software, where inputs are varied to optimize the energy and exergy efficiencies of the system. The cooling effect is 348.2 kW, while the net work output is 23.8 MW, with a resultant reduction in emissions of 105,553 tons of CO₂ per year. This eco-friendly multigeneration system is capable of eliminating the use of fossil fuels and increasing geothermal energy efficiency.
Keywords: cooling effect, eco-friendly, green energy, heat losses, multigeneration system, renewable energy, work output
Procedia PDF Downloads 267
1640 Rheological and Microstructural Characterization of Concentrated Emulsions Prepared by Fish Gelatin
Authors: Helen S. Joyner (Melito), Mohammad Anvari
Abstract:
Concentrated emulsions stabilized by proteins are systems of great importance in food, pharmaceutical, and cosmetic products. Controlling emulsion rheology is critical for ensuring desired properties during the formation, storage, and consumption of emulsion-based products. Studies on concentrated emulsions have focused on the rheology of monodispersed systems. However, emulsions used for industrial applications are polydispersed in nature, and this polydispersity is regarded as an important parameter that also governs the rheology of concentrated emulsions. Therefore, the objective of this study was to characterize the rheological (small and large deformation behaviors) and microstructural properties of concentrated emulsions which were not truly monodispersed, as usually encountered in food products such as margarines, mayonnaise, creams, and spreads. The concentrated emulsions were prepared at different concentrations of fish gelatin (FG; 0.2, 0.4, 0.8% w/v in the whole emulsion system), an oil-water ratio of 80-20 (w/w), a homogenization speed of 10000 rpm, and 25 °C. Confocal laser scanning microscopy (CLSM) was used to determine the microstructure of the emulsions. To prepare samples for CLSM analysis, FG solutions were stained with fluorescein isothiocyanate dye. Emulsion viscosity profiles were determined using shear rate sweeps (0.01 to 100 1/s). The linear viscoelastic regions (LVRs) of the emulsions were determined using strain sweeps (0.01 to 100% strain) for each sample. Frequency sweeps were performed in the LVR (0.1% strain) from 0.6 to 100 rad/s. Large amplitude oscillatory shear (LAOS) testing was conducted by collecting raw waveform data at 0.05, 1, 10, and 100% strain at four different frequencies (0.5, 1, 10, and 100 rad/s). All measurements were performed in triplicate at 25 °C. The CLSM results revealed that increased fish gelatin concentration resulted in more stable oil-in-water emulsions with homogeneous, finely dispersed oil droplets.
Furthermore, the protein concentration had a significant effect on emulsion rheological properties. Apparent viscosity and dynamic moduli at small deformations increased with increasing fish gelatin concentration. These results were related to increased inter-droplet network connections caused by increased fish gelatin adsorption at the surface of the oil droplets. Nevertheless, all samples showed shear-thinning and weak-gel behaviors over the shear rate and frequency sweeps, respectively. Lissajous plots, or plots of stress versus strain, and phase lag values were used to determine the nonlinear behavior of the emulsions in LAOS testing. Greater distortion in the elliptical shape of the plots, accompanied by higher phase lag values, was observed at large strains and frequencies in all samples, indicating increased nonlinear behavior. Shifts from elastic-dominated to viscous-dominated behavior were also observed. These shifts were attributed to damage to the sample microstructure (e.g., gel network disruption), which would lead to viscous-type behaviors such as permanent deformation and flow. Unlike the small-deformation results, the LAOS behavior of the concentrated emulsions was not dependent on fish gelatin concentration. Systems with different microstructures showed similar nonlinear viscoelastic behaviors. The results of this study provide valuable information that can be used to incorporate concentrated emulsions in emulsion-based food formulations.
Keywords: concentrated emulsion, fish gelatin, microstructure, rheology
Procedia PDF Downloads 276
1639 Prediction of Structural Response of Reinforced Concrete Buildings Using Artificial Intelligence
Authors: Juan Bojórquez, Henry E. Reyes, Edén Bojórquez, Alfredo Reyes-Salazar
Abstract:
This paper addresses the use of Artificial Intelligence to obtain the structural reliability of reinforced concrete buildings. For this purpose, artificial neural networks (ANN) are developed to predict seismic demand hazard curves. In order to have enough input-output data to train the ANN, a set of reinforced concrete buildings (low-, mid-, and high-rise) is designed, and then a probabilistic seismic hazard analysis is performed to obtain the seismic demand hazard curves. The results are then used as input-output data to train the ANN in a feedforward backpropagation model. The values of the seismic demand hazard curves predicted by the ANN are then compared with those obtained by conventional methods. Finally, it is concluded that the computational time is significantly lower and the predictions obtained from the ANN are accurate in comparison to the values obtained from the conventional methods.
Keywords: structural reliability, seismic design, machine learning, artificial neural network, probabilistic seismic hazard analysis, seismic demand hazard curves
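As a minimal sketch of the feedforward backpropagation training loop mentioned above (a one-input, one-output network with a single tanh hidden layer fitted to a toy target by stochastic gradient descent; a generic illustration, not the authors' seismic model):

```python
import math
import random

def train_tiny_mlp(samples, hidden=4, lr=0.05, epochs=1000, seed=1):
    # samples: list of (x, y) pairs. Returns MSE before and after training.
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]  # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]  # hidden -> output
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    def mse():
        return sum((forward(x)[1] - y) ** 2 for x, y in samples) / len(samples)

    before = mse()
    for _ in range(epochs):
        for x, y in samples:
            h, out = forward(x)
            err = out - y  # gradient of 0.5 * (out - y)^2 w.r.t. out
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err
    return before, mse()
```

Trained on samples of sin(x), the mean squared error should drop well below its initial value.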
Procedia PDF Downloads 197
1638 Malaria Parasite Detection Using Deep Learning Methods
Authors: Kaustubh Chakradeo, Michael Delves, Sofya Titarenko
Abstract:
Malaria is a serious disease which affects hundreds of millions of people around the world each year. If not treated in time, it can be fatal. Despite recent developments in malaria diagnostics, microscopy remains the most common method to detect malaria. Unfortunately, the accuracy of microscopic diagnostics is dependent on the skill of the microscopist, which limits the throughput of malaria diagnosis. With the development of Artificial Intelligence tools, and Deep Learning techniques in particular, it is possible to lower the cost while achieving an overall higher accuracy. In this paper, we present a VGG-based model and compare it with previously developed models for identifying infected cells. Our model surpasses most previously developed models on a range of accuracy metrics. The model has the advantage of being constructed from a relatively small number of layers, which reduces the required computing resources and computational time. Moreover, we test our model on two types of datasets and argue that the currently developed deep-learning-based methods cannot efficiently distinguish between infected and contaminated cells; a more precise study of suspicious regions is required.
Keywords: convolutional neural network, deep learning, malaria, thin blood smears
Procedia PDF Downloads 132
1637 Cytochrome B Marker Reveals Three Distinct Genetic Lineages of the Oriental Latrine Fly Chrysomya megacephala (Diptera: Calliphoridae) in Malaysia
Authors: Rajagopal Kavitha, Van Lun Low, Mohd Sofian-Azirun, Chee Dhang Chen, Mohd Yusof Farida Zuraina, Mohd Salleh Ahmad Firdaus, Navaratnam Shanti, Abdul Haiyee Zaibunnisa
Abstract:
This study investigated the hidden genetic lineages in the oriental latrine fly Chrysomya megacephala (Fabricius) across four states (i.e., Johore, Pahang, Perak, and Selangor) and a federal territory (i.e., Kuala Lumpur) in Malaysia using the Cytochrome b (Cyt b) genetic marker. The Cyt b phylogenetic tree and haplotype network revealed three distinct genetic lineages of Ch. megacephala. Lineage A, the basal clade, was restricted to flies that originated from Kuala Lumpur and Selangor, while Lineages B and C comprised flies from all studied populations. An overlap of the three genetically divergent groups of Ch. megacephala was observed. However, the flies from both the Kuala Lumpur and Selangor populations consisted of three different lineages, indicating that they are genetically more diverse than those from Pahang, Perak, and Johore.
Keywords: forensic entomology, Calliphoridae, mitochondrial DNA, cryptic lineage
Procedia PDF Downloads 514
1636 Enhanced Iceberg Information Dissemination for Public and Autonomous Maritime Use
Authors: Ronald Mraz, Gary C. Kessler, Ethan Gold, John G. Cline
Abstract:
The International Ice Patrol (IIP) continually monitors iceberg activity in the North Atlantic by direct observation using ships, aircraft, and satellite imagery. Daily reports detailing the navigational boundaries of icebergs have significantly reduced the risk of iceberg contact. What is currently lacking is a format that allows this data to be transmitted automatically and displayed as iceberg navigational boundaries in commercial navigation equipment. This paper describes the methodology and implementation of a system to format iceberg limit information for dissemination through existing radio network communications. This information then displays automatically on commercial navigation equipment. Additionally, the information is reformatted for Google Earth rendering of iceberg track line limits. Having iceberg limit information automatically available in standard navigation equipment will help support fully autonomous operation of sailing vessels.
Keywords: iceberg, iceberg risk, iceberg track lines, AIS messaging, International Ice Patrol, North American Ice Service, Google Earth, autonomous surface vessels
Procedia PDF Downloads 139
1635 A Textile-Based Scaffold for Skin Replacements
Authors: Tim Bolle, Franziska Kreimendahl, Thomas Gries, Stefan Jockenhoevel
Abstract:
The therapeutic treatment of extensive, deep wounds is limited. Autologous split-skin grafts are used as the so-called ‘gold standard’. The most common deficits are defects at the donor site, the risk of scarring, and the limited availability and quality of the autologous grafts. The aim of this project is a tissue-engineered dermal-epidermal skin replacement that overcomes the limitations of the gold standard. A key requirement for the development of such a three-dimensional implant is the formation of a functional capillary-like network inside the implant to ensure sufficient nutrient and gas supply. Tailored three-dimensional warp-knitted spacer fabrics are used to reinforce the mechanically weak fibrin gel-based scaffold and, further, to create a directed in vitro pre-vascularization along the parallel-oriented pile yarns within a co-culture. In this study, various three-dimensional warp-knitted spacer fabrics were developed in a factorial design to analyze the influence of machine parameters, such as the stitch density and the pattern of the fabric, on scaffold performance and, further, to determine suitable parameters for successful fibrin gel incorporation and physiological performance of the scaffold. The fabrics were manufactured on a Karl Mayer double-bar raschel machine DR 16 EEC/EAC. A fine machine gauge of E30 was used to ensure a high pile yarn density for sufficient nutrient, gas, and waste exchange. In order to ensure a high mechanical stability of the graft, the fabrics were made of biocompatible PVDF yarns. Key parameters such as pore size, porosity, and stress/strain behavior were investigated under standardized, controlled climate conditions. The influence of the input parameters on the mechanical and morphological properties, as well as the ability to incorporate fibrin gel into the spacer fabric, was analyzed.
Subsequently, the pile yarns of the spacer fabrics were colonized with Human Umbilical Vein Endothelial Cells (HUVEC) to analyze the ability of the fabric to further function as a guiding structure for directed vascularization. The cells were stained with DAPI and investigated using fluorescence microscopy. The analysis revealed that the stitch density and the binding pattern have a strong influence on both the mechanical and morphological properties of the fabric. As expected, the incorporation of the fibrin gel improved significantly with higher pore sizes and porosities, whereas the mechanical strength decreased. Furthermore, the colonization trials revealed a high cell distribution and density on the pile yarns of the spacer fabrics. For a tailored reinforcing structure, the minimum porosity and pore size that still ensure complete incorporation of the reinforcing structure into the fibrin gel matrix need to be evaluated. That will enable a mechanically stable dermal graft with a dense vascular network for a sufficient nutrient and oxygen supply of the cells. The results are promising for subsequent research in the field of reinforcing mechanically weak biological scaffolds and developing functional three-dimensional scaffolds with an oriented pre-vascularization.
Keywords: fibrin gel, skin replacement, spacer fabric, pre-vascularization
Procedia PDF Downloads 257
1634 Volunteering and Social Integration of Ex-Soviet Immigrants in Israel
Authors: Natalia Khvorostianov, Larissa Remennick
Abstract:
Recent immigrants seldom join the ranks of volunteers for various social causes. This gap reflects both material reasons (immigrants’ lower income and lack of free time) and cultural differences (value systems, religiosity, language barriers, attitudes towards the host society, etc.). Immigrants from the former socialist countries are particularly averse to organized forms of volunteering for a host of reasons rooted in their past, including memories of the false or forced forms of collectivism imposed by the state. In this qualitative study, based on 21 semi-structured interviews, we explored the perceptions and practices of volunteer work among FSU immigrants who participated in a volunteering project run by an Israeli NGO for the benefit of elderly ex-Soviet immigrants. Our goal was to understand the motivations of immigrant volunteers and the role of volunteering in the processes of their own social and economic integration in their adopted country, Israel. The results indicate that most volunteers chose causes targeting fellow immigrants, their resettlement, and well-being, and were motivated by the wish to build a co-ethnic support network and overcome marginalization in Israeli society. Other volunteers were driven by the need for self-actualization in the context of underemployment and occupational downgrading.
Keywords: FSU immigrants, integration, volunteering, participation, social capital
Procedia PDF Downloads 394
1633 Geoecological Problems of Karst Waters in Chiatura Municipality, Georgia
Authors: Liana Khandolishvili, Giorgi Dvalashvili
Abstract:
Karst waters play an important role in the water supply worldwide. Among them, the vauclusian springs in Chiatura municipality (Georgia) are used as drinking water and are irreplaceable for the local population. Accordingly, it is important to assess their geoecological condition and take care to maintain their sustainability. The aim of the paper is to identify the hazards of pollution of underground waters in the karst environment and to develop a scheme for their protection that takes into consideration both the hydrogeological characteristics and the role of humans. To achieve this goal, the EPIK method was selected, using which the epikarst zone of the study area was studied in detail, as well as the protective cover, infiltration conditions, and the degree of karst network development. The condition of the karst waters in Chiatura municipality was then assessed, their main pollutants were identified, and recommendations were prepared for their protection. The results of the study showed that the karst water pollution risk in Chiatura municipality is highest where karst-fissured layers are present and intensive extraction works are underway. The EPIK method is innovative in Georgia and was first applied to the example of the karst waters of Chiatura municipality.
Keywords: cave, EPIK method, pollution, karst waters, geology, geography, ecology
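In the EPIK method, the four attributes named above (Epikarst development E, Protective cover P, Infiltration conditions I, Karst network development K) are scored and combined into a protection index by a weighted sum. The sketch below uses weights and class thresholds that are illustrative assumptions; consult the published method for the actual values.

```python
def epik_protection_index(e, p, i, k, weights=(3, 1, 3, 2)):
    # Protection factor F = a*E + b*P + c*I + d*K; a lower F means
    # higher vulnerability in EPIK-style schemes. Weights are illustrative.
    a, b, c, d = weights
    return a * e + b * p + c * i + d * k

def vulnerability_class(f, thresholds=(19, 25)):
    # Map the index F to a coarse vulnerability class (illustrative cut-offs).
    if f <= thresholds[0]:
        return "high"
    if f <= thresholds[1]:
        return "moderate"
    return "low"
```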
Procedia PDF Downloads 95
1632 Effect of Ti+ Irradiation on the Photoluminescence of TiO2 Nanofibers
Authors: L. Chetibi, D. Hamana, T. O. Busko, M. P. Kulish, S. Achour
Abstract:
TiO2 nanostructures have attracted much attention due to their optical, dielectric, and photocatalytic properties, as well as applications including optical coatings, photocatalysis, and photoelectrochemical solar cells. This work aims to prepare TiO2 nanofibers (NFs) on a titanium substrate (Ti) by in situ oxidation of Ti foils in a mixed solution of concentrated H2O2 and NaOH, followed by proton exchange and calcination. Scanning electron microscopy (SEM) revealed an obvious network of TiO2 nanofibers. The photoluminescence (PL) spectra of these nanostructures revealed a broad, intense band in the visible light range with a reduced near-band-edge emission. The PL bands in the visible region result mainly from surface oxygen vacancies and other defects. After irradiation with Ti+ ions (irradiation energy E = 140 keV at a dose of 10¹³ ions/cm²), the intensity of the PL spectrum decreased as a consequence of the radiation treatment. The irradiation with Ti+ leads to a reduction of defects and the generation of non-radiative defects near the conduction band, as evidenced by the PL results. On the other hand, reducing the surface defects of TiO2 nanostructures may improve the photocatalytic and optoelectronic properties of this nanostructure.
Keywords: TiO2, nanofibers, photoluminescence, irradiation
Procedia PDF Downloads 246
1631 Comparison between Hardy-Cross Method and Water Software to Solve a Pipe Networking Design Problem for a Small Town
Authors: Ahmed Emad Ahmed, Zeyad Ahmed Hussein, Mohamed Salama Afifi, Ahmed Mohammed Eid
Abstract:
Water is of great importance in life. In order to deliver water from resources to users, many procedures must be carried out by water engineers. One of the main procedures for delivering water to a community is designing pressurized pipe networks. The main aim of this work is to calculate the water demand of a small town and then design a simple water network to distribute water resources throughout the town with the smallest losses. The literature review covers the main points related to water distribution. Moreover, the methodology introduces two approaches to solving the research problem: the iterative Hardy-Cross method and the Pipe Flow water software. The results present two main designs that satisfy the same research requirements. Finally, the researchers conclude that the use of water software provides more capabilities and options for water engineers.
Keywords: looping pipe networks, Hardy-Cross network accuracy, relative error of the Hardy-Cross method
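As a rough illustration of the Hardy-Cross iteration (a single-loop toy network with two parallel pipes and made-up resistance coefficients, not the town network designed in the paper):

```python
def hardy_cross_single_loop(resistances, flows, iterations=50):
    # One-loop Hardy-Cross: repeatedly apply the flow correction
    #   dQ = -sum(r * Q * |Q|) / sum(2 * r * |Q|)
    # to the signed loop flows (positive in the clockwise direction),
    # where the head loss in each pipe is h = r * Q * |Q|.
    q = list(flows)
    for _ in range(iterations):
        num = sum(r * qi * abs(qi) for r, qi in zip(resistances, q))
        den = sum(2 * r * abs(qi) for r, qi in zip(resistances, q))
        q = [qi - num / den for qi in q]
    return q
```

Two parallel pipes carry a total of 10 flow units between the same two nodes: pipe 1 (r = 1) is traversed clockwise and pipe 2 (r = 4) counterclockwise, starting from an even 5/5 split. The iteration converges to the split where the loop head losses balance (about 6.67 versus 3.33 units).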
Procedia PDF Downloads 169
1630 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field that has produced many implementations used for both research and practical purposes. Despite the popularity of implementations based on non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires the use of machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions, and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous, non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input.
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that combining a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
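The decoding accuracy metric used above, the correlation coefficient r between the decoded trace and the accelerometer readings, can be sketched in a few lines of Python. The signals below are illustrative toy data, not the study's recordings:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: a decoded trace that loosely follows the "accelerometer" signal.
accel = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
decoded = [0.1, 0.4, 0.9, 0.6, 0.1, -0.4, -0.8, -0.6]
print(round(pearson_r(accel, decoded), 3))  # ≈ 0.986
```

A value near 1 indicates that the decoded trajectory closely follows the measured one, which is how the r = 0.8 (deep network) versus r = 0.56 (Wiener-like filter) comparison should be read.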
Procedia PDF Downloads 178
1629 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach
Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert
Abstract:
Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition through a proposed switch from a centralized district heating system towards a setting based on distributed heat pumps. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is that existing networks are limited with regard to their installed capacities; additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, is that indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by a risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted, in which each energy technology is modeled in its own customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used to model the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology.
With the models set in place, different scenarios based on forecasted electricity market prices were developed, both for present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and the comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat, an indicator that enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios based on current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase up to 40%. Within the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps may be limited to 15%. In terms of the levelized cost of heat, a residential heat pump technology becomes competitive only in a scenario of decreasing electricity prices; in that case, the district heating system has an average cost of heat generation 7% higher than the distributed heat pumps option.
Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems
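The levelized cost of heat used for the techno-economic comparison is, in its simplest form, the ratio of discounted lifetime costs to discounted lifetime heat output. A minimal sketch follows; the input numbers and the simplifications (constant annual OPEX and heat output) are illustrative assumptions, not the study's data:

```python
def lcoh(capex, annual_opex, annual_heat_mwh, discount_rate, lifetime_years):
    """Levelized cost of heat: discounted lifetime costs over discounted heat output."""
    discounted_costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t for t in range(1, lifetime_years + 1))
    discounted_heat = sum(
        annual_heat_mwh / (1 + discount_rate) ** t for t in range(1, lifetime_years + 1))
    return discounted_costs / discounted_heat  # e.g. EUR per MWh of heat

# Hypothetical residential heat pump: 8000 EUR installed, 400 EUR/year to run,
# 12 MWh of heat delivered per year, 5% discount rate, 20-year lifetime.
print(round(lcoh(8000, 400, 12, 0.05, 20), 1))
```

Comparing this indicator across scenarios (heat pump versus district heating, under different electricity price trajectories) is what supports the 7% cost difference quoted above.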
Procedia PDF Downloads 151
1628 Survey on Fiber Optic Deployment for Telecommunications Operators in Ghana: Coverage Gap, Recommendations and Research Directions
Authors: Francis Padi, Solomon Nunoo, John Kojo Annan
Abstract:
The paper "Survey on Fiber Optic Deployment for Telecommunications Operators in Ghana: Coverage Gap, Recommendations and Research Directions" presents a comprehensive survey on the deployment of fiber optic networks for telecommunications operators in Ghana. It addresses the challenges encountered by operators using microwave transmission systems for backhauling traffic and emphasizes the advantages of deploying fiber optic networks. The study delves into the coverage gap, provides recommendations, and outlines research directions to enhance the telecommunications infrastructure in Ghana. Additionally, it evaluates next-generation optical access technologies and architectures tailored to operators' needs. The paper also investigates current technological solutions and regulatory, technical, and economical dimensions related to sharing mobile telecommunication networks in emerging countries. Overall, this paper offers valuable insights into fiber optic network deployment for telecommunications operators in Ghana and suggests strategies to meet the increasing demand for data and mobile applications.Keywords: survey on fiber optic deployment, coverage gap, recommendations, research directions
Procedia PDF Downloads 24
1627 Neural Network Approaches for Sea Surface Height Predictability Using Sea Surface Temperature
Authors: Luther Ollier, Sylvie Thiria, Anastase Charantonis, Carlos E. Mejia, Michel Crépon
Abstract:
Sea Surface Height Anomaly (SLA) is a signature of the sub-mesoscale dynamics of the upper ocean. Sea Surface Temperature (SST) is driven by these dynamics and can be used to improve the spatial interpolation of SLA fields. In this study, we focused on the temporal evolution of SLA fields and explored the capacity of deep learning (DL) methods to predict short-term SLA fields using SST fields. We used simulated daily SLA and SST data from the Mercator Global Analysis and Forecasting System, with a resolution of (1/12)° in the North Atlantic Ocean (26.5–44.42°N, 64.25–41.83°W), covering the period from 1993 to 2019. Using a slightly modified image-to-image convolutional DL architecture, we demonstrated that SST is a relevant variable for controlling the SLA prediction. With a learning process inspired by the teacher-forcing method, we managed to improve the five-day SLA forecast by using the SST fields as additional information. We obtained prediction errors of 12 cm at 5 days and 20 cm at 20 days for scales smaller than the mesoscale. Moreover, the information provided by the SST allows us to limit the SLA error to 16 cm at 20 days when learning the trajectory.
Keywords: deep learning, altimetry, sea surface temperature, forecast
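A per-field error of the kind quoted in centimetres above is commonly computed as a root-mean-square difference between the predicted and reference SLA grids. A minimal illustration follows; the tiny flattened fields are invented for demonstration, and this is not necessarily the exact metric the study used:

```python
import math

def rmse(predicted, reference):
    """Root-mean-square error between two flattened SLA fields (same units, e.g. cm)."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

pred = [10.0, 12.0, 8.0, 11.0]   # predicted SLA values, cm
ref = [11.0, 11.0, 9.0, 10.0]    # reference (simulated) SLA values, cm
print(rmse(pred, ref))  # 1.0
```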
Procedia PDF Downloads 91
1626 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans’ “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making.
In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at a random location, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach in order to motivate a machine learning approach based on reinforcement learning that would relax some of the current limiting assumptions.
Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights
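The core posterior update described above, posterior ∝ prior × likelihood over the list of candidate targets, can be sketched as follows. The target names and likelihood values are hypothetical placeholders, not the system's actual data:

```python
def update_posterior(prior, likelihoods):
    """One Bayesian update step: posterior ∝ prior × likelihood, renormalized."""
    unnorm = {t: prior[t] * likelihoods[t] for t in prior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

# Hypothetical three high-value targets with a uniform prior.
posterior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
# Each camera detection yields a likelihood per target, e.g. how consistent the
# observed road choice is with the shortest path toward that target.
for obs in [{"A": 0.7, "B": 0.2, "C": 0.1},
            {"A": 0.6, "B": 0.3, "C": 0.1}]:
    posterior = update_posterior(posterior, obs)
print(max(posterior, key=posterior.get))  # the currently most probable target: A
```

A soft intervention would, in this picture, be chosen so that the suspect's response makes the next round of likelihoods as discriminative as possible between the remaining candidates.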
Procedia PDF Downloads 117
1625 Optimizing Operation of Photovoltaic System Using Neural Network and Fuzzy Logic
Authors: N. Drir, L. Barazane, M. Loudini
Abstract:
It is well known that photovoltaic (PV) cells are an attractive source of energy. Abundant and ubiquitous, this source is one of the important renewable energy sources whose use has been increasing worldwide year by year. However, the voltage-power (V-P) characteristic curve of a PV generator (GPV) exhibits a maximum, called the maximum power point (MPP), which depends closely on the variation of atmospheric conditions and the rotation of the earth. Such output characteristics are nonlinear and change with variations in temperature and irradiation, so a controller known as a maximum power point tracker (MPPT) is needed to extract the maximum power at the terminals of the photovoltaic generator. In this context, the authors propose to model a photovoltaic system and to find an appropriate method for optimizing the operation of the PV generator using two intelligent controllers to track this point: the first based on artificial neural networks and the second on fuzzy logic. After the design and integration of each controller into the global process, their performances are examined and compared through a series of simulations. The results show that both controllers track the MPP well compared with other methods proposed to date.
Keywords: maximum power point tracking, neural networks, photovoltaic, P&O
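The P&O (perturb and observe) method listed in the keywords, the conventional baseline such intelligent controllers are compared against, can be sketched as a simple hill climb on the V-P curve. The quadratic curve and its 17 V maximum below are purely illustrative, not a real module characteristic:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
    """One P&O step: keep perturbing the operating voltage in the direction
    that increased the extracted power; reverse direction otherwise."""
    if p >= p_prev:
        direction = 1 if v >= v_prev else -1
    else:
        direction = -1 if v >= v_prev else 1
    return v + direction * step

# Toy P-V curve with its maximum power point at v = 17 V (illustrative only).
def pv_power(v):
    return max(0.0, 100 - (v - 17) ** 2)

v_prev, v = 12.0, 12.1
p_prev = pv_power(v_prev)
for _ in range(200):
    p = pv_power(v)
    v_new = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_new
print(round(v, 1))  # oscillates within one step of the MPP voltage
```

The characteristic drawback visible here, steady-state oscillation around the MPP with a fixed perturbation step, is one of the behaviors the neural-network and fuzzy controllers aim to improve on.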
Procedia PDF Downloads 340
1624 Distributed Multi-Agent Based Approach on Intelligent Transportation Network
Authors: Xiao Yihong, Yu Kexin, Burra Venkata Durga Kumar
Abstract:
With the accelerating process of urbanization, the problem of urban road congestion is becoming more and more serious, and intelligent transportation systems combining distributed computing and artificial intelligence have become a research hotspot. As the core development direction of the intelligent transportation system, the Cooperative Intelligent Transportation System (C-ITS) integrates advanced information and communication technologies and realizes the integration of humans, vehicles, roadside infrastructure, and other elements through a multi-agent distributed system. By analyzing the system architecture and technical characteristics of C-ITS, this paper proposes a distributed multi-agent C-ITS consisting of a Roadside Sub-system, a Vehicle Sub-system, and a Personal Sub-system. At the same time, we explore the scalability of the C-ITS and propose incorporating local rewards into the centralized-training decentralized-execution paradigm as a scalable value decomposition method. In addition, we suggest introducing blockchain to improve the security of the traffic information transmission process. The system is expected to improve vehicle capacity and traffic safety.
Keywords: distributed system, artificial intelligence, multi-agent, cooperative intelligent transportation system
Procedia PDF Downloads 215
1623 Developing an Accurate AI Algorithm for Histopathologic Cancer Detection
Authors: Leah Ning
Abstract:
This paper discusses the development of a machine learning algorithm that accurately detects metastatic breast cancer (cancer that has spread from its site of origin) in images taken from pathology scans of lymph node sections. An accurate artificial intelligence (AI) algorithm would help significantly in breast cancer diagnosis, since manual examination of lymph node scans is both tedious and oftentimes highly subjective. Using AI in the diagnosis process provides a more straightforward, reliable, and efficient method for medical professionals and would enable faster diagnosis and, therefore, more immediate treatment. The overall approach was to train a convolutional neural network (CNN) on a set of pathology scan data and use the trained model to classify whether a new scan is benign or malignant, outputting a 0 or a 1, respectively. The final model achieves 100% accuracy on the training set and over 70% on the test set. Achieving such accuracy with an AI model is significant in regard to medical pathology and cancer detection, and having AI as a tool capable of quick detection will considerably help medical professionals and patients suffering from cancer.
Keywords: breast cancer detection, AI, machine learning, algorithm
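A train/test partition of the labeled scans, like the one implied by the separate training and test accuracies above, can be sketched as follows. The file names, labels, and 70/30 ratio are placeholders, not the actual pathology dataset:

```python
import random

def split_dataset(items, train_frac=0.7, seed=0):
    """Shuffle a labeled dataset and split it into train/test subsets."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Placeholder labeled scans: 0 = benign, 1 = malignant.
labeled_scans = [(f"scan_{i}.png", i % 2) for i in range(300)]
train, test = split_dataset(labeled_scans)
print(len(train), len(test))  # 210 90
```

Keeping the test set strictly disjoint from the training set is what makes the 70%+ test accuracy a meaningful estimate of generalization, as opposed to the 100% training accuracy, which can reflect memorization.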
Procedia PDF Downloads 92
1622 3D Electromagnetic Mapping of the Signal Strength in Long Term Evolution Technology in the Livestock Department of ESPOCH
Authors: Cinthia Campoverde, Mateo Benavidez, Victor Arias, Milton Torres
Abstract:
This article focuses on the 3D electromagnetic mapping of the strength of the signal received by a mobile antenna within the open areas of the Department of Livestock of the Escuela Superior Politecnica de Chimborazo (ESPOCH), located in the city of Riobamba, Ecuador. The transmitting antenna belongs to the mobile telephone company "TUENTI" and is analyzed in the 2 GHz band, operating at a frequency of 1940 MHz using Long Term Evolution (LTE). Received signal strength data in the area were measured empirically using the "Network Cell Info" application. A total of 170 samples were collected, distributed over 19 concentric circles around the base station. Three campaigns were carried out at the same time, with similar traffic, and average values were obtained at each point, ranging from −65.33 dBm to −101.67 dBm. The two virtualization tools used were SketchUp and Unreal Engine. Finally, the virtualized environment was visualized in virtual reality using Oculus 3D glasses, with the power levels displayed according to power ranges.
Keywords: reception power, LTE technology, virtualization, virtual reality, power levels
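One detail worth noting when averaging repeated readings at each point: received powers in dBm are logarithmic, so they should be averaged in linear units (mW) rather than directly. A sketch of that conversion, with illustrative sample values within the reported range (the abstract does not state which averaging convention was used):

```python
import math

def mean_dbm(samples_dbm):
    """Average received powers correctly: convert dBm -> mW, average, convert back."""
    mw = [10 ** (p / 10) for p in samples_dbm]
    return 10 * math.log10(sum(mw) / len(mw))

readings = [-65.33, -80.0, -101.67]  # dBm, spanning the measured range
# The linear-domain mean is dominated by the strongest sample, so it sits well
# above the plain arithmetic mean of the dBm values (-82.33 dBm here).
print(round(mean_dbm(readings), 2))
```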
Procedia PDF Downloads 95
1621 A Web-Based Self-Learning Grammar for Spoken Language Understanding
Authors: S. Biondi, V. Catania, R. Di Natale, A. R. Intilisano, D. Panno
Abstract:
One of the major goals of Spoken Dialog Systems (SDS) is to understand what the user utters. In the SDS domain, the Spoken Language Understanding (SLU) module classifies user utterances by means of predefined conceptual knowledge, and is able to recognize only the meanings previously included in its knowledge base. Given the vastness of that knowledge, storing the information is a very expensive process, and updating and managing the knowledge base are time-consuming and error-prone because of the rapidly growing number of entities like proper nouns and domain-specific nouns. This paper proposes a solution to the problem of Named Entity Recognition (NER) applied to the SDS domain. The proposed solution attempts to automatically recognize the meaning associated with an utterance by using the PANKOW (Pattern-based Annotation through Knowledge On the Web) method at runtime. The method extracts information from the Web to extend the SLU knowledge module and reduces the development effort. In particular, the Google Search Engine is used to extract information from the Facebook social network.
Keywords: spoken dialog system, spoken language understanding, web semantic, name entity recognition
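The PANKOW idea, scoring candidate categories for an entity by counting web matches of lexico-syntactic patterns, can be sketched as follows. The patterns are simplified and the hit counts are stubbed; a real system would obtain them by querying a search engine:

```python
def classify_entity(entity, categories, hit_count):
    """PANKOW-style sketch: score each candidate category by pattern hit
    counts and return the best-scoring one."""
    patterns = ["{e} is a {c}", "the {c} {e}", "{e}, a {c}"]
    scores = {c: sum(hit_count(p.format(e=entity, c=c)) for p in patterns)
              for c in categories}
    return max(scores, key=scores.get), scores

# Stubbed hit counts standing in for a web search engine.
fake_counts = {
    "Rome is a city": 120, "the city Rome": 45, "Rome, a city": 60,
    "Rome is a river": 2, "the river Rome": 1, "Rome, a river": 0,
}
best, scores = classify_entity("Rome", ["city", "river"],
                               lambda q: fake_counts.get(q, 0))
print(best, scores["city"])  # city 225
```

Because the scores come from live web counts at runtime, the SLU module can categorize entities it has never stored, which is exactly how the approach avoids hand-maintaining the knowledge base.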
Procedia PDF Downloads 339
1620 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction
Authors: Mingxin Li, Liya Ni
Abstract:
Autonomous parking is a valuable feature applicable to many robotics applications such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot will be able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped only with a front camera, due to the camera view being limited by the robot’s height and the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with a front camera only was achieved in a four-step process. Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots taken from the view angle of a small-scale robot. The image dataset was divided into 70% for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of a detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and the center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the positions of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center when it became visible again to the robot.
The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. The left and right wheel speeds were then obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential-drive mobile robot. The robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform, with an upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking spaces and their boundary lines; and improvement of performance after the hardware/software integration is completed.
Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning
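The last step, converting the chassis center's commanded linear and angular velocities into left and right wheel speeds, is the standard inverse kinematics of a differential-drive robot. A sketch with hypothetical wheel and axle dimensions (the paper does not report the robot's actual measurements):

```python
def diff_drive_wheel_speeds(v, omega, wheel_radius, axle_length):
    """Differential-drive inverse kinematics: chassis linear velocity v (m/s)
    and angular velocity omega (rad/s) -> (left, right) wheel angular
    speeds (rad/s)."""
    w_left = (v - omega * axle_length / 2.0) / wheel_radius
    w_right = (v + omega * axle_length / 2.0) / wheel_radius
    return w_left, w_right

# Hypothetical dimensions: 3 cm wheel radius, 15 cm axle length.
print(diff_drive_wheel_speeds(0.2, 0.0, 0.03, 0.15))  # straight: wheels equal
print(diff_drive_wheel_speeds(0.0, 1.0, 0.03, 0.15))  # turn in place: opposite signs
```

The wheel angular speeds would then be scaled to the motor driver's command range (e.g. PWM duty cycle for the L298N) by the low-level controller.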
Procedia PDF Downloads 135
1619 Relocation of the Air Quality Monitoring Stations Network for Aburrá Valley Based on Local Climatic Zones
Authors: Carmen E. Zapata, José F. Jiménez, Mauricio Ramiréz, Natalia A. Cano
Abstract:
The majority of urban areas in Latin America face challenges associated with city planning and development, attributable to human, technical, and economic factors. The issues related to climate change cannot be ignored, because the city significantly modifies the natural landscape, transforming the radiation balance and heat content in urbanized areas. These modifications provoke changes in the temperature distribution known as the "heat island effect". This phenomenon creates the need to base urban planning on climatological patterns that ensure the city's sustainable functioning, including the particularities of climate variability. In the present study, the Local Climate Zones (LCZ) of the Metropolitan Area of the Aburrá Valley (Colombia) are identified with the objective of relocating the air quality monitoring stations, as a partial solution to the problem of obtaining air quality measurements that are representative at the local scale using instruments that measure at the microscale.
Keywords: air quality, monitoring, local climatic zones, valley, monitoring stations
Procedia PDF Downloads 274
1618 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts
Authors: William Michael Short
Abstract:
‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues, from the perspective of cognitive linguistics and cognitive anthropology, that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable.
But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen, the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to capture variations within technical vocabularies, rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis, they fail to capture this dimension of linguistic usage. The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.
Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics
Procedia PDF Downloads 133