Search results for: wireless sensor network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6193

2683 Mercaptopropionic Acid (MPA) Modifying Chitosan-Gold Nano Composite for γ-Aminobutyric Acid Analysis Using Raman Scattering

Authors: Bingjie Wang, Su-Yeon Kwon, Ik-Joong Kang

Abstract:

The goal of this work is to develop a sensor that can rapidly determine γ-aminobutyric acid (GABA) concentration using nanoparticles made of chitosan and gold. First, chitosan nanoparticles are formed by crosslinking chitosan with sodium tripolyphosphate (TPP); these particles are then coated with a gold shell. The size of the fabricated product was around 100 nm. The thiol end of mercaptopropionic acid (MPA) binds to the gold surface through a very strong S–Au bond, while the carboxyl group at the other end of the MPA readily captures GABA. GABA is the primary inhibitory neurotransmitter in the mammalian central nervous system and plays a significant role in reducing neuronal excitability throughout the nervous system. Surface-enhanced Raman scattering (SERS), which enhances Raman scattering from molecules adsorbed on rough metal surfaces or nanostructures, is used to detect changes in GABA concentration. The assembled system generated SERS with clear differences in Raman scattering intensity across the studied range of GABA concentrations, from which a calibration curve relating SERS intensity to GABA concentration was obtained. In this study, DLS, SEM, FT-IR, UV, and SERS were used to characterize the products.
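The calibration step described above, relating SERS intensity to GABA concentration, can be sketched with hypothetical numbers (the concentrations, intensities, and the linear form below are illustrative assumptions, not the paper's measured data):

```python
import numpy as np

# Hypothetical calibration data: GABA concentration (arbitrary units)
# versus SERS peak intensity (a.u.). Real values would come from the
# Raman measurements described in the abstract.
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
intensity = np.array([120.0, 310.0, 540.0, 2110.0, 4080.0])

# Fit a linear calibration curve: intensity ~ slope * conc + intercept.
slope, intercept = np.polyfit(conc, intensity, 1)

def estimate_concentration(i):
    """Invert the calibration curve to estimate GABA concentration
    from a measured SERS intensity."""
    return (i - intercept) / slope

print(round(estimate_concentration(2110.0), 1))
```

Once fitted, the inverted curve converts any measured SERS intensity in the calibrated range into an estimated GABA concentration.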

Keywords: mercaptopropionic acid, chitosan-gold nanoshell, γ-aminobutyric acid, surface-enhanced Raman scattering

Procedia PDF Downloads 277
2682 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming increasingly prevalent in technology and linguistics. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently to describe females than males in history and literature. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of these linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. It also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French and Spanish have not only rigid gendered grammar rules but also historically patriarchal societies.
The progression of society comes hand in hand not only with its language but with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
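A minimal illustration of the kind of corpus analysis described is counting which descriptive adjectives co-occur with gendered pronouns (the toy corpus and adjective list below are invented for the sketch; a real study would use large literary corpora, a POS tagger, and a curated descriptor lexicon):

```python
from collections import Counter
import re

# Toy corpus; a real analysis would ingest millions of books.
corpus = (
    "She was hysterical and shrill. He was brilliant and decisive. "
    "She was lovely. He was ambitious. She was emotional."
)

# Hypothetical adjective list; a real study would use a curated lexicon.
target_adjectives = {"hysterical", "shrill", "lovely", "emotional",
                     "brilliant", "decisive", "ambitious"}

counts = {"she": Counter(), "he": Counter()}
for sentence in re.split(r"\.\s*", corpus.lower()):
    words = sentence.split()
    if not words:
        continue
    pronoun = words[0]
    if pronoun in counts:
        counts[pronoun].update(w for w in words if w in target_adjectives)

print(sum(counts["she"].values()), sum(counts["he"].values()))
```

Comparing the resulting frequency distributions per pronoun is the simplest version of the bias measurement the paper scales up with neural models.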

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 123
2681 Development of an Improved Paradigm for the Tourism Sector in the Department of Huila, Colombia: A Theoretical and Empirical Approach

Authors: Laura N. Bolivar T.

Abstract:

The importance of tourism for regional development is mainly highlighted by the collaborative, cooperative, and competitive relationships of the agents involved. The fostering of associativity processes, and in particular the cluster approach, emphasizes the beneficial outcomes of the concentration of enterprises, where innovation and entrepreneurship flourish and shape the dynamics of tourism empowerment. The department of Huila is located in the south-west of Colombia and is the country's largest coffee producer, although it contributes little to the national GDP. Hence, its economic development strategy is looking for more dynamism, and Huila could be consolidated as a leading destination for cultural, ecological, and heritage tourism if, at a minimum, the public policy-making processes for the tourism management of the Tatacoa Desert, San Agustín Park, and Bambuco’s National Festival were implemented more efficiently. Accordingly, this study addresses the potential restrictions on, and beneficial factors for, the consolidation of the tourism sector of Huila, Colombia as a cluster, and how this could impact its regional development. A set of theoretical frameworks, such as the Tourism Routes Approach, the Tourism Breeding Environment, and the Community-Based Tourism Method, together with a collection of international experiences describing tourism clustering processes and their most salient problems, is analyzed to draw up learning points, procedural structures, and success-driven factors to be contrasted with the local characteristics of Huila, the region under study.
This characterization involves primary and secondary information collection methods and covers the South American and Colombian contexts, together with the identification of the actors involved and their roles, the main interactions among them, the major tourism products and their infrastructure, visitors’ perspectives on the situation, and a recap of the related needs and benefits for the host community. From the umbrella concepts, the theoretical and empirical approaches, and their comparison with the local specificities of the tourism sector in Huila, an array of shortcomings is analytically constructed, and a series of guidelines is proposed to overcome them and, simultaneously, raise economic development and positively impact Huila’s well-being. This non-exhaustive bundle of guidelines focuses on fostering cooperative linkages in the actors’ network, adopting innovations in Information and Communication Technologies, reinforcing the supporting infrastructure, promoting destinations including lesser-known places, designing an information system that enables the tourism network to assess the situation based on reliable data, increasing competitiveness, developing participative public policy-making processes, and raising the host community’s awareness of its touristic richness. Under these conditions, cluster dynamics would drive the tourism sector toward articulation and joint effort, and the agents involved and local particularities would be adequately assisted to cope with the current changing environment of globalization and competition.

Keywords: innovative strategy, local development, network of tourism actors, tourism cluster

Procedia PDF Downloads 142
2680 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

A brain-computer interface (BCI) system converts a person's movement intentions into commands for action using brain signals such as the electroencephalogram (EEG). When left- or right-hand movements are imagined, distinct patterns of brain activity appear, which can be employed as BCI control signals. To improve BCI systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on EEG are greatly needed. EEG signals are subject-dependent and non-stationary, so they must be processed effectively before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising. An analysis-of-variance (ANOVA) method is then used to select the more appropriate and informative channels from a large set of candidates. After ordering the channels by their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with several machine learning and neural network classifiers, namely k-nearest neighbors, probabilistic neural network, support vector machine (SVM), extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrate that the SVM classifier achieved the greatest classification precision, 97%, compared to the other approaches.
These findings confirm that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses existing methods.
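The ANOVA-based channel-ranking step can be sketched on synthetic data (the feature model and class shifts below are assumptions for illustration; the paper works with real MI-EEG recordings, and the subsequent sequential forward selection and classifiers are omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic MI-EEG feature data: trials x channels (e.g. band power per
# channel). Channels 0 and 1 are made discriminative between the two
# imagery classes; the rest are pure noise.
n_trials, n_channels = 100, 8
X = rng.normal(size=(n_trials, n_channels))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, 0] += 2.0
X[y == 1, 1] += 1.5

def anova_f(x, labels):
    """One-way ANOVA F statistic for one channel's feature across classes."""
    groups = [x[labels == c] for c in np.unique(labels)]
    grand = x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(x) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

scores = np.array([anova_f(X[:, ch], y) for ch in range(n_channels)])
ranking = np.argsort(scores)[::-1]  # channels ordered by discriminability
print(ranking[:2])
```

Sequential forward selection would then add channels from the top of this ranking one at a time, keeping each only if cross-validated classification accuracy improves.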

Keywords: brain-computer interface, channel selection, motor imagery, support vector machine

Procedia PDF Downloads 52
2679 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach

Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on identifying forest fire-affected areas in a small part of the West Himalayan region using the differenced normalized burn ratio (dNBR) method and spectral unmixing methods. The study area has rugged terrain with sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. Fires in this region are mainly anthropogenic: human-induced fires are set to obtain fresh leaves, to scare wild animals away from agricultural crops, during grazing within reserved forests, and for cooking and other purposes. The fires caused in these ways affect large areas on the ground, necessitating precise estimation for further management and policy making. In the present study, two approaches have been used for the burnt area analysis. The first uses the dNBR index, computed from burn ratio values generated from the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of Sentinel-2 imagery. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. The dNBR is found to produce good results in fire-affected areas with a homogeneous forest stratum and slopes below 5 degrees. However, in rugged terrain where the landscape is strongly influenced by topographical variation, vegetation types, and tree density, the results may be heavily affected by topography, complexity in tree composition, fuel load composition, and soil moisture.
Such variation in the factors influencing burnt area assessment may not be handled effectively by the dNBR approach that is commonly followed for burnt area assessment over large areas. Hence, the second approach attempted in the present study uses a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach on Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, and outputs are produced using machine learning tools. Analysis of the results indicates that fire-affected regions and their severity can be better estimated using spectral unmixing methods, which can resolve noise in the data and classify each individual pixel into the precise burnt/unburnt class.
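The dNBR computation follows directly from the NIR and SWIR bands; a minimal sketch with made-up reflectance values (the tile values are invented, and the ~0.27 severity threshold is an approximate convention, not the study's calibrated value):

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance bands."""
    return (nir - swir) / (nir + swir + 1e-10)

# Hypothetical 2x2 reflectance tiles, pre- and post-fire
# (for Sentinel-2, e.g. band 8A as NIR and band 12 as SWIR).
nir_pre = np.array([[0.45, 0.50], [0.48, 0.46]])
swir_pre = np.array([[0.15, 0.18], [0.16, 0.17]])
nir_post = np.array([[0.20, 0.49], [0.21, 0.45]])
swir_post = np.array([[0.30, 0.19], [0.28, 0.18]])

# dNBR = pre-fire NBR minus post-fire NBR; larger values indicate burn.
dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
burnt = dnbr > 0.27  # approximate moderate-severity threshold
print(burnt)
```

Healthy vegetation has high NIR and low SWIR reflectance, so burning drives NBR down and dNBR up, which is why thresholding dNBR separates burnt from unburnt pixels.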

Keywords: categorical data, log linear modeling, neural network, shifting cultivation

Procedia PDF Downloads 56
2678 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. To circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, this can yield a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNet. The graph incorporates information on the numbers, types, and properties of atoms; the types of bonds; and the bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn its high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work; it comprises 1000 simulations of quinone molecules (up to 24 atoms) at five different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO levels.
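The Δ-ML idea of summing a low-fidelity result and a learned correction can be sketched on a toy problem (the analytic "fidelities" and the polynomial regressor below stand in for the quantum-chemistry calculations and the GCN; they are illustrative assumptions only):

```python
import numpy as np

# Toy surrogate for the Delta-ML idea: a cheap "low-fidelity" model plus
# a learned correction approximates an expensive "high-fidelity" property.
def low_fidelity(x):   # stand-in for e.g. a Hartree-Fock-level result
    return np.sin(x)

def high_fidelity(x):  # stand-in for e.g. a coupled-cluster result
    return np.sin(x) + 0.3 * x**2

# Few high-fidelity samples suffice: only the *difference* is learned.
x_train = np.linspace(-1.0, 1.0, 8)
delta = high_fidelity(x_train) - low_fidelity(x_train)

# Fit the correction with a simple polynomial (the GCN plays this role
# in the paper, operating on molecular graphs rather than scalars).
coeffs = np.polyfit(x_train, delta, 2)

x_test = 0.5
prediction = low_fidelity(x_test) + np.polyval(coeffs, x_test)
print(round(prediction, 4))
```

Because the correction is smoother and smaller in magnitude than the full property, it can be learned from far fewer expensive samples than a direct high-fidelity model would need.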

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 43
2677 Remotely Sensed Data Fusion to Extract Vegetation Cover in the Cultural Park of Tassili, South of Algeria

Authors: Y. Fekir, K. Mederbal, M. A. Hammadouche, D. Anteur

Abstract:

The cultural park of the Tassili, occupying a large area of Algeria, is characterized by a rich vegetative biodiversity to be preserved and managed in both time and space. Managing such a large area as the Tassili is complex and requires large amounts of data, most of which are spatially localized (DEM, satellite images, socio-economic information, etc.), making conventional and traditional methods quite difficult to apply. Remote sensing, given its efficiency in environmental applications, has become an indispensable solution for this kind of study. Multispectral imaging sensors have proved very useful over the last decade in remote sensing applications, aiding in several domains such as the detection and identification of diverse surface targets, topographical details, and geological features. In this work, we extract vegetated areas using fusion techniques applied to data acquired by the sensor on board the Earth Observing-1 (EO-1) satellite and by the Landsat ETM+ and TM sensors. We used images acquired over the Oasis of Djanet in the National Park of Tassili in the south of Algeria. Fusion techniques were applied to the resulting image to extract the vegetative fraction of the different land use classes. We compare the vegetation endmember extraction results with vegetation indices calculated from both Hyperion and other multispectral sensors.
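A vegetation index comparison of the kind mentioned can be sketched with NDVI (the reflectance values and the 0.3 threshold below are illustrative assumptions, not measurements from the Djanet scenes):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red bands."""
    return (nir - red) / (nir + red + 1e-10)

# Hypothetical reflectance values for two pixels (vegetated vs. bare
# soil), e.g. Landsat ETM+ band 4 (NIR) and band 3 (red).
nir = np.array([0.50, 0.25])
red = np.array([0.08, 0.22])

v = ndvi(nir, red)
vegetated = v > 0.3  # common (approximate) threshold for green vegetation
print(vegetated)
```

Index maps like this provide the per-pixel vegetation fraction baseline against which the fused EO-1/Landsat endmember extraction can be compared.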

Keywords: Landsat ETM+, EO1, data fusion, vegetation, Tassili, Algeria

Procedia PDF Downloads 435
2676 Breaking Sensitivity Barriers: Perovskite Based Gas Sensors With Dimethylacetamide-Dimethyl Sulfoxide Solvent Mixture Strategy

Authors: Endalamaw Ewnu Kassa, Ade Kurniawan, Ya-Fen Wu, Sajal Biring

Abstract:

Perovskite-based gas sensors represent a highly promising class of materials in gas sensing technology, with a particular focus on detecting ammonia (NH3) due to its potential hazards. Our work conducted a thorough comparison of various solvents, including dimethylformamide (DMF), DMF-dimethyl sulfoxide (DMSO), dimethylacetamide (DMAC), and DMAC-DMSO, for the preparation of the perovskite (MAPbI3) solution. Significantly, we achieved an exceptional response at 10 ppm of ammonia gas by employing a binary DMAC-DMSO solvent mixture. In contrast to prior reports that relied on single solvents for MAPbI3 precursor preparation, our mixed-solvent approach demonstrated a marked improvement in gas sensing performance. Through the careful selection and mixing of appropriate solvents, we attained enhanced surface coverage, fewer pinholes, and precise control over grain size in the perovskite films. This study demonstrates the promise of binary and multi-solvent mixture strategies for advancing gas sensor technology, opening up new opportunities for practical applications in environmental monitoring and industrial safety.

Keywords: sensors, binary solvents, ammonia, sensitivity, grain size, pinholes, surface coverage

Procedia PDF Downloads 110
2675 A Cellular-Based Structural Health Monitoring Device (HMD) Based on Cost-Effective 1-Axis Accelerometers

Authors: Chih-Hsing Lin, Wen-Ching Chen, Chih-Ting Kuo, Gang-Neng Sung, Chih-Chyau Yang, Chien-Ming Wu, Chun-Ming Huang

Abstract:

This paper proposes a cellular-based structural health monitoring device (HMD) for temporary bridge monitoring without requiring a power line or internet service. The proposed HMD includes a sensor node, a power module, a cellular gateway, and rechargeable batteries. The HMD focuses on short-term collection of civil infrastructure information. It achieves low cost by using three 1-axis accelerometers, with the data synchronization problem solved. Furthermore, instead of using a data acquisition system (DAQ), sensed data are transmitted to the host through the cellular gateway. Compared with a 3-axis accelerometer, our 1-axis-accelerometer-based device achieves 50.5% cost savings with a high sensitivity of 2000 mV/g. In addition, to fit different monitoring environments, the proposed system can easily be replaced and/or extended with different PCB boards, such as communication interfaces and sensors, to adapt to various applications. Therefore, with the proposed device, real-time diagnosis for civil infrastructure damage monitoring can be conducted effectively.

Keywords: cellular-based structural health monitoring, cost-effective 1-axis accelerometers, short-term monitoring, structural engineering

Procedia PDF Downloads 518
2674 Observer-Based Leader-Following Consensus of Nonlinear Fractional-Order Multi-Agent Systems

Authors: Ali Afaghi, Sehraneh Ghaemi

Abstract:

The coordination of multi-agent systems has been one of the interesting topics of recent years because of its potential applications in many branches of science and engineering, such as sensor networks, flocking, and underwater vehicles. Most related studies assume that the dynamics of the multi-agent system are integer-order and linear; multi-agent systems with fractional-order nonlinear dynamics are rarely considered. However, many phenomena in nature cannot be described by integer-order, linear models. This paper investigates the leader-following consensus problem for a class of nonlinear fractional-order multi-agent systems using observer-based cooperative control. In the system, the dynamics of each follower and of the leader are nonlinear. First, for a multi-agent system with a fixed directed topology, an observer-based consensus protocol is proposed based on the relative observer states of neighboring agents. Second, based on the stability theory of fractional-order systems, sufficient conditions are presented for the asymptotic stability of the observer-based fractional-order control systems. The proposed method is applied to a five-agent system with fractional-order nonlinear dynamics and unavailable states. The simulation example shows that the proposed scenario results in good performance and can be used in many practical applications.

Keywords: fractional-order multi-agent systems, leader-following consensus, nonlinear dynamics, directed graphs

Procedia PDF Downloads 399
2673 Artificial Neural Networks in Environmental Psychology: Application in Architectural Projects

Authors: Diego De Almeida Pereira, Diana Borchenko

Abstract:

Artificial neural networks are used for many applications as they are able to learn complex nonlinear relationships between input and output data. As the number of neurons and layers in a neural network increases, it is possible to represent more complex behaviors. The present study proposes that artificial neural networks are a valuable tool for architecture and engineering professionals concerned with understanding how buildings influence human and social well-being based on theories of environmental psychology.

Keywords: environmental psychology, architecture, neural networks, human and social well-being

Procedia PDF Downloads 499
2672 Diversity in the Community - The Disability Perspective

Authors: Sarah Reker, Christiane H. Kellner

Abstract:

From the perspective of people with disabilities, inequalities can also emerge from spatial segregation, a lack of social contacts, or limited economic resources. In order to reduce or even eliminate these disadvantages and increase general well-being, community-based participation, as well as decentralisation efforts within exclusively residential homes, is essential. The new research project “Index for participation development and quality of life for persons with disabilities” (TeLe-Index, 2014-2016), anchored at the Technische Universität München in Munich and at a large residential complex and service provider for persons with disabilities on the outskirts of Munich, therefore aims to assist the development of community-based living environments. People with disabilities should be able to participate in social life beyond the confines of the institution. Since a diverse society is one in which different individual needs and wishes can emerge and be catered to, the ultimate goal of the project is to create an environment for all citizens, regardless of disability, age or ethnic background, that accommodates their daily activities and requirements. The UN Convention on the Rights of Persons with Disabilities, which Germany has also ratified, postulates the necessity of user-centered design, especially when it comes to evaluating the individual needs and wishes of all citizens. A multidimensional approach is therefore required. Based on this insight, the structure of the town-like center will be remodeled to open up the community to all people. This strategy should lead to more equal opportunities and pave the way for a much more diverse community. Macro-level research questions were therefore inspired by quality of life theory and formulated as follows for different dimensions:
• The user dimension: what needs and necessities can we identify? Are needs person-related? Are there any options to choose from? What type of quality of life can we identify?
• The economic dimension: what resources (both material and staff-related) are available in the region? (How) are they used? What costs (can) arise and what effects do they entail?
• The environment dimension: what “environmental factors” such as access (mobility and absence of barriers) prove beneficial or impedimental?
In this context, we have provided academic supervision and support for three projects (the construction of a new school, inclusive housing for children and teenagers with disabilities, and the professionalization of employees with person-centered thinking). Since we cannot present all the issues of the umbrella project within the conference framework, we will focus on one project in more depth, namely “Outpatient Housing Options for Children and Teenagers with Disabilities”. The insights we have obtained so far enable us to present the intermediary results of our evaluation. The most central questions in this part of the research were the following:
• How have the existing network relations been designed?
• What meaning (or significance) do the existing service offers and structures have for the everyday life of an external residential group?
These questions underpinned the environmental analyses as well as the qualitative guided interviews and qualitative network analyses we carried out.

Keywords: decentralisation, environmental analyses, outpatient housing options for children and teenagers with disabilities, qualitative network analyses

Procedia PDF Downloads 366
2671 The Novelty of Mobile Money Solution to Ghana’s Cashless Future: Opportunities, Challenges and Way Forward

Authors: Julius Y Asamoah

Abstract:

Mobile money has seen rapid adoption over the past decade. Its emergence serves as an essential driver of financial inclusion and an innovative financial service delivery channel, especially for the unbanked population. The rising importance of mobile money services has caught the attention of policymakers and regulators, who seek to understand the many issues emerging in this context while unlocking the potential of this new technology. Regulatory responses and support are essential and require significant changes to current regulatory practices in Ghana. The article aims to answer the following research questions: “What risks does an unregulated mobile money service pose to consumers and the financial system?” and “What factors stimulate and hinder the introduction of mobile payments in developing countries?” The sample comprised 250 respondents selected from the study area. The study adopted an analytical approach combining qualitative and quantitative data collection methods. Actor-network theory (ANT) is used as an interpretive lens to analyse this process. ANT helps analyse how actors form alliances and enrol other actors, including non-human actors (i.e., technology), to secure their interests. The study revealed that government regulatory policies are critical to mobile money services in developing countries. The regulatory environment should balance the need to advance access to finance with the stability of the financial system, drawing extensively on Kenya's experience as a model of best practice for the system's players. Thus, regulators need to address issues related to the enhancement of supportive regulatory frameworks. It is recommended that the government involve various stakeholders, such as mobile phone operators.
Moreover, the national regulatory authority should create a regulatory environment that promotes fair practices and competition and raises revenues to support key pillars of a business-enabling environment, such as infrastructure.

Keywords: actor-network theory (ANT), cashless future, developing countries, Ghana, mobile money

Procedia PDF Downloads 139
2670 Virtual Science Hub: An Open Source Platform to Enrich Science Teaching

Authors: Enrique Barra, Aldo Gordillo, Juan Quemada

Abstract:

This paper presents the Virtual Science Hub platform, an open source platform that combines a social network, an e-learning authoring tool, a video conference service, and a learning object repository for the enrichment of science teaching. These four main functionalities fit very well together. The platform was released in April 2012 and has not stopped growing since. Finally, we present the results of the surveys conducted and the statistics gathered to validate this approach.

Keywords: e-learning, platform, authoring tool, science teaching, educational sciences

Procedia PDF Downloads 398
2669 Academic Staff’s Perception and Willingness to Participate in Collaborative Research: Implication for Development in Sub-Saharan Africa

Authors: Ademola Ibukunolu Atanda

Abstract:

Research undertakings are meant to proffer solutions to issues and challenges in society. This justifies the need for research in the ivory towers. Multinational and non-governmental organisations, as well as foundations, commit financial resources to support research endeavours. In recent times, the direction of research increasingly encourages collaboration, whereby experts from different disciplines or specializations bring their expertise to bear on an identified problem, whether in the humanities or the sciences. However, the extent to which collaborative research undertakings are perceived and embraced by academic staff determines the impact collaborative research has on society. To this end, this study investigated academic staff's perception of, and willingness to be involved in, collaborative research for the purpose of proffering solutions to societal problems. The study adopted a descriptive research design. The population comprised academic staff in southern Nigeria, sampled through a convenience sampling technique. Data were collected using a questionnaire titled “Perception and Willingness to Participate in Collaborative Research Questionnaire (PWPCRQ)” administered via Google Forms, and analyzed using descriptive statistics of simple percentages, means, and charts. The findings showed that academic staff's readiness to participate in collaborative research is great (89%) and that they participate in collaborative research very often (51%). Academic staff collaborated more with colleagues within their own universities (1.98) than in interdisciplinary collaboration with colleagues outside Nigeria (1.47). Collaborative research was perceived to impact development (2.5).
Collaborative research offers members the following benefits: aggregation of views, the building of an extensive network of contacts, enhanced sharing of skills, easier tackling of complex problems, increased visibility of the research network and citations, and improved funding opportunities. The study concluded that academic staff in universities in the South-West of Nigeria participate in collaborative research, but with colleagues within Nigeria rather than outside the country. Based on the findings, it was recommended that the management of universities in South-West Nigeria encourage collaborative research with incentives.

Keywords: collaboration, research, development, participation

Procedia PDF Downloads 65
2668 Object Negotiation Mechanism for an Intelligent Environment Using Event Agents

Authors: Chiung-Hui Chen

Abstract:

With advancements in science and technology, the concept of the Internet of Things (IoT) has gradually developed. The development of the intelligent environment adds intelligence to objects in the living space by using the IoT. In a smart environment shared by multiple users, different service requirements from different users can arise, leaving the context-aware system with conflicting choices when deciding which services to provide. The purpose of establishing a communication and negotiation mechanism among objects in the intelligent environment is therefore to resolve these service conflicts among users. This study proposes a decision-making methodology that uses "Event Agents" as its core. When the sensor system receives information, it evaluates a user's current events and conditions; analyses object, location, time, and environmental information; calculates the priority of each object; and provides the user services based on the event. Moreover, when events are not single but overlap, conflicts arise. This study adopts a "Multiple Events Correlation Matrix" to calculate the degree values of incidents and the support values for each object. The matrix uses these values as the basis for inferring system services, and to further determine the appropriate service when there is a conflict.
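A minimal sketch of the conflict-resolution idea behind the "Multiple Events Correlation Matrix": each overlapping event is scored by summing its correlation (degree) values against the other active events, and the highest-scoring event's service wins. The event names and matrix values here are invented for illustration; the abstract does not publish its actual figures.

```python
# Hedged sketch of conflict resolution via a "Multiple Events
# Correlation Matrix"; events and values are hypothetical.
def resolve_conflict(events, correlation):
    """Score each active event by summing its correlation (degree)
    values against every other active event; the top score wins."""
    scores = {e: sum(correlation[e][o] for o in events if o != e)
              for e in events}
    return max(scores, key=scores.get), scores

# Three overlapping events from users sharing the living space.
correlation = {
    "watch_tv": {"watch_tv": 1.0, "sleep": 0.1, "read": 0.4},
    "sleep":    {"watch_tv": 0.1, "sleep": 1.0, "read": 0.6},
    "read":     {"watch_tv": 0.4, "sleep": 0.6, "read": 1.0},
}
winner, scores = resolve_conflict(["watch_tv", "sleep", "read"], correlation)
print(winner)  # -> read
```

A real system would derive the matrix values from the sensed object, location, time, and environmental information rather than fixing them by hand.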

Keywords: internet of things, intelligent object, event agents, negotiation mechanism, degree of similarity

Procedia PDF Downloads 292
2667 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives

Authors: Chao Wang

Abstract:

Under the rapid innovation of information technology, the media play a very important role in the dissemination of information, and each generation faces a completely different media environment. The involvement of narrative images provides more possibilities for narrative text. "Images" are manufactured through the aperture, the camera shutter, and photosensitive development processes, then recorded and stamped on paper or displayed on a computer screen: concretely saved. They exist in different forms as files, data, or evidence of the ultimate look of events. Through the interface of media and network platforms, and the particular visual field of the viewer, a class of body space exists and extends outward, as thin as a sebum layer, extremely soft and delicate yet full of real tension. The physical space of the sebum layer confuses the fact that physical objects exist and needs to be established under a perceived consensus; as at the scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes an immersive "topical similarity", leading contemporary social practice communities, groups, and network users into a kind of illusion without presence, i.e., a non-real illusion. From the investigation and discussion of the literature, the variability characteristics of time in digital movie editing and production (for example, slicing, rupture, setting, and resetting) are analysed. The interactive eBook has a unique interactivity of "waiting-greeting" and "expectation-response" that gives the operation of the image narrative structure more interpretive functions. Works combining digital editing and interactive technology are further analysed in concept and results. Through movies, interactive art, and practical case discussion and analysis, it is shown that after the digitisation of interventional imaging and interactive technology, real events and media handling remain linked and cannot be severed.
Audiences need more rational thinking about the authenticity of the text carried by images.

Keywords: sebum layer, topical and similarity, interactive technology, image narrative

Procedia PDF Downloads 390
2666 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN, conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than a grid search, a metaheuristic search based on genetic algorithms is applied to evolve the network, achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the CelebA training database, further novel test cases are supplied to the network for evaluation: witness reports detailing criminals, obtained from Interpol and other law enforcement agencies, are presented to the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of the criminal in order to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information.
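Of the two evaluation factors, PSNR is simple enough to sketch directly from the mean squared error (SSIM is more involved; library implementations exist, e.g. in scikit-image). The four-pixel "images" below are toy stand-ins for real generated and ground-truth faces.

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-size images,
    given as flat sequences of intensities in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

clean = [100, 120, 130, 140]
noisy = [101, 119, 131, 139]         # off by one everywhere -> MSE = 1
print(round(psnr(clean, noisy), 2))  # -> 48.13
```

Higher values indicate a generated image closer to the ground truth; identical images give an infinite PSNR.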

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 164
2665 Computational Team Dynamics in Student New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

Teamwork is an extremely effective pedagogical tool in engineering education. New Product Development (NPD) has been an effective strategy for companies to streamline and bring innovative products and solutions to customers. Thus, engineering curricula in many schools, some collaboratively with business schools, have brought NPD into the curriculum at the graduate level. Teamwork is invariably used during instruction, where students work in teams to come up with new products and solutions, and a significant share of the grade rests on the semester-long teamwork so that it is taken seriously by students. As the students work in teams through this process to develop new product prototypes, their effectiveness and learning depend to a great extent on how they function as a team, go through the creative process, come together, and work towards the common goal. A core attribute of a successful NPD team is its creativity and innovation. The team needs to be creative as a group, generating a breadth of ideas and innovative solutions that address the problem they are targeting and meet the users' needs. They also need to be very efficient in their teamwork as they work through the various stages of development, resulting in a POC (proof-of-concept) implementation or a prototype of the product. The simultaneous requirement that teams be creative and at the same time converge and work together imposes different types of tension on their interactions. These ideational tensions or conflicts, and sometimes relational tensions or conflicts, are inevitable; effective teams have to manage their team dynamics to remain resilient and yet creative.
This research paper provides a computational analysis of the teams' communication that is reflective of team dynamics and, through a superimposition of latent semantic analysis on social network analysis, provides a computational methodology for arriving at visual interaction patterns. These team interaction patterns correlate clearly with team dynamics and provide insights into the functioning, and thus the effectiveness, of the teams. 23 student NPD teams, over 2 years of a course on managing NPD with a blend of engineering and business school students, are considered, and the results are presented. The results are also correlated with the teams' detailed and tailored individual and group feedback and a self-reflection and evaluation questionnaire.
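The superimposition idea can be illustrated in miniature: measure semantic closeness between members' communications (here, plain cosine similarity on word counts as a stand-in for full latent semantic analysis) and keep an edge in the interaction network wherever similarity crosses a threshold. The names, messages, and the 0.5 threshold are all illustrative assumptions, not the course data.

```python
import math
from collections import Counter

# Illustrative member messages, not the course's actual transcripts.
messages = {
    "alice": "prototype user needs design design",
    "bob":   "prototype design user feedback",
    "carol": "budget schedule budget",
}

def cosine(c1, c2):
    """Cosine similarity between two word-count vectors."""
    dot = sum(c1[t] * c2[t] for t in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2)

vecs = {m: Counter(text.split()) for m, text in messages.items()}
# Keep an interaction edge where two members' messages are close.
edges = [(a, b) for a in vecs for b in vecs
         if a < b and cosine(vecs[a], vecs[b]) > 0.5]
print(edges)  # -> [('alice', 'bob')]
```

The resulting edge list is exactly the input a social network analysis step would take, so semantic and structural views of the team sit on the same graph.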

Keywords: team dynamics, social network analysis, team interaction patterns, new product development teamwork, NPD teams

Procedia PDF Downloads 117
2664 The Canaanite Trade Network between the Shores of the Mediterranean Sea

Authors: Doaa El-Shereef

Abstract:

The Canaanite civilization was one of the early great civilizations of the Near East; the Canaanites influenced, and were influenced by, the civilizations of the ancient world, especially the Egyptian and Mesopotamian civilizations. Canaanite trade developed from the Chalcolithic Age to the Iron Age along the oldest trade route in the Middle East. This paper focuses on defining the Canaanites, where they came from, and the meaning of the term Canaan; it examines how the ancient manuscripts define the borders of the land of Canaan, and it describes the Canaanite trade route and their exported goods, such as cedar wood and pottery.

Keywords: archaeology, bronze age, Canaanite, colonies, Massilia, pottery, shipwreck, vineyards

Procedia PDF Downloads 202
2663 Roughness Discrimination Using Bioinspired Tactile Sensors

Authors: Zhengkun Yi

Abstract:

Surface texture discrimination using artificial tactile sensors has attracted increasing attention in the past decade, as it can endow technical and robot systems with a key missing ability. However, as a major component of texture, roughness has rarely been explored. This paper presents an approach for tactile surface roughness discrimination, which includes two parts: (1) design and fabrication of a bioinspired artificial fingertip, and (2) tactile signal processing for tactile surface roughness discrimination. The bioinspired fingertip is comprised of two polydimethylsiloxane (PDMS) layers, a polymethyl methacrylate (PMMA) bar, and two perpendicular polyvinylidene difluoride (PVDF) film sensors. This artificial fingertip mimics human fingertips in three aspects: (1) the elastic properties of the epidermis and dermis in human skin are replicated by the two PDMS layers with different stiffness, (2) the PMMA bar serves a role analogous to that of a bone, and (3) the PVDF film sensors emulate Meissner's corpuscles in terms of both location and response to vibratory stimuli. Various extracted features and classification algorithms, including support vector machines (SVM) and k-nearest neighbors (kNN), are examined for tactile surface roughness discrimination. Eight standard rough surfaces with roughness values (Ra) of 50 μm, 25 μm, 12.5 μm, 6.3 μm, 3.2 μm, 1.6 μm, 0.8 μm, and 0.4 μm are explored. The highest classification accuracy of (82.6 ± 10.8)% can be achieved using solely one PVDF film sensor with the kNN (k = 9) classifier and the standard deviation feature.
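The best-performing configuration reported above (a single PVDF signal, the standard deviation feature, and a kNN classifier) can be sketched in a few lines. The signals and k = 3 here are synthetic stand-ins, not the paper's recordings or its k = 9 setting.

```python
import statistics

# Synthetic vibration traces standing in for PVDF sensor recordings:
# rough surfaces produce larger oscillations than smooth ones.
train = [
    ([0.1, -0.1, 0.2, -0.2], "smooth"),
    ([0.2, -0.15, 0.1, -0.2], "smooth"),
    ([1.0, -1.2, 0.9, -1.1], "rough"),
    ([1.1, -0.9, 1.2, -1.0], "rough"),
]

def classify(signal, k=3):
    """kNN on a single feature: the standard deviation of the signal."""
    feat = statistics.stdev(signal)
    dists = sorted((abs(statistics.stdev(s) - feat), label)
                   for s, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

print(classify([0.9, -1.0, 1.0, -1.1]))  # -> rough
```

The standard deviation compresses each trace into one number that tracks vibration amplitude, which is why it discriminates roughness so well.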

Keywords: bioinspired fingertip, classifier, feature extraction, roughness discrimination

Procedia PDF Downloads 314
2662 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to the analysis of both their own stock and that of rival companies. Linking the world of finance and artificial intelligence (AI), especially for the stock market, has been a relatively recent development. Predicting how stocks will do, considering all external factors and previous data, has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do; machine learning makes this task a lot easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among others. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing the testing-set accuracy of four different models (linear regression, neural network, decision tree, and naïve Bayes) across several stocks (Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson), the naïve Bayes and linear regression models worked best. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model.
The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions on the training data. This indicates that the decision tree model likely overfitted the training set, which explains its weaker performance on the testing set.
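The overfitting effect described above can be reproduced with any model that memorizes its training data. A 1-nearest-neighbour memorizer is used here as a stand-in, since the paper's decision tree and stock data are not available: it is perfect on the training set yet weaker on held-out points.

```python
# Synthetic (feature, label) pairs, not real market data: a pure
# memorizer scores perfectly on training data but drops on held-out
# data, the hallmark of overfitting.
train = [(1.0, "up"), (2.0, "up"), (3.0, "down"), (4.0, "down")]
test  = [(1.4, "up"), (2.6, "up"), (3.6, "down")]

def predict(x):
    """Return the label of the nearest memorized training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc  = sum(predict(x) == y for x, y in test) / len(test)
print(train_acc, round(test_acc, 2))  # -> 1.0 0.67
```

The gap between the two accuracies, not the training accuracy itself, is the signal that a model has memorized rather than generalized.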

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 97
2661 Prediction of Wind Speed by Artificial Neural Networks for Energy Application

Authors: S. Adjiri-Bailiche, S. M. Boudia, H. Daaou, S. Hadouche, A. Benzaoui

Abstract:

In this work, the variation of wind speed with altitude is modeled using artificial neural networks. Measured wind speed and direction, temperature, and humidity at 10 m are used as input data, with wind speed at 50 m above sea level as the target. Wind speeds predicted by the network at 50 m are compared with values extrapolated to 50 m. The results show that prediction by artificial neural networks is very accurate.
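The extrapolated values the network predictions are compared against typically come from the power-law wind profile mentioned in the keywords. A minimal sketch, assuming the common 1/7 shear exponent rather than a site-specific value:

```python
def extrapolate(v_ref, z_ref=10.0, z=50.0, alpha=1.0 / 7.0):
    """Power-law estimate of wind speed at height z (m) from a
    measurement v_ref (m/s) taken at height z_ref (m)."""
    return v_ref * (z / z_ref) ** alpha

v10 = 5.0  # illustrative 10 m measurement, m/s
print(round(extrapolate(v10), 2))  # -> 6.29
```

A trained network can improve on this baseline because it also sees direction, temperature, and humidity, whereas the power law uses the speed alone.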

Keywords: MATLAB, neural network, power law, vertical extrapolation, wind energy, wind speed

Procedia PDF Downloads 695
2660 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as the data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision centre (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC provides ad-hoc triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation are incomplete, heterogeneous, and composed mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features.
Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To that end, we compare the performance of randomly generated regression trees and support vector machines, and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
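The inverse-document-frequency weighting used to surface indicator lexemes can be sketched as follows; the triage notes are invented examples, not OOHC records.

```python
import math

# Invented triage notes standing in for OOHC free text.
notes = [
    "patient repeat caller chronic pain",
    "chronic pain medication request",
    "minor cut advice given",
    "travel vaccination advice",
]

def idf(term):
    """Inverse document frequency: rarer terms score higher and so
    discriminate cases better; unseen terms score 0 here."""
    df = sum(term in note.split() for note in notes)
    return math.log(len(notes) / df) if df else 0.0

print(round(idf("repeat"), 3), round(idf("advice"), 3))  # -> 1.386 0.693
```

Multiplying a score like this by the classifier's per-lexeme confidence is what lets common-but-uninformative terms drop out of the indicator list.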

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 132
2659 Arduino Pressure Sensor Cushion for Tracking and Improving Sitting Posture

Authors: Andrew Hwang

Abstract:

The average American worker sits for thirteen hours a day, often with poor posture and infrequent breaks, which can lead to health issues and back problems. The Smart Cushion was created to alert individuals to poor posture, and may alleviate back problems and help correct posture. The Smart Cushion is a portable, rectangular foam cushion with five strategically placed pressure sensors; it utilizes an Arduino Uno circuit board and specifically designed software, allowing it to collect data from the five pressure sensors and store the data on an SD card. The data is then compiled into graphs and compared to controlled postures. Before volunteers sat on the cushion, their levels of back pain were recorded on a scale of 1 to 10. Data was recorded for an hour of sitting, and then a new, corrected posture was suggested. After using the suggested posture for an hour, the volunteers described their level of discomfort on a scale of 1 to 10. Different patterns of sitting posture were generated that were able to serve as early warnings of potential back problems. By using the Smart Cushion, the areas where different volunteers applied the most pressure while sitting could be identified, and the sitting postures could be corrected. Further studies regarding the relationships between posture and specific regions of the body are necessary to better understand the origins of back pain; however, the Smart Cushion is sufficient for correcting sitting posture and preventing the development of additional back pain.
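The software side of such a cushion reduces to comparing a five-sensor reading against a balanced reference profile. A sketch, with sensor positions, reference pressures, and the 25% tolerance all assumed for illustration:

```python
# Reference profile for a balanced sitting posture (assumed values,
# not the Smart Cushion's actual calibration).
REFERENCE = {"left": 200, "right": 200, "back": 150,
             "front_l": 100, "front_r": 100}

def posture_alerts(reading, tolerance=0.25):
    """Flag sensors deviating from the reference by more than 25%."""
    return [pos for pos, ref in REFERENCE.items()
            if abs(reading[pos] - ref) / ref > tolerance]

# A lean to the left shifts load off the right hip onto the right thigh.
slouched = {"left": 210, "right": 60, "back": 150,
            "front_l": 105, "front_r": 230}
print(posture_alerts(slouched))  # -> ['right', 'front_r']
```

On the device itself the same comparison would run over analog readings in the Arduino sketch, with the raw samples logged to the SD card for the offline graphs.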

Keywords: Arduino Sketch Algorithm, biomedical technology, pressure sensors, Smart Cushion

Procedia PDF Downloads 136
2658 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea was floated of delivering computing services as a commodity, like other utilities such as electricity and telephony, the scientific fraternity has directed its research towards a new area called utility computing. New paradigms such as cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing, and demand response features. However, security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers and intruders still manage to bypass its security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 320
2657 Thermodynamic Attainable Region for Direct Synthesis of Dimethyl Ether from Synthesis Gas

Authors: Thulane Paepae, Tumisang Seodigeng

Abstract:

This paper demonstrates a method of synthesizing process flowsheets using a graphical tool called the GH-plot and, in particular, looks at how it can be used to compare the reactions of a combined simultaneous process with regard to their thermodynamics. The technique uses fundamental thermodynamic principles to allow the mass, energy, and work balances to locate the attainable region for chemical processes in a reactor. This provides guidance on which design decisions would be best suited to developing new processes that are more effective and make lower demands on raw material and energy usage.
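The quantity on a GH-plot's vertical axis follows from the basic relation ΔG = ΔH - TΔS, so a process point can be located once the enthalpy and entropy changes are known. A sketch with placeholder values, not measured data for DME synthesis:

```python
def gibbs(dH, dS, T):
    """ΔG (kJ/mol) from ΔH (kJ/mol), ΔS (kJ/(mol·K)) and T (K)."""
    return dH - T * dS

# Hypothetical exothermic, entropy-reducing reaction: ΔG climbs with
# temperature, shifting the process point on the GH-plot.
dH, dS = -100.0, -0.25
for T in (300, 400, 500):
    print(T, gibbs(dH, dS, T))  # ΔG: -25.0, then 0.0, then 25.0
```

Plotting each candidate reaction's (ΔH, ΔG) pair at the operating temperature is what allows the combined process's reactions to be compared on one diagram.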

Keywords: attainable regions, dimethyl ether, optimal reaction network, GH Space

Procedia PDF Downloads 241
2656 A Survey of Domain Name System Tunneling Attacks: Detection and Prevention

Authors: Lawrence Williams

Abstract:

As the mechanism that converts domain names to internet protocol (IP) addresses, the Domain Name System (DNS) is an essential part of internet usage. It was not designed with security in mind and can be subject to attacks. DNS attacks have become more frequent and sophisticated, and the need to detect and prevent them becomes more important for the modern network. DNS tunneling attacks are one type of attack, used primarily for distributed denial-of-service (DDoS) attacks and data exfiltration. Different techniques to detect and prevent DNS tunneling attacks are discussed, covering the methods, models, experiments, and data for each technique. A proposal regarding their feasibility is made, and future research on these topics is proposed.
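One widely used detection heuristic from this literature can be sketched briefly: DNS labels carrying encoded data look random, so their Shannon entropy is high compared with ordinary hostnames. The 3.5-bit threshold and the query names below are illustrative assumptions.

```python
import math

def entropy(label):
    """Shannon entropy (bits per character) of a DNS label."""
    n = len(label)
    probs = [label.count(c) / n for c in set(label)]
    return -sum(p * math.log2(p) for p in probs)

def suspicious(qname, threshold=3.5):
    """Flag a query whose first label looks like encoded data."""
    return entropy(qname.split(".")[0]) > threshold

print(suspicious("mail.example.com"))                          # -> False
print(suspicious("aGV4ZHVtcDEyMzQ1Njc4OTBxd2VydHk.evil.com"))  # -> True
```

Production detectors combine this with other features (query rate, label length, record types) because short random labels and long legitimate ones can both fool an entropy test alone.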

Keywords: DNS, tunneling, exfiltration, botnet

Procedia PDF Downloads 76
2655 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms

Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna

Abstract:

In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen, as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify the 11-element feature vectors obtained from the sensors on the glove into one of the 27 classes: the ASL alphabets and a predefined gesture for space. Three types of features are used: bending, from six bend sensors; orientation in three dimensions, from accelerometers; and contact at vital points, from contact sensors. To gauge the performance of the presented design, a training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems, and machine learning techniques to build a low-cost wearable glove that is accurate, elegant, and portable.

Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove

Procedia PDF Downloads 304
2654 A Diagnostic Comparative Analysis of Simultaneous Localization and Mapping (SLAM) Models for Indoor and Outdoor Route Planning and Obstacle Avoidance

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In the robotics literature, simultaneous localization and mapping (SLAM) is commonly framed as an a priori-a posteriori problem. The autonomous vehicle needs a neutral map to spontaneously track its local position, i.e., "localization", while at the same time a precise estimate of the environment state is required for effective route planning and obstacle avoidance. On the other hand, environmental noise factors can significantly intensify the inherent uncertainties in using odometry information and measurements obtained from the robot's exteroceptive sensors, which in turn directly affect the overall performance of the corresponding SLAM. Therefore, the current work is primarily dedicated to providing a diagnostic analysis of five SLAM algorithms: FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, and DP-SLAM. A simulated SLAM environment consisting of two sets of landmark locations and robot waypoints was set up, based on modified EKF and UKF in MATLAB, using two separate maps for indoor and outdoor route planning subject to natural and artificial obstacles. The simulation results are expected to provide an unbiased platform to compare the estimation performance of the five SLAM models, as well as the reliability of each SLAM model for indoor and outdoor applications.
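The predict-update cycle at the core of the EKF-based models compared above can be illustrated in one dimension; the process and measurement noise values are made up for the example.

```python
def kalman_step(x, P, u, z, Q=0.1, R=0.5):
    """One 1-D predict (odometry u) + update (measurement z) cycle."""
    # Predict: move by u; uncertainty grows by process noise Q.
    x_pred, P_pred = x + u, P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                         # initial pose and variance
x, P = kalman_step(x, P, u=1.0, z=1.5)  # odometry says +1, sensor says 1.5
print(round(x, 3), round(P, 3))         # -> 1.344 0.344
```

Full EKF-SLAM extends this scalar case to a joint state of pose plus landmark positions, which is precisely where the compared algorithms differ in how they factor or sample that state.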

Keywords: route planning, obstacle, estimation performance, FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, DP-SLAM

Procedia PDF Downloads 447