Search results for: graph attention neural network
6546 Study of ANFIS and ARIMA Model for Weather Forecasting
Authors: Bandreddy Anand Babu, Srinivasa Rao Mandadi, C. Pradeep Reddy, N. Ramesh Babu
Abstract:
In this paper, we briefly present a comparative investigation of Auto-Regressive Integrated Moving Average (ARIMA) and Adaptive Network-Based Fuzzy Inference System (ANFIS) models for weather forecasting. The weather data are taken from the University of Waterloo and comprise relative humidity, ambient air temperature, barometric pressure, and wind direction. The performances of the ARIMA and ANFIS models are analyzed and compared using the sum of average errors. The ANFIS modeling is carried out with the Fuzzy Logic Toolbox in MATLAB, while the ARIMA modeling is produced using the XLSTAT software.
Keywords: ARIMA, ANFIS, fuzzy inference toolbox, weather forecasting, MATLAB
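A minimal sketch of the ARIMA side of such a comparison (illustrative only, not the authors' code; the synthetic temperature series and the (p, d, q) order are assumptions):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a daily ambient-temperature series.
rng = np.random.default_rng(0)
t = np.arange(365)
series = pd.Series(15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.5, t.size))

train, test = series[:-30], series[-30:]
model = ARIMA(train, order=(2, 1, 2)).fit()     # (p, d, q) chosen for illustration
forecast = model.forecast(steps=len(test))

errors = np.abs(np.asarray(forecast) - np.asarray(test))
print(f"Average forecast error: {errors.mean():.2f}")   # the comparison metric
```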
Procedia PDF Downloads 419
6545 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy
Authors: Kemal Efe Eseller, Göktuğ Yazici
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive properties for the material to be tested. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material past a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, the spectrum profiles of medicine samples were obtained via LIBS. The datasets include two different concentrations for each of two paracetamol-based medicines, namely Aferin and Parafon. The spectrum data were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits: 70% training with 30% test, and 80% training with 20% test. Cross-validation was used to protect the models against overfitting, since the sample count is small. The machine learning results on the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms (Decision Trees, Discriminant Analysis, Naïve Bayes, Support Vector Machines (SVM), k-Nearest Neighbor (k-NN), Ensemble Learning, and Neural Networks) have been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing
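A hedged sketch of one such pipeline: PCA-compressed spectra fed to one of the named classifiers (an SVM here) with cross-validation on a 70/30 split. The random matrices stand in for the LIBS spectra and the four medicine/concentration labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2048))     # placeholder spectra (samples x wavelengths)
y = rng.integers(0, 4, size=40)     # 2 medicines x 2 concentrations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
scores = cross_val_score(clf, X_tr, y_tr, cv=5)   # guards against overfitting
clf.fit(X_tr, y_tr)
print(f"CV accuracy: {scores.mean():.2f}, test accuracy: {clf.score(X_te, y_te):.2f}")
```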
Procedia PDF Downloads 87
6544 Analysis of Transformer Reactive Power Fluctuations during Adverse Space Weather
Authors: Patience Muchini, Electdom Matandiroya, Emmanuel Mashonjowa
Abstract:
A ground-end manifestation of space weather phenomena is geomagnetically induced currents (GICs). During significant geomagnetic storms, GICs flow along the electric power transmission cables connecting transformers and between the grounding points of power transformers. GICs have been studied in other regions and have been noted to affect power grid networks. In Zimbabwe, grid failures have been experienced, but it has yet to be proven whether these failures were due to GICs. The purpose of this paper is to characterize geomagnetically induced currents within a power grid network. The paper analyses collected geomagnetic data, including the Kp index, Dst index, and G-scale from geomagnetic storms, together with power grid data, including reactive power, relay tripping, and alarms from high-voltage substations, and then correlates the two. The analysis was first carried out theoretically, by studying geomagnetic parameters, and then experimentally, with MATLAB used as the basic software to correlate the data. The latitudes of the substations were also scrutinized to assess any locational effect, since low-latitude areas such as most parts of Zimbabwe experience less severe geomagnetic variations. Based on theoretical and graphical analysis, a slight relationship between power system failures and GICs has been established. Further analyses can be done by installing measuring instruments to record any currents in the grounding of high-voltage transformers when geomagnetic storms occur. Mitigation measures can then be developed to minimize the susceptibility of the power network to GICs.
Keywords: adverse space weather, Dst index, geomagnetically induced currents, Kp index, reactive power
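A hedged sketch of the correlation step (synthetic hourly frames stand in for the Kp/Dst records and the substation reactive-power logs; column names are assumptions):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2023-01-01", periods=240, freq="h")
rng = np.random.default_rng(1)
storm = np.clip(rng.normal(0, 1, idx.size).cumsum(), -5, 5)   # storm proxy

data = pd.DataFrame({
    "kp": 3 + 0.5 * storm + rng.normal(0, 0.3, idx.size),
    "dst": -20 * storm + rng.normal(0, 5, idx.size),
    "reactive_power_mvar": 12 + 0.8 * storm + rng.normal(0, 0.5, idx.size),
}, index=idx)

# A slight GIC relationship shows up as non-zero off-diagonal coefficients.
print(data.corr())
```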
Procedia PDF Downloads 114
6543 A Double Differential Chaos Shift Keying Scheme for Ultra-Wideband Chaotic Communication Technology Applied in Low-Rate Wireless Personal Area Network
Authors: Ghobad Gorji, Hasan Golabi
Abstract:
The goal of this paper is to describe the design of an ultra-wideband (UWB) system optimized for low-rate wireless personal area network applications. To this aim, we propose a system based on direct chaotic communication (DCC) technology, in which a 2-GHz-wide chaotic signal is generated directly in the lower band of the UWB spectrum, i.e., 3.1–5.1 GHz. For this system, two simple modulation schemes, chaotic on-off keying (COOK) and differential chaos shift keying (DCSK), have been studied previously and their performance evaluated. We propose a modulation scheme, Double DCSK, to improve the performance of UWB DCC. The characteristics of these systems are compared using Monte Carlo simulations over the Additive White Gaussian Noise (AWGN) and IEEE 802.15.4a standard channel models.
Keywords: UWB, DCC, IEEE 802.15.4a, COOK, DCSK
Procedia PDF Downloads 74
6542 Attachment and Memories: Activating Attachment in College Students through Narrative-Based Methods
Authors: Catherine Wright, Kate Luedke
Abstract:
This paper examines whether individuals exposed to narratives describing secure and insecure-avoidant attachment styles experience temporary changes in their attachment style compared to individuals exposed to neutral narratives. The Attachment Style Questionnaire (ASQ) developed by Feeney, Noller, and Hanrahan in 1994 was used to assess attachment style. Participants completed a truncated version of the ASQ before reading the narratives assigned to their groups and the full ASQ afterwards. Using a one-way independent-groups ANOVA, the researchers found that the group that read the insecure-avoidant narrative experienced a statistically significant decrease in secure attachment, as did the group that read the secure narrative; the control group, however, experienced a statistically significant increase in secure attachment. Based on these findings, the researchers concluded that narratives may have the ability to call attention to the parental shortcomings that individuals have experienced: reminding individuals of positive experiences they were unable to share with their parental figures, and of the negative experiences they did have with them.
Keywords: attachment, insecure-avoidant, memory, secure
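A minimal sketch of the analysis: a one-way independent-groups ANOVA on post-reading change in secure-attachment scores. The arrays are hypothetical placeholders, not the study's data.

```python
from scipy import stats

secure_group = [-1.2, -0.8, -1.5, -0.9]     # change scores, secure narrative
avoidant_group = [-1.0, -1.4, -0.7, -1.1]   # insecure-avoidant narrative
control_group = [0.6, 0.9, 0.4, 0.8]        # neutral narrative

f_stat, p_value = stats.f_oneway(secure_group, avoidant_group, control_group)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```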
Procedia PDF Downloads 402
6541 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model
Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok
Abstract:
The brain’s functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting state connectivity patterns hence provides opportunities for identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections is useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size: an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module for other mathematical models.
Keywords: functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity
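A sketch of the hub idea under stated assumptions: every ROI time series is regressed on a few hub series, and the ROI covariance is rebuilt from the fitted loadings plus a diagonal idiosyncratic term (a factor model in the financial sense the abstract alludes to). All sizes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_roi, n_hub = 200, 90, 5
hubs = rng.normal(size=(T, n_hub))                       # hub time series
rois = hubs @ rng.normal(size=(n_hub, n_roi)) + 0.5 * rng.normal(size=(T, n_roi))

beta, *_ = np.linalg.lstsq(hubs, rois, rcond=None)       # multivariate regression
resid = rois - hubs @ beta
model_cov = beta.T @ np.cov(hubs, rowvar=False) @ beta + np.diag(resid.var(axis=0))

# Agreement between the analytical and empirical covariance structures.
match = np.corrcoef(model_cov.ravel(), np.cov(rois, rowvar=False).ravel())[0, 1]
print(f"model vs empirical covariance agreement: {match:.2f}")
```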
Procedia PDF Downloads 151
6540 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment whose nucleotides each comprise a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes, a task that must be supported with reliable methods and algorithms. To analyze species regions or entire genomes, sequence-similarity methods become necessary. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics such as progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods remain computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this method avoids the complex problem of form and structure across different classes of organisms. Classification performance on empirical data is compared with that of other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of DNA barcodes, Fourier transform, and power spectrum signal processing. The second, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third, classification of DNA barcodes, is realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi-Library Wavelet Neural Networks (MLWNN)
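A sketch of the transformation phase only (EIIP coding followed by a Fourier power spectrum); the EIIP values are the standard published ones, and the wavelet-network and classification phases are omitted.

```python
import numpy as np

EIIP = {"A": 0.1260, "T": 0.1335, "C": 0.1340, "G": 0.0806}

def power_spectrum(seq: str) -> np.ndarray:
    signal = np.array([EIIP[base] for base in seq.upper()])
    spectrum = np.fft.fft(signal - signal.mean())   # remove DC component
    return np.abs(spectrum) ** 2                    # power per frequency bin

print(power_spectrum("ATGCGTACGTTAGC")[:5])
```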
Procedia PDF Downloads 318
6539 Why is the Recurrence Rate of Residual or Recurrent Disease Following Endoscopic Mucosal Resection (EMR) of Oesophageal Dysplasias and T1 Tumours Higher in the Greater Midlands Cancer Network?
Authors: Harshadkumar Rajgor, Jeff Butterworth
Abstract:
Background: Barrett's oesophagus increases the risk of developing oesophageal adenocarcinoma. Over the last 40 years, there has been a six-fold increase in the incidence of oesophageal adenocarcinoma in the western world, and incidence rates are increasing at a greater rate than cancers of the colon, breast, and lung. Endoscopic mucosal resection (EMR) is a relatively new technique being used by two centres in the Greater Midlands Cancer Network. EMR can be used for curative or staging purposes for high-grade dysplasias and T1 tumours of the oesophagus, and is also suitable for those deemed high-risk for oesophagectomy. EMR has a recurrence rate of 21% according to the Wiesbaden data. Method: A retrospective study of prospectively collected data was carried out involving 24 patients who had EMR for curative or staging purposes. Complications of residual or recurrent disease following EMR that required further treatment were investigated. Results: In 54% of cases, residual or recurrent disease was suspected. 96% of patients were given clear and concise information regarding their diagnosis of high-grade dysplasia or T1 tumours. All 24 patients consulted the same specialist healthcare team. Conclusion: EMR is a safe and effective treatment for patients with high-grade dysplasia and T1N0 tumours. In 54% of cases, residual or recurrent disease was suspected. Initially, only single resections were undertaken; multiple resections are now being carried out to reduce the risk of recurrence. Complications from EMR remain low in this series and consisted of a single episode of post-procedural bleeding.
Keywords: endoscopic mucosal resection, oesophageal dysplasia, T1 tumours, cancer network
Procedia PDF Downloads 316
6538 Neural Correlates of Diminished Humor Comprehension in Schizophrenia: A Functional Magnetic Resonance Imaging Study
Authors: Przemysław Adamczyk, Mirosław Wyczesany, Aleksandra Domagalik, Artur Daren, Kamil Cepuch, Piotr Błądziński, Tadeusz Marek, Andrzej Cechnicki
Abstract:
The present study aimed to evaluate the neural correlates of the humor comprehension impairments observed in schizophrenia. To investigate the nature of this deficit and to localize the cortical areas involved in humor processing, we used functional magnetic resonance imaging (fMRI). The study included chronic schizophrenia outpatients (SCH; n=20) and sex-, age- and education-matched healthy controls (n=20). The task consisted of 60 stories (setups), of which 20 had funny, 20 nonsensical, and 20 neutral (not funny) punchlines. After the punchlines were presented, the participants were asked to indicate whether the story was comprehensible (yes/no) and how funny it was (1-9 Likert-type scale). fMRI was performed on a 3T scanner (Magnetom Skyra, Siemens) using a 32-channel head coil. Three contrasts, corresponding to the three stages of humor processing, were analyzed in both groups: nonsensical vs neutral stories (incongruity detection); funny vs nonsensical (incongruity resolution); funny vs neutral (elaboration). Additionally, parametric modulation analysis was performed using each subjective rating separately in order to further differentiate the areas involved in incongruity resolution processing. Statistical analysis of the behavioral data used the Mann-Whitney U test with Bonferroni correction; fMRI data analysis used whole-brain voxel-wise t-tests with a 10-voxel extent threshold, either with Family Wise Error (FWE) correction at alpha = 0.05 or uncorrected at alpha = 0.001. Between-group comparisons revealed that the SCH subjects had attenuated activation in: the right superior temporal gyrus for irresolvable incongruity processing of nonsensical puns (nonsensical > neutral); the left medial frontal gyrus for incongruity resolution processing of funny puns (funny > nonsensical); and the interhemispheric ACC for elaboration of funny puns (funny > neutral). Additionally, the SCH group showed weaker activation during funniness ratings in the left ventro-medial prefrontal cortex, the medial frontal gyrus, the angular and supramarginal gyri, and the right temporal pole. In comprehension ratings, the SCH group showed suppressed activity in the left superior and medial frontal gyri. Interestingly, these differences were accompanied by longer response times for both types of rating in the SCH group, a lower level of comprehension for funny punchlines, and higher funniness ratings for absurd punchlines. The presented results indicate that, in comparison to healthy controls, schizophrenia is characterized by difficulties in humor processing revealed by longer reaction times, impaired understanding of jokes, and finding nonsensical punchlines funnier. This is accompanied by attenuated brain activations, especially in the left fronto-parietal and right temporal cortices; the neural correlates reveal diminished neural activity in the schizophrenia brain compared with the control group. Humor processing seems to be impaired at all three stages of the comprehension process, from incongruity detection through its resolution to elaboration. The study was supported by the National Science Centre, Poland (grant no 2014/13/B/HS6/03091).
Keywords: communication skills, functional magnetic resonance imaging, humor, schizophrenia
Procedia PDF Downloads 213
6537 Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals
Authors: Linghui Meng, James Atlas, Deborah Munro
Abstract:
There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals continues to be challenging. This study considers the implementation of machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality, and adaptivity. Using logistic regression, a machine learning classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two time-series feature extraction algorithms: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted using varying numbers of EMG channels, from one to eight, to determine the impact of channel quantity on classification accuracy. The results suggest that although both algorithms can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information, in terms of both accuracy and computational efficiency. In addition, although more channels of EMG signals provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower numbers of channels. The findings also underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems.
Keywords: EMG, machine learning, prosthetic control, electromyographic prosthetics, hand gesture classification, CNN, convolutional neural networks, TSFE, time series feature extraction, channel count, logistic regression, Ninapro, classifiers
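A hedged sketch of the classification step: logistic regression over per-channel features for one to eight channels. A simple RMS feature stands in for the TSFE/CNN front ends, and the random arrays stand in for Ninapro windows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rms_features(window: np.ndarray) -> np.ndarray:
    # window: (n_samples, n_channels) raw EMG; one RMS value per channel
    return np.sqrt((window ** 2).mean(axis=0))

rng = np.random.default_rng(0)
windows = rng.normal(size=(120, 200, 8))    # 120 trials, 200 samples, 8 channels
X = np.array([rms_features(w) for w in windows])
y = rng.integers(0, 2, size=120)            # two gesture classes

for n_ch in (1, 4, 8):                      # effect of channel count
    acc = cross_val_score(LogisticRegression(), X[:, :n_ch], y, cv=5).mean()
    print(f"{n_ch} channel(s): CV accuracy {acc:.2f}")
```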
Procedia PDF Downloads 31
6536 d-Block Metal Nanoparticles Confined in Triphenylphosphine Oxide Functionalized Core-Crosslinked Micelles for the Application in Biphasic Hydrogenation
Authors: C. Joseph Abou-Fayssal, K. Philippot, R. Poli, E. Manoury, A. Riisager
Abstract:
The use of soluble polymer-supported metal nanoparticles (MNPs) has received significant attention for the ease of catalyst recovery and recycling. Of particular interest are MNPs supported on polymers that are either soluble or form stable colloidal dispersions in water, as this allows combining the advantages of the aqueous biphasic protocol with the catalytic performance of MNPs. The objective is to achieve good confinement of the catalyst in the nanoreactor cores and, thus, better catalyst recovery, in order to overcome the MNP extraction witnessed previously. Inspired by previous results, we are interested in the design of polymeric nanoreactors functionalized with ligands able to firmly anchor metallic nanoparticles, in order to control the activity and selectivity of the developed nanocatalysts. The nanoreactors are core-crosslinked micelles (CCM) synthesized by reversible addition-fragmentation chain transfer (RAFT) polymerization. Varying the nature of the core-linked functionalities allows us to obtain differently stabilized metal nanoparticles and thus compare their performance in the catalyzed aqueous biphasic hydrogenation of model substrates. Particular attention is given to catalyst recyclability.
Keywords: biphasic catalysis, metal nanoparticles, polymeric nanoreactors, catalyst recovery, RAFT polymerization
Procedia PDF Downloads 100
6535 Fabrication and Characterization Analysis of La-Sr-Co-Fe-O Perovskite Hollow Fiber Catalyst for Oxygen Removal in Landfill Gas
Authors: Seong Woon Lee, Soo Min Lim, Sung Sik Jeong, Jung Hoon Park
Abstract:
The atmospheric concentration of greenhouse gases (GHGs) is increasing continuously as a result of fossil fuel combustion and industrial development. In response to this trend, much research has been conducted on GHG reduction. Landfill gas (LFG) is one of the largest sources of GHG emissions, containing methane (CH₄) as a major constituent, and can also be considered a renewable energy source. In order to use LFG by connecting it to the city pipe network, a process for removing impurities is required. In particular, oxygen must be removed because it can cause corrosion of pipes and engines. In this study, methane oxidation was used to eliminate oxygen from LFG, and a perovskite-type ceramic catalyst of La-Sr-Co-Fe-O composition was selected. Hollow fiber catalysts (HFCs) have attracted attention as a new alternative concept because they have high specific surface area and mechanical strength compared to other types of catalysts. The HFC was prepared by a phase-inversion/sintering technique using commercial La-Sr-Co-Fe-O powder. In order to measure the catalyst's activity, simulated LFG was used as the feed gas, and the complete oxidation reaction of methane was confirmed. The pore structure of the HFC was confirmed by SEM images, and its single-phase perovskite structure was analyzed by XRD. In addition, TPR analysis was performed to verify the oxygen adsorption mechanism of the HFC. Acknowledgement: The project is supported by the ‘Global Top Environment R&D Program’ in the ‘R&D Center for reduction of Non-CO₂ Greenhouse gases’ (Development and demonstration of oxygen removal technology for landfill gas) funded by the Korea Ministry of Environment (ME).
Keywords: complete oxidation, greenhouse gas, hollow fiber catalyst, landfill gas, oxygen removal, perovskite catalyst
Procedia PDF Downloads 117
6534 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated with nucleos(t)ide analogs (NAs), for example, which inhibit HBV replication. However, NAs have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NAs need to be taken life-long and are not available to all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires the seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA-typing) rather than only one. However, the values of these variables are collected independently and are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently across these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling human immune systems. The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA-typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This technique enables us to harmonize and standardize heterogeneous datasets within the defined modeling of the data integration system, which will be evaluated in a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, an analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate the factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, a multidisciplinary team composed of computer scientists, infection biologists, and immunologists.
Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
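A tiny illustration of the KG idea using rdflib, where patient facts become graph triples that can later be queried alongside ontology terms; the IRIs and property names are invented placeholders, not the project's actual schema.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/hbv/")   # placeholder namespace
g = Graph()
g.add((EX.patient42, RDF.type, EX.ChronicHBVPatient))
g.add((EX.patient42, EX.hbsAgLevel, Literal(250.0)))         # IU/mL, placeholder
g.add((EX.patient42, EX.receivedTreatment, EX.NucleosideAnalog))

for s, p, o in g.triples((EX.patient42, None, None)):        # all facts on one patient
    print(s, p, o)
```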
Procedia PDF Downloads 108
6533 Semantic Platform for Adaptive and Collaborative e-Learning
Authors: Massra M. Sabeima, Myriam lamolle, Mohamedade Farouk Nanne
Abstract:
Adapting the learning resources of an e-learning system to the characteristics of the learners is an important aspect to consider when designing an adaptive e-learning system. However, this adaptation is not a simple process; it requires the extraction, analysis, and modeling of user information, which implies a good representation of the user's profile, the backbone of the adaptation process. Moreover, during the e-learning process, collaboration with similar users (same geographic province or knowledge context) is important: productive collaboration motivates users to continue rather than abandon the course and increases the assimilation of learning objects. The contribution of this work is the following: we propose an adaptive e-learning semantic platform that recommends learning resources to learners, using an ontology to model the user profile and the course content, together with an implementation of a multi-agent system able to progressively generate the learning graph for each user during the learning process (taking into account the user's progress and the changes that occur) and to synchronize the users who collaborate on a learning object.
Keywords: adaptive learning, collaboration, multi-agent, ontology
Procedia PDF Downloads 176
6532 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks
Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer
Abstract:
New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period; once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimize the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machines (SVM) showed limited efficacy in interpreting the chemical footprints, due to large non-linear relationships between predictor and predictand in a large sample set, likely reflecting honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing the hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics
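A minimal PyTorch sketch of a 1D-CNN regressing a quality indicator from a single-pixel spectrum; the layer sizes and band count are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    def __init__(self, n_bands: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)        # e.g., a quality/MGO-related score

    def forward(self, x):                   # x: (batch, 1, n_bands)
        return self.head(self.features(x).squeeze(-1))

model = Spectral1DCNN(n_bands=288)
print(model(torch.randn(4, 1, 288)).shape)  # torch.Size([4, 1])
```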
Procedia PDF Downloads 139
6531 Optimum Parameter of a Viscous Damper for Seismic and Wind Vibration
Authors: Soltani Amir, Hu Jiaxin
Abstract:
Determination of the optimal parameters of a passive control device is the primary objective of this study. Expanding use of control devices in wind and earthquake hazard reduction has led to the development of various control systems. The advantage of non-linearity characteristics in a passive control device and the optimal control method using the LQR algorithm are explained in this study. Finally, this paper introduces a simple approach to determine the optimum parameters of a nonlinear viscous damper for vibration control of structures. A MATLAB program is used to produce the dynamic motion of the structure, considering the stiffness matrix of the SDOF frame and the non-linear damping effect. This study concludes that the proposed variable damping system performs better in system response control than a linear damping system. Also, according to the energy dissipation graph, the total energy loss is greater in the non-linear damping system than in the other systems.
Keywords: passive control system, damping devices, viscous dampers, control algorithm
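A sketch of the optimal-gain step for an SDOF frame, assuming illustrative mass, stiffness, and damping values: the continuous algebraic Riccati equation yields the LQR state-feedback gain used to command the damper force.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m, k, c = 1.0e4, 4.0e6, 2.0e4                   # illustrative SDOF parameters
A = np.array([[0.0, 1.0], [-k / m, -c / m]])    # states: displacement, velocity
B = np.array([[0.0], [1.0 / m]])                # control force input
Q = np.diag([1.0e6, 1.0])                       # penalize drift more than velocity
R = np.array([[1.0e-6]])                        # control effort weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P                  # optimal feedback: u = -K x
print("LQR gain:", K)
```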
Procedia PDF Downloads 470
6530 Human-Centric Sensor Networks for Comfort and Productivity in Offices: Integrating Environmental, Body Area Network, and Participatory Sensing
Authors: Chenlu Zhang, Wanni Zhang, Florian Schaule
Abstract:
The indoor environment in office buildings directly affects the comfort, productivity, health, and well-being of building occupants. Wireless environmental sensor networks have been deployed in many modern offices to monitor and control indoor environments. However, indoor environmental variables are not strong enough predictors of the comfort and productivity levels of every occupant, due to personal differences, both physiological and psychological. This study proposes human-centric sensor networks that integrate wireless environmental sensors, body area network sensors, and participatory sensing technologies to collect data from both the environment and the occupants and to support building operations. The sensor networks were tested in one small and one medium-sized office room with 22 participants for five months. Indoor environmental data (e.g., air temperature and relative humidity), physiological data (e.g., skin temperature and galvanic skin response), and psychological responses (e.g., comfort and self-reported productivity levels) were obtained from each participant and his/her workplace. The results show that: (1) participants have different physiological and psychological responses under the same environmental conditions; (2) physiological variables are more effective predictors of comfort and productivity levels than environmental variables. These results indicate that human-centric sensor networks can support human-centric building control and improve comfort and productivity in offices.
Keywords: body area network, comfort and productivity, human-centric sensors, internet of things, participatory sensing
Procedia PDF Downloads 139
6529 Indoor Localization Algorithm and Appropriate Implementation Using Wireless Sensor Networks
Authors: Adeniran K. Ademuwagun, Alastair Allen
Abstract:
The dependence of RSS on distance in an enclosed environment is an important consideration because it can influence the reliability of any localization algorithm founded on RSS. Several algorithms effectively reduce the variance of RSS to improve localization accuracy. Our proposed algorithm essentially avoids this pitfall and is consequently highly adaptable in the face of erratic radio signals. Using three anchors in close proximity to each other, we establish that RSS can be used as a reliable indicator for localization with an acceptable degree of accuracy. Inherent in this concept is the ability of each prospective anchor to validate (guarantee) the position or proximity of the other two anchors involved in the localization, and vice versa. This procedure ensures that the uncertainties of radio signals due to multipath effects in enclosed environments are minimized. A major driver of this idea is the implicit topological relationship among sensors due to raw radio signal strength. The algorithm is an area-based algorithm; however, it does not trade accuracy for precision (i.e., the size of the returned area).
Keywords: anchor nodes, centroid algorithm, communication graph, radio signal strength
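A hedged sketch of an RSS-weighted centroid estimate with three anchors, where stronger received signals pull the estimate toward that anchor; coordinates and RSS values are illustrative.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # anchor positions (m)
rss_dbm = np.array([-52.0, -61.0, -57.0])                  # measured RSS per anchor

weights = 10 ** (rss_dbm / 10.0)    # dBm to linear power: stronger = heavier
estimate = (weights[:, None] * anchors).sum(axis=0) / weights.sum()
print(f"estimated position: {estimate}")
```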
Procedia PDF Downloads 508
6528 Low-Noise Amplifier Design for Improvement of Communication Range for Wake-Up Receiver Based Wireless Sensor Network Application
Authors: Ilef Ketata, Mohamed Khalil Baazaoui, Robert Fromm, Ahmad Fakhfakh, Faouzi Derbel
Abstract:
The integration of wireless communication, e.g., in real- or quasi-real-time applications, is subject to many challenges, such as energy consumption, communication range, latency, quality of service, and reliability. To minimize latency without increasing energy consumption, wake-up receiver (WuRx) nodes have been introduced in recent works. Low-noise amplifiers (LNAs) are introduced to improve WuRx sensitivity but increase the supply current severely. Different WuRx approaches exist, with always-on, power-gated, or duty-cycled receiver designs. This paper presents a comparative study for improving the communication range and decreasing the energy consumption of wireless sensor nodes.
Keywords: wireless sensor network, wake-up receiver, duty-cycled, low-noise amplifier, envelope detector, range study
Procedia PDF Downloads 113
6527 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies
Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong
Abstract:
To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit Model (MNL) is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes, and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. Futian Street in Futian District, Shenzhen City, was selected for the investigation. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance, and trip frequency all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions are put forward, aimed at reducing public transportation and non-motor-vehicle travel times, and the policy effect is evaluated. Before the evaluation, the prediction performance of the MNL, SVM, and MLP models was assessed; after parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, having the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results show that after implementation of the policy, the share of public transportation in Plan 1 and Plan 2 increased by 14.04% and 9.86%, respectively, while the share of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased markedly, while the proportion of public transport trips increased. The measures can therefore be considered to have a positive effect on promoting green trips and improving the satisfaction of urban residents, and can provide a reference for relevant departments when formulating transportation policies.
Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation
Procedia PDF Downloads 138
6526 Clay Hydrogel Nanocomposite for Controlled Small Molecule Release
Authors: Xiaolin Li, Terence Turney, John Forsythe, Bryce Feltis, Paul Wright, Vinh Truong, Will Gates
Abstract:
Clay-hydrogel nanocomposites have attracted great attention recently, mainly because of their enhanced mechanical properties and ease of fabrication. Moreover, the unique platelet structure of clay nanoparticles enables the incorporation of bioactive molecules, such as proteins or drugs, through ion exchange, adsorption, or intercalation. This study seeks to improve the mechanical and rheological properties of a novel hydrogel system, copolymerized from a tetrapodal polyethylene glycol (PEG) thiol and a linear triblock PEG-PPG-PEG (PPG: polypropylene glycol) α,ω-bispropynoate polymer, with the simultaneous incorporation of various amounts of Na-saturated montmorillonite clay (MMT) platelets (av. lateral dimension = 200 nm) to form a bioactive three-dimensional network. Although the parent hydrogel has controlled swelling ability and its PEG groups have good affinity for the clay platelets, it suffers from poor mechanical stability and is currently unsuitable for potential applications. Nanocomposite hydrogels containing 4 wt% MMT showed a twelve-fold enhancement in compressive strength, reaching 0.75 MPa, and a three-fold acceleration in gelation time compared with the parent hydrogel. Interestingly, clay nanoplatelet incorporation into the hydrogel slowed its rate of dehydration in air. Preliminary results showed that protein binding by the MMT varied with the nature of the protein: horseradish peroxidase (HRP) was more strongly bound than bovine serum albumin. The bound HRP was no longer active, presumably as a result of extensive structural refolding. Further work is being undertaken to assess protein-binding behaviour within the nanocomposite hydrogel for potential diabetic wound-healing applications.
Keywords: hydrogel, nanocomposite, small molecule, wound healing
Procedia PDF Downloads 269
6525 Design of Compact Dual-Band Planar Antenna for WLAN Systems
Authors: Anil Kumar Pandey
Abstract:
A compact planar monopole antenna with dual-band operation suitable for wireless local area network (WLAN) applications is presented in this paper. The antenna occupies an overall area of 18 × 12 mm². It is fed by a coplanar waveguide (CPW) transmission line and combines two folded strips, which radiate at 2.4 and 5.2 GHz. In the proposed antenna, optimally selecting the antenna dimensions produces dual-band resonant modes with much wider impedance matching at the higher band. Prototypes of the optimized design have been simulated using an EM solver. The simulated results show good dual-band operation, with -10 dB impedance bandwidths of 50 MHz and 2400 MHz at the 2.4 and 5.2 GHz bands, respectively, covering the 2.4/5.2/5.8 GHz WLAN operating bands. Good antenna performance, such as radiation patterns and antenna gains over the operating bands, has also been observed. The antenna, with a compact size of 18 × 12 × 1.6 mm³, is designed on an FR4 substrate with a dielectric constant of 4.4.
Keywords: CPW antenna, dual-band, electromagnetic simulation, wireless local area network (WLAN)
Procedia PDF Downloads 209
6524 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time
Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla
Abstract:
Society demands more reliable manufacturing processes capable of producing high-quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role: it is suited to detecting, isolating, and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered faultless. In addition, the main FTC techniques are identified and classified, based on their characteristics, into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages, one focused on detection, isolation, and identification of the fault source and the other in charge of re-designing the control algorithm through two approaches: fault accommodation and control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system responds when a fault arises under conditions similar to those a machine experiences in a factory. One AFTC approach has been chosen as the methodology the system follows in the fault recovery process. In a first stage, the fault is detected, isolated, and identified by means of a neural network; in a second stage, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
Keywords: fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time
Procedia PDF Downloads 177
6523 Radar Charts Analysis to Compare the Level of Innovation in Mexico with Most Innovative Countries in Triple Helix Schema Economic and Human Factor Dimension
Authors: M. Peña Aguilar Juan, Valencia Luis, Pastrana Alberto, Nava Estefany, A. Martinez, M. Vivanco, A. Castañeda
Abstract:
This paper compares the innovation of Mexico, from an economic and human perspective, with the seven most innovative countries according to the Global Innovation Index 2013, published by the World Intellectual Property Organization (WIPO). The analysis covers nine dimensions: expenditure on R&D, intellectual property, appropriate environment to conduct business, economic stability, triple helix for R&D, ICT infrastructure, education, human resources, and quality of life. Each dimension is represented by an indicator, which is then used to construct a radial graph that compares the innovative capacity of the countries analysed. As a result, a new indicator of innovation, called the Area of Innovation, is proposed. Observations are made from the results, and finally, as a conclusion, the items or dimensions in which Mexico lags in innovation are identified.
Keywords: dimension, measure, innovation level, economy, radar chart
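A sketch of how the proposed Area of Innovation indicator could be computed: the nine normalized dimension scores are placed on a radar chart and the enclosed polygon area is taken via the shoelace formula. The scores below are made-up placeholders.

```python
import numpy as np

def area_of_innovation(scores) -> float:
    scores = np.asarray(scores, dtype=float)
    angles = np.linspace(0, 2 * np.pi, len(scores), endpoint=False)
    x, y = scores * np.cos(angles), scores * np.sin(angles)
    # Shoelace formula over the closed radar polygon.
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

mexico = [0.4, 0.5, 0.6, 0.7, 0.5, 0.4, 0.6, 0.5, 0.6]   # nine dimensions
print(f"Area of Innovation: {area_of_innovation(mexico):.3f}")
```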
Procedia PDF Downloads 472
6522 Association of Sensory Processing and Cognitive Deficits in Children with Autism Spectrum Disorders – Pioneer Study in Saudi Arabia
Authors: Rana Zeina
Abstract:
Objective: The association between sensory problems and cognitive abilities has been studied in individuals with Autism Spectrum Disorders (ASDs). In this study, we used a neuropsychological test battery to evaluate memory and attention in children with ASD and sensory problems compared to children with ASD without sensory problems. Methods: Four visual tests of the Cambridge Neuropsychological Test Automated Battery (CANTAB), namely Big/Little Circle (BLC), Simple Reaction Time (SRT), Intra/Extra Dimensional Set Shift (IED), and Spatial Recognition Memory (SRM), were administered to 14 children with ASD and sensory problems and 13 children with ASD without sensory problems, aged 3 to 12, with IQs above 70. Results: The individuals with sensory problems performed worse than the group without sensory problems on the comprehension, learning, reversal, and simple reaction time tasks; no significant difference between the two groups was recorded on the visual memory and visual comprehension tasks. Conclusion: The findings of this study suggest that children with ASD and sensory problems face deficits in learning, comprehension, reversal, and speed of response to stimuli.
Keywords: visual memory, attention, autism spectrum disorders, CANTAB eclipse
Procedia PDF Downloads 451
6521 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations, and is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electrical), indoor environment, inhabitants' comfort, occupancy, occupant behavior and energy uses, and local weather. Building energy simulations are performed using physics-based building energy modeling software (Pleiades), with the buildings' features implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end use. These features are then compared with the collected post-occupancy data, and the energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap for an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, building envelope properties, and also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network for its replicability at a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
Procedia PDF Downloads 160
6520 Relay Mining: Verifiable Multi-Tenant Distributed Rate Limiting
Authors: Daniel Olshansky, Ramiro Rodríguez Colmeiro
Abstract:
Relay Mining presents a scalable solution employing probabilistic mechanisms and crypto-economic incentives to estimate RPC (remote procedure call) volume usage, facilitating decentralized multi-tenant rate limiting. Network traffic from individual applications can be concurrently serviced by multiple RPC service providers, with costs, rewards, and rate limiting governed by a native cryptocurrency on a distributed ledger. Building upon established research on token bucket algorithms and distributed rate-limiting penalty models, our approach harnesses a feedback-loop control mechanism to adjust the difficulty of mining relay rewards, dynamically scaling with network usage growth. By leveraging crypto-economic incentives, we reduce coordination overhead costs and introduce a mechanism for providing RPC services that are both geopolitically and geographically distributed.
Keywords: remote procedure call, crypto-economic, commit-reveal, decentralization, scalability, blockchain, rate limiting, token bucket
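For reference, a minimal sketch of the token bucket primitive the abstract builds on (the classic local algorithm, not Relay Mining's probabilistic on-chain mechanism): each application holds a bucket refilled at its allowed relay rate.

```python
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity        # tokens/s, burst size
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                   # relay rejected: over the rate limit

bucket = TokenBucket(rate=10.0, capacity=20.0)   # 10 relays/s, burst of 20
print(sum(bucket.allow() for _ in range(25)))    # roughly the burst size passes
```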
Procedia PDF Downloads 54
6519 Contrastive Learning for Unsupervised Object Segmentation in Sequential Images
Authors: Tian Zhang
Abstract:
Unsupervised object segmentation aims at segmenting the objects in sequential images and obtaining the mask of each object without any manual intervention. It remains a challenging task due to the lack of prior knowledge about the objects. Previous methods often require manually specifying the action of each object, which is often difficult to obtain. Instead, this paper needs no action information and automatically learns the actions of objects and the relations among them from the structured environment. To obtain the object segmentation of sequential images, the relationships between objects and images are extracted to infer the actions and interactions of objects based on a multi-head attention mechanism. Three types of object relationships in the segmentation task are proposed: the relationship between objects in the same frame, the relationship between objects in two frames, and the relationship between objects and historical information. Based on these relationships, the proposed model (1) is effective in multiple-object segmentation tasks, (2) needs only images as input, and (3) produces better segmentation results as more relationships are considered. Experimental results on multiple datasets show that the method achieves state-of-the-art performance. Quantitative and qualitative analyses of the results are conducted, and the proposed method can easily be extended to other similar applications.
Keywords: unsupervised object segmentation, attention mechanism, contrastive learning, structured environment
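A minimal sketch of the relational backbone: multi-head attention over per-object feature vectors, letting every object attend to the others in the same frame; all dimensions are illustrative.

```python
import torch
import torch.nn as nn

n_objects, d_model = 6, 64
objects = torch.randn(1, n_objects, d_model)    # (batch, objects, features)

attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
updated, weights = attn(objects, objects, objects)   # self-attention in one frame
print(updated.shape, weights.shape)   # (1, 6, 64) features, (1, 6, 6) relations
```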
Procedia PDF Downloads 109
6518 Sea of Light: A Game-Based Approach for Evidence-Centered Assessment of Collaborative Problem Solving
Authors: Svenja Pieritz, Jakab Pilaszanovich
Abstract:
Collaborative Problem Solving (CPS) is recognized as one of the most important skills of the 21st century, with a potential impact on education, job selection, and collaborative systems design. CPS has therefore been adopted in several standardized tests, including the Programme for International Student Assessment (PISA) in 2015. A significant challenge in evaluating CPS is the underlying interplay of cognitive and social skills, which requires a more holistic assessment. However, the majority of existing tests use questionnaire-based assessment, which oversimplifies this interplay and undermines ecological validity. Two major difficulties were identified: first, the creation of a controllable, real-time environment allowing natural behaviors and communication between at least two people; second, the development of an appropriate method to collect and synthesize both cognitive and social metrics of collaboration. This paper proposes a more holistic and automated approach to the assessment of CPS. To address these two difficulties, a multiplayer problem-solving game called Sea of Light was developed: an environment allowing students to deploy a variety of measurable collaborative strategies. This controlled environment enables researchers to monitor behavior through the analysis of game actions and chat. The corresponding statistical model is a combined approach of Natural Language Processing (NLP) and Bayesian network analysis. Social exchanges via the in-game chat are analyzed through NLP and fed into the Bayesian network along with other game actions. The Bayesian network synthesizes this evidence to track and update different subdimensions of CPS. Major findings focus on the correlations between the evidence collected through in-game actions, the participants' chat features, and the CPS self-evaluation metrics. These results give an indication of which game mechanics best describe CPS evaluation. Overall, Sea of Light gives test administrators control over different problem-solving scenarios and difficulties while keeping the student engaged. It enables a more complete assessment based on complex socio-cognitive information on actions and communication. This tool permits further investigation of the effects of group constellation and personality in collaborative problem solving.
Keywords: bayesian network, collaborative problem solving, game-based assessment, natural language processing
Procedia PDF Downloads 132
6517 An Evaluation of the Lae City Road Network Improvement Project
Authors: Murray Matarab Konzang
Abstract:
The Lae Port Development Project, the Four-Lane Highway, and other developments in the extraction industry with direct road links to Lae City are predicted to have a significant impact on its road network system. This paper evaluates the Lae roads improvement program, with forecasts on planning, economics, and the installation of bypasses to ease congestion, provide an effective and convenient transport service for bulk goods, and reduce travel time. A land-use transportation study and plans for a local-area traffic management scheme will be considered. City roads face increased traffic volumes together with inadequate road pavement widths, poor transport plans, and facilities insufficient to meet this transportation demand. Lae also has a drainage system that might not withstand a 100-year flood. Proper evaluation, planning, design, and intersection analysis are needed to assess the road network system and thus recommend improvements and estimate future growth. Repetitive, cyclic loading by heavy commercial vehicles with different axle configurations acts on the flexible pavement, which weakens and tears the pavement surface so that small cracks occur; rainwater seeps through, and over time potholes form. Effective planning starts from experimental research and appropriate design standards to enable firm embankments, proper drains, and quality pavement material. This paper addresses traffic problems as well as road pavements, intersection capacities, and pedestrian flow during peak hours. The outcome of this research will be to identify heavily trafficked road sections and recommend treatments to reduce traffic congestion, as well as road classification and proposals for bypass routes and improvement. The first part of this study describes transport and traffic-related problems within the city; the second identifies the challenges imposed by these traffic and road-related problems; and the third recommends solutions after analyzing traffic data indicating the current capacities of intersections, with final recommended treatments for improvement and future growth.
Keywords: Lae, road network, highway, vehicle traffic, planning
Procedia PDF Downloads 357