Search results for: spatial transformer network

4821 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development

Authors: Sinisa J. Vukicevic

Abstract:

Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built: evidence-based scenario development, in which an urban growth model was a key support tool for framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model was developed using the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of the key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: ‘Strong Central City’, ‘Node City’, and ‘Corridor City’. Each scenario has a narrative story that expresses its high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set defined for each scenario, as well as through the policy measures. The model was calibrated to reproduce the known historical land use pattern, using 2006 and 2011 land use data. Validation was done independently, using data not used for calibration; the model was validated with 2016 data. In general, the modeling process contains three main phases: ‘from qualitative storyline to quantitative modelling’, ‘model development and model run’, and ‘from quantitative modelling to qualitative storyline’. The model also incorporates five spatial indicators: distance from residential to work, distance from residential to recreation, distance to the river valley, urban expansion, and habitat fragmentation. The major findings of this research can be viewed from two perspectives: the planning perspective and the technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios: we explored their similarities and differences and their quantitative indicators, including land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling unit), habitat connectivity, proximity to objects of interest, etc. From the technology perspective, the model showed one very important characteristic: flexibility. The direction of policy testing changed many times during the consultation process, and the model's flexibility in accommodating all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, but also as a communication tool during the consultation process.

Keywords: urban growth model, scenario development, spatial indicators, Metronamica

Procedia PDF Downloads 84
4820 Concept, Design and Implementation of Power System Component Simulator Based on Thyristor Controlled Transformer and Power Converter

Authors: B. Kędra, R. Małkowski

Abstract:

This paper presents information on the Power System Component Simulator, a device designed for the LINTE^2 laboratory owned by Gdansk University of Technology in Poland. We first provide introductory information on the Power System Component Simulator and its capabilities. Then, the concept of the unit is presented: requirements for the unit are described, and the proposed and implemented functions are listed. Implementation details are given. The hardware structure is presented and described. Information about the communication interface, the data maintenance and storage solution, and the Simulink Real-Time features used is presented. A list and description of all measurements is provided. The potential for laboratory setup modifications is evaluated. Lastly, the results of experiments performed using the Power System Component Simulator are presented. These include simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, and time characteristics of groups of different load units in a chosen area.
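
As a rough illustration of the under-frequency load shedding experiment mentioned above, the sketch below implements a simple staged shedding rule in Python; the stage thresholds and shed fractions are illustrative assumptions, not the settings used on the LINTE^2 simulator.

```python
# staged under-frequency load shedding (UFLS) rule of the kind exercised on the simulator;
# the thresholds and shed fractions below are illustrative assumptions only
STAGES = [(49.0, 0.10), (48.7, 0.15), (48.4, 0.20)]  # (frequency threshold in Hz, fraction of load to shed)

def ufls_shed(frequency_hz, connected_load_mw):
    shed = 0.0
    for f_threshold, fraction in STAGES:
        if frequency_hz <= f_threshold:              # each violated stage trips its own load block
            shed += fraction * connected_load_mw
    return min(shed, connected_load_mw)

for f in (49.8, 48.9, 48.3):
    print(f, "Hz ->", ufls_shed(f, 100.0), "MW shed")
```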

Keywords: power converter, Simulink Real-Time, Matlab, load, tap controller

Procedia PDF Downloads 230
4819 The Interplay between Autophagy and Macrophages' Polarization in Wound Healing: A Genetic Regulatory Network Analysis

Authors: Mayada Mazher, Ahmed Moustafa, Ahmed Abdellatif

Abstract:

Background: Autophagy is a eukaryotic, highly conserved catabolic process implicated in many pathophysiologies such as wound healing. Autophagy-associated genes serve as a scaffolding platform for signal transduction of macrophage polarization during the inflammatory phase of wound healing and the tissue repair process. In the current study, we report a model for the interplay between autophagy-associated genes and macrophage polarization-associated genes. Methods: In silico analysis was performed on 249 autophagy-related genes retrieved from the public autophagy database and on gene expression data retrieved from Gene Expression Omnibus (GEO; GSE81922 and GSE69607 microarray data), from which 199 differentially expressed genes (DEGs) related to macrophage polarization were obtained. An integrated protein-protein interaction network was constructed for the autophagy and macrophage gene sets. The gene sets were then used for GO term and pathway enrichment analysis. Common transcription factors for autophagy and macrophage polarization were identified. Finally, microRNAs enriched in both autophagy and macrophage polarization were predicted. Results: In silico prediction of common transcription factors in the macrophage DEG and autophagy gene sets revealed a new role for the transcription factors HOMEZ, GABPA, ELK1 and REL, which commonly regulate the macrophage-associated genes IL6, IL1M, IL1B, NOS1 and SOC3 and the autophagy-related genes Atg12, Rictor, Rb1cc1, Gaparab1 and Atg16l1. Conclusions: Autophagy and macrophage polarization are interdependent cellular processes, and both autophagy-related proteins and macrophage polarization-related proteins coordinate in tissue remodelling via a transcription factor and microRNA regulatory network. The current work highlights a potential new role for the transcription factors HOMEZ, GABPA, ELK1 and REL in wound healing.

Keywords: autophagy related proteins, integrated network analysis, macrophages polarization M1 and M2, tissue remodelling

Procedia PDF Downloads 132
4818 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and noisy measurements in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variations of the filtered signal and extract features that define the imagined motion. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCI systems that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is then used as the training vector of a global SVM classifier. Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (it has a 68% smaller dimension than the original signal), the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to the application of filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of over 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCI implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
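
A minimal sketch of this sub-band pipeline on synthetic data is shown below. For brevity, the per-band CSP filtering and the Bayesian meta-step are replaced by simple log band-power features and direct stacking of the LDA scores, and the sub-band set, epoch sizes, and classifier settings are assumptions for illustration rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic EEG epochs: (trials, channels, samples) at fs Hz -- placeholder data
X = rng.standard_normal((80, 22, 500)); y = rng.integers(0, 2, 80); fs = 250

spectra = np.abs(np.fft.rfft(X, axis=2))            # frequency decomposition via FFT
freqs = np.fft.rfftfreq(X.shape[2], d=1 / fs)

# overlapping 4 Hz sub-bands covering 0-40 Hz (the paper uses 33 sub-bands; this set is illustrative)
bands = [(lo, lo + 4) for lo in np.arange(0.0, 36.1, 1.0)]

def band_feats(spec, lo, hi):
    m = (freqs >= lo) & (freqs < hi)
    return np.log(spec[:, :, m].mean(axis=2) + 1e-12)  # log band power per channel (stand-in for CSP features)

train, test = slice(0, 60), slice(60, 80)
scores_tr, scores_te = [], []
for lo, hi in bands:                                  # one LDA per sub-band
    F = band_feats(spectra, lo, hi)
    lda = LinearDiscriminantAnalysis().fit(F[train], y[train])
    scores_tr.append(lda.decision_function(F[train]))
    scores_te.append(lda.decision_function(F[test]))

meta = SVC().fit(np.column_stack(scores_tr), y[train])  # global SVM over the stacked sub-band scores
print("toy accuracy:", meta.score(np.column_stack(scores_te), y[test]))
```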

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 117
4817 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique

Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran

Abstract:

Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation; it involves selecting among different MIMO transmission schemes or modes so as to adapt to the varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes, transmit diversity (TD) and spatial multiplexing (SM), using a fuzzy logic technique. In this method, two channel quality indicators (CQI), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision (an inference). The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic-based switching technique outperforms a conventional static switching technique in terms of bit error rate and spectral efficiency.
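
The decision logic can be sketched as a small rule base over the two CQIs; the membership breakpoints and rules below are illustrative assumptions, not the values used in the paper's simulations.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with breakpoints a <= b <= c."""
    return float(np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0))

def switch_scheme(rsnr_db, rssi_dbm):
    # fuzzify the two channel quality indicators (breakpoints are assumptions)
    snr_low, snr_high = tri(rsnr_db, -5, 5, 15), tri(rsnr_db, 10, 20, 35)
    rssi_weak, rssi_strong = tri(rssi_dbm, -110, -95, -80), tri(rssi_dbm, -85, -70, -50)

    # rule base: weak channel (OR of weak indicators) -> TD, strong channel (AND) -> SM
    td_strength = max(snr_low, rssi_weak)
    sm_strength = min(snr_high, rssi_strong)
    return "SM" if sm_strength > td_strength else "TD"

print(switch_scheme(25.0, -60.0))   # good channel -> spatial multiplexing
print(switch_scheme(2.0, -100.0))   # poor channel -> transmit diversity
```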

Keywords: channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity

Procedia PDF Downloads 136
4816 Net-Trainer-ST: A Swiss Army Knife for Pentesting, Based on Single Board Computer, for Cybersecurity Professionals and Hobbyists

Authors: K. Hołda, D. Śliwa, K. Daniec, A. Nawrat

Abstract:

This article was created as part of a master's thesis. It presents a newly developed device intended to support the work of specialists dealing with broadly understood cybersecurity. The device is designed to automate security tests. In addition, it simulates potential cyberattacks in the most realistic way possible, without causing permanent damage to the network, in order to maximize the quality of the subsequent corrections to the tested network systems. The proposed solution is a fully operational prototype created from commonly available electronic components and a single board computer. The focus of the article is not only on the hardware part of the device but also on the theory and practical operation of the implemented cybersecurity tests, together with examples of their results.

Keywords: Raspberry Pi, ethernet, automated cybersecurity tests, ARP, DNS, backdoor, TCP, password sniffing

Procedia PDF Downloads 109
4815 The Rise of Darknet: A Call for Understanding Online Communication of Terrorist Groups in Indonesia

Authors: Aulia Dwi Nastiti

Abstract:

A number of studies and reports on terrorism have continuously addressed the role of the internet and online activism in supporting terrorist and extremist groups. In particular, they stress social media usage, which generally serves terrorist propaganda as well as the justification of their causes. While those analyses are important for understanding how social media is a vital tool for global network terrorism, they are inadequate to explain the online communication patterns that enable terrorism acts. Beyond apparent online narratives, there is a deep cyber sphere where the very vein of the terrorism movement lies: a hidden space on the internet called the ‘darknet’. Recent investigations, particularly in the Middle Eastern context, have shed some light on how this invisible space on the internet is fundamental to maintaining terrorist activities. Despite that, only a limited number of studies examine the darknet within the issue of terrorist movements in the Indonesian context, even though the country is considered quite a hotbed for extremist groups. Therefore, this paper attempts to provide an insight into darknet operation in Indonesian cases. By exploring how the darknet is used by Indonesian-based extremist groups, this paper maps out the communication patterns of terrorist groups on the internet, which appear as an intermingled network. It shows that internet usage is differentiated across different levels of anonymity for distinct purposes. This paper further argues that the emerging terrorist communication network calls for a more comprehensive counterterrorism strategy on the internet.

Keywords: communication pattern, counterterrorism, darknet, extremist groups, terrorism

Procedia PDF Downloads 279
4814 Lessons from Implementation of a Network-Wide Safety Huddle in Behavioral Health

Authors: Deborah Weidner, Melissa Morgera

Abstract:

The model of care delivery in the Behavioral Health Network (BHN) is integrated across all five regions of Hartford Healthcare and thus spans the entirety of the state of Connecticut, with care provided in seven inpatient settings and over 30 ambulatory outpatient locations. While safety has been a core priority of the BHN in alignment with High Reliability practices, safety initiatives have historically been facilitated locally in each region or within each entity, with interventions implemented locally as opposed to throughout the network. To address this, the BHN introduced a network-wide Safety Huddle during 2022. Launched in January, the BHN Safety Huddle brought together internal stakeholders, including medical and administrative leaders, along with executive institute leadership, quality, and risk management. By bringing leaders together and introducing a network-wide safety huddle into the way we work, the benefit has been an increase in awareness of safety events occurring in behavioral health areas as well as increased systemization of countermeasures to prevent future events. One significant discussion topic presented in huddles has pertained to environmental design and patient access to potentially dangerous items, addressing some of the most relevant factors resulting in harm to behavioral health patients in inpatient and emergency settings. The safety huddle has improved visibility of potential environmental safety risks through the generation of over 15 safety alerts cascaded throughout the BHN and has also spurred a rapid improvement project focused on standardization of patient belonging searches to reduce patient access to potentially dangerous items on inpatient units. Safety events pertaining to potentially dangerous items decreased by 31% as a result of standardized interventions implemented across the network and of increased awareness. A second positive outcome originating from the BHN Safety Huddle was the implementation of a recommendation to increase the emergency Narcan® (naloxone) supply on hand in ambulatory settings of the BHN after incidents involving accidental overdose resulted in higher doses of naloxone administration. By increasing the emergency supply of naloxone on hand in all ambulatory and residential settings, colleagues are better prepared to respond in an emergency situation should a patient experience an overdose while on site. Lastly, discussions in the safety huddle spurred a new initiative within the BHN to improve responsiveness to assaultive incidents through a consultation service. This consult service, aligned with one of the network's improvement priorities to reduce harm events related to assaultive incidents, was born out of discussion in the huddle, in which it was identified that additional interventions may be needed in providing clinical care to patients who experience multiple and/or frequent safety events.

Keywords: quality, safety, behavioral health, risk management

Procedia PDF Downloads 74
4813 Secure Proxy Signature Based on Factoring and Discrete Logarithm

Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi

Abstract:

A digital signature is an electronic signature form used by an original signer to sign a specific document. When the original signer is not in the office or travels outside, he/she delegates his/her signing capability to a proxy signer, and the proxy signer then generates a signed message on behalf of the original signer. The two parties must be able to authenticate one another and agree on a secret encryption key in order to communicate securely over an unreliable public network. Authenticated key agreement protocols play an important role in building a secure communications channel between the two parties. In this paper, we present a secure proxy signature scheme built on an efficient and secure authenticated key agreement protocol based on the factoring and discrete logarithm problems.
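
As background for the key agreement step, the toy sketch below shows a Diffie-Hellman-style exchange in the discrete-logarithm setting. It uses toy parameters and omits the factoring component and the mutual authentication that the proposed protocol provides, so it is only a generic illustration, not the authors' scheme.

```python
import secrets

# Toy parameters: p is the Mersenne prime 2**127 - 1 and g = 3. A real protocol would use a
# standardized large group and add authentication (and, in this paper, a factoring-based part).
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 2          # original signer's secret exponent
b = secrets.randbelow(p - 2) + 2          # proxy signer's secret exponent
A, B = pow(g, a, p), pow(g, b, p)         # public values exchanged over the network

k_original = pow(B, a, p)                 # both parties derive the same session key
k_proxy = pow(A, b, p)
assert k_original == k_proxy              # shared secret used to secure the delegation message
```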

Keywords: discrete logarithm, factoring, proxy signature, key agreement

Procedia PDF Downloads 291
4812 A Neural Approach for the Offline Recognition of the Arabic Handwritten Words of the Algerian Departments

Authors: Salim Ouchtati, Jean Sequeira, Mouldi Bedda

Abstract:

In this work, we present an offline system for the recognition of Arabic handwritten words naming the Algerian departments. The study is based mainly on evaluating the performance of a neural network trained with the gradient backpropagation algorithm. The parameters used to form the input vector of the neural network are extracted from the binary image of the handwritten word by several methods: distribution parameters, the central moments of the different projections, and the Barr features. It should be noted that these methods are applied to segments obtained after dividing the binary image of the word into six segments. The classification is achieved by a multi-layer perceptron. Detailed experiments are carried out, and satisfactory recognition results are reported.
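
A minimal sketch of the classification stage is shown below with scikit-learn, assuming the segment features have already been extracted; the feature dimension, number of classes, and network sizes are placeholders, not the values used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# placeholder feature vectors standing in for distribution parameters, central moments of the
# projections, and Barr-like features computed on the six word segments
n_words, n_features, n_classes = 480, 96, 48      # assumed sizes: one class per department name
X = rng.standard_normal((n_words, n_features))
y = rng.integers(0, n_classes, n_words)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# multi-layer perceptron trained with gradient backpropagation (sgd solver)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), solver="sgd",
                    learning_rate_init=0.01, max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("toy recognition rate:", mlp.score(X_te, y_te))
```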

Keywords: handwritten word recognition, neural networks, image processing, pattern recognition, features extraction

Procedia PDF Downloads 501
4811 Internal Displacement in Iraq due to ISIS Occupation and Its Effects on Human Security and Coexistence

Authors: Feisal Khudher Mahmood, Abdul Samad Rahman Sultan

Abstract:

Iraq had been a diverse society with races, cultures and religions that peacefully coexisted. The phenomenon of internal displacement occurred after April 2003 because of political instability as well as the deterioration of the political and security situation as a result of the United States occupation. The largest internal displacement has occurred (and continues) since 10 June 2014 due to the rise of the Islamic State of Iraq and Syria (ISIS) and its occupation of one third of the country's territory. This crisis directly affected 3,275,000 people, reflected negatively on the social fabric of the Iraqi community, and led to waves of sectarian violence that swept the country. Internally displaced communities are vulnerable, especially under a non-functional and weak government, which leads to the loss of essential human rights and dignity. Using Geographic Information Systems (GIS) and geospatial techniques, two types of internal displacement have been identified: voluntary and forced. Both types of displacement are highly influenced by location, race, and religion. The main challenge for the Iraqi government and NGOs will come after defeating ISIS: helping the displaced to resettle within their communities and to re-establish coexistence. Through spatial-statistical analysis, hot spots of future conflicts among the displaced community have been highlighted. This will help the government to tackle future conflicts before they occur. It will also be the basis for a social conflict early warning system.

Keywords: internal displacement, Iraq, ISIS, human security, human rights, GIS, spatial-statistical analysis

Procedia PDF Downloads 511
4810 ATC in Competitive Electricity Market Using TCSC

Authors: S. K. Gupta, Richa Bansal

Abstract:

In a deregulated power system structure, power producers and customers share a common transmission network for wheeling power from the point of generation to the point of consumption. All parties in this open access environment may try to purchase energy from the cheaper source for greater profit margins, which may lead to overloading and congestion of certain corridors of the transmission network. This may result in violation of line flow, voltage, and stability limits and thereby undermine system security. Utilities therefore need to adequately determine their Available Transfer Capability (ATC) to ensure that system reliability is maintained while serving a wide range of bilateral and multilateral transactions. This paper presents power transfer distribution factors based on AC load flow for the determination and enhancement of ATC. The study has been carried out for the IEEE 24-bus Reliability Test System.
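
For illustration, the sketch below computes power transfer distribution factors and an ATC estimate for a toy 3-bus network using the simpler DC approximation (the paper's PTDFs are based on AC load flow); the line data, limits, and injections are hypothetical.

```python
import numpy as np

# toy 3-bus DC example (hypothetical data): lines as (from, to, susceptance, limit MW)
lines = [(0, 1, 10.0, 100.0), (1, 2, 10.0, 100.0), (0, 2, 10.0, 100.0)]
n_bus, slack = 3, 0

# build the DC Bbus and branch matrices
Bbus = np.zeros((n_bus, n_bus))
Bf = np.zeros((len(lines), n_bus))
for k, (i, j, b, _) in enumerate(lines):
    Bbus[i, i] += b; Bbus[j, j] += b
    Bbus[i, j] -= b; Bbus[j, i] -= b
    Bf[k, i], Bf[k, j] = b, -b

keep = [bus for bus in range(n_bus) if bus != slack]
ptdf = np.zeros((len(lines), n_bus))
ptdf[:, keep] = Bf[:, keep] @ np.linalg.inv(Bbus[np.ix_(keep, keep)])

p0 = np.array([50.0, 20.0, -70.0])          # base-case injections (MW), summing to zero
f0 = ptdf @ p0                              # base-case line flows
limits = np.array([l[3] for l in lines])

# ATC for a bilateral transaction from seller bus 1 to buyer bus 2
sens = ptdf[:, 1] - ptdf[:, 2]              # flow change on each line per MW transferred
margins = [(lim - f) / s for f, lim, s in zip(f0, limits, sens) if s > 1e-6]
margins += [(-lim - f) / s for f, lim, s in zip(f0, limits, sens) if s < -1e-6]
print("ATC (MW) ~", min(margins))
```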

Keywords: available transfer capability, FACTS devices, power transfer distribution factors, electric

Procedia PDF Downloads 489
4809 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection

Authors: Muhammad Ali

Abstract:

Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scanning, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart and IoT applications, there is a need for intelligent processing and analysis of data, and our approach is to secure such systems. We train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, considering IoT applications and using ANOVA-based feature selection to obtain smaller prediction models that evaluate network traffic and help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, DT, and RF, assessed for the most satisfactory test accuracy with fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, GM, MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with the least prediction time, with an accuracy of 99.98%.
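
A minimal sketch of the ANOVA-based feature selection step followed by a Random Forest classifier is shown below on synthetic traffic-like data; the feature counts, number of classes, and the value of k are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# synthetic stand-in for labelled IoT traffic records (features and attack/anomaly classes)
X, y = make_classification(n_samples=3000, n_features=40, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# ANOVA F-test keeps the k features that best separate the traffic classes
selector = SelectKBest(score_func=f_classif, k=15).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr_sel, y_tr)
print(classification_report(y_te, rf.predict(X_te_sel)))   # precision, recall, F1 per class
```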

Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection

Procedia PDF Downloads 105
4808 Smart in Performance: More to Practical Life than Hardware and Software

Authors: Faten Hatem

Abstract:

This paper promotes the importance of focusing on the spatial aspects and affective factors that impact smart urbanism. This helps to better inform city governance, spatial planning, and policymaking to focus on what 'smart' does and what it can achieve for cities in terms of performance, rather than on using the notion for prestige in a worldwide trend towards becoming a smart city. By illustrating how this style of practice compromises the social aspects and related elements of space making, through an interdisciplinary comparative approach, the paper clarifies the impact of this compromise on overall smart city performance. In response, this paper recognizes the importance of establishing a new meaning for urban progress by moving beyond improving the basic services of the city to enhancing the actual human experience, which is essential for the development of authentic smart cities. The topic is presented under five overlooked areas that discuss the relation between smart cities' potential and the efficiency paradox, the social aspect, connectedness with nature, the human factor, and untapped resources. However, these themes are not meant to be discussed in silos; instead, they are presented to collectively examine smart cities in performance, arguing that there is more to the practical life of smart cities than software and hardware inventions. The study is based on a case study approach, presenting Milton Keynes as a living example to learn from while engaging with various methods of data collection, including multi-disciplinary semi-structured interviews, field observations, and data mining.

Keywords: smart design, the human in the city, human needs and urban planning, sustainability, smart cities, smart

Procedia PDF Downloads 85
4807 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder

Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa

Abstract:

Earthquakes are natural hazards that can trigger the most dangerous hazard, a tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami which struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia and Singapore, and more than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but this hazard can be mitigated by earthquake forecasting. Early preparation is the key factor in reducing damages and consequences. We aim to investigate earthquake patterns quantitatively in order to identify the trend. We study the earthquakes that have happened on Andalas Island, Indonesia, over the last decade. Andalas is an island with high seismicity; more than a thousand events occur in a year, because Andalas Island lies in the tectonic subduction zone of the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is needed for mitigation action; thus, a tsunami forecasting method is presented in this work. Neural networks have been used widely in research to estimate earthquakes, and it is believed that by using the backpropagation method, earthquakes can be predicted. First, the ANN is trained to predict the 26 December 2004 tsunami using the earthquake data preceding it. Then, after we obtain the trained ANN, we apply it to predict the next earthquake. Not all earthquakes will trigger a tsunami; there are certain characteristics of an earthquake that can cause a tsunami. A wrong decision can cause other problems in society, so we need a method to reduce the possibility of wrong decisions. Fuzzy TOPSIS is a statistical method that is widely used as a decision seconder with respect to given parameters. The fuzzy TOPSIS method can make the best decision on whether an earthquake causes a tsunami or not. This work combines earthquake prediction using the neural network method with fuzzy TOPSIS to determine whether the earthquake triggers a tsunami wave or not. The neural network model is capable of capturing non-linear relationships, and fuzzy TOPSIS can determine the best decision better than other statistical methods in tsunami prediction.
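
The decision step can be illustrated with a crisp TOPSIS sketch (the fuzzy variant replaces the crisp ratings with triangular fuzzy numbers); the criteria, ratings, and weights below are hypothetical.

```python
import numpy as np

# candidate earthquakes rated on tsunami-relevant criteria (hypothetical values):
# magnitude (benefit), focal depth in km (cost: shallower is worse), distance to coast in km (cost)
ratings = np.array([[8.9, 10.0,  40.0],
                    [7.1, 80.0, 300.0],
                    [7.8, 25.0, 120.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])

# 1) vector-normalize and weight the decision matrix
norm = ratings / np.linalg.norm(ratings, axis=0)
v = norm * weights

# 2) ideal and anti-ideal solutions per criterion
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))

# 3) closeness coefficient: higher means the event is closer to the "tsunami-triggering" ideal
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print(closeness)   # rank events by tsunami likelihood
```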

Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami

Procedia PDF Downloads 475
4806 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems

Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu

Abstract:

In the research of automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method perform excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
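
A minimal sketch of the knowledge-graph-augmented retrieval idea is given below; for brevity the GNN embedding step is replaced by a simple graph-neighborhood boost, and the triples, passages, and scores are toy placeholders rather than the authors' data or model.

```python
import networkx as nx

# toy knowledge graph from extracted (entity, entity) relations (placeholder triples)
kg = nx.Graph()
kg.add_edges_from([("photosynthesis", "chlorophyll"), ("chlorophyll", "light absorption"),
                   ("mitochondria", "ATP"), ("ATP", "cellular respiration")])

# candidate passages with toy dense-retrieval scores (these would come from an embedding model)
passages = {"p1": "chlorophyll drives light absorption in plant cells",
            "p2": "ATP is produced during cellular respiration",
            "p3": "enzymes lower the activation energy of reactions"}
dense_score = {"p1": 0.41, "p2": 0.38, "p3": 0.45}

def kg_boost(text, query_entities, hops=1):
    """Count graph entities within `hops` of the query entities that appear in the passage."""
    neighborhood = set()
    for e in query_entities:
        if e in kg:
            neighborhood |= set(nx.single_source_shortest_path_length(kg, e, cutoff=hops))
    return sum(1 for ent in neighborhood if ent in text)

query_entities = ["photosynthesis"]          # entities extracted from the question by NLP
final = {p: dense_score[p] + 0.2 * kg_boost(t, query_entities) for p, t in passages.items()}
best = max(final, key=final.get)             # passage handed to the generator as extra context
print(best, final)
```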

Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP

Procedia PDF Downloads 11
4805 An Approach to Analyze Testing of Nano On-Chip Networks

Authors: Farnaz Fotovvatikhah, Javad Akbari

Abstract:

The test time of a test architecture is an important factor which depends on the architecture's delay and the test patterns. Here, a new architecture for storing test results, based on a network on chip, is presented. In addition, a simple analytical model is proposed to calculate the link test time for a built-in self-tester (BIST) and an external tester (Ext) in multiprocessor systems. The results extracted from the model are verified using an FPGA implementation and experimental measurements. Systems consisting of 16, 25, and 36 processors are implemented and simulated, and the test time is calculated. In addition, BIST and Ext are compared in terms of test time under different conditions, such as different numbers of test patterns and nodes. Using the model, the maximum testing frequency can be calculated and the test structure optimized for high-speed testing.

Keywords: test, nano on-chip network, JTAG, modelling

Procedia PDF Downloads 472
4804 Awareness and Utilization of Social Network Tools among Agricultural Science Students in Colleges of Education in Ogun State, Nigeria

Authors: Adebowale Olukayode Efunnowo

Abstract:

This study was carried out to assess the awareness and utilization of Social Network Tools (SNTs) among agricultural science students in Colleges of Education in Ogun State, Nigeria. Simple random sampling techniques were used to select 280 respondents from the study area. Descriptive statistics were used to describe the objectives, while Pearson Product Moment Correlation was used to test the hypothesis. The results showed that the majority (71.8%) of the respondents were single, with a mean age of 20 years. Almost all (95.7%) of the respondents were aware of Facebook and 2go as Social Network Tools (SNTs), while 85.0% of the respondents were not aware of Blackplanet, LinkedIn, MyHeritage and Bebo. Many (41.1%) of the respondents held the view that using SNTs can enhance extensive literature surveys, increase internet browsing potential, promote teaching proficiency, and provide updates on research outcomes. However, 51.4% of the respondents perceived SNT usage as meant for lecturers/adults only, while 16.1% considered it mainly used by internet fraudsters. Findings revealed that about 50.0% of the respondents browsed Facebook and 2go daily, while more than 80% of the respondents used Blackplanet, MyHeritage, Skyrock, Bebo, LinkedIn and My YearBook as the need arises. Major constraints to the awareness and utilization of SNTs were the high cost and poor quality of ICT facilities (77.1%), epileptic power supply (75.0%), inadequate telecommunication infrastructure (71.1%), low technical know-how (62.9%) and inadequate computer knowledge (61.1%). The result of the PPMC analysis showed that there was an inverse relationship between constraints and utilization of SNTs at p < 0.05. It can be concluded that constraints affect the efficient and effective utilization of SNTs in the study area. It is hereby recommended that the management of colleges of education and agricultural institutes should provide good internet connectivity, computer facilities, and an alternative power supply in order to increase the awareness and utilization of SNTs among students.

Keywords: awareness, utilization, social network tools, constraints, students

Procedia PDF Downloads 338
4803 Variations in Wood Traits across Major Gymnosperm and Angiosperm Tree Species and the Driving Factors in China

Authors: Meixia Zhang, Chengjun Ji, Wenxuan Han

Abstract:

Many wood traits are important functional attributes for tree species, connected with resource competition among species, community dynamics, and ecosystem functions. Large variations in these traits exist among taxonomic categories, but variation in these traits between gymnosperms and angiosperms is still poorly documented. This paper explores the systematic differences in 12 traits between the two tree categories and the potential effects of environmental factors and life form. Based on a database of wood traits for major gymnosperm and angiosperm tree species across China, the values of the 12 wood traits and their driving factors in gymnosperms vs. angiosperms were compared. The results are summarized below: i) Mean values of the wood traits were all significantly lower in gymnosperms than in angiosperms. ii) Air-dried density (ADD) and the tangential shrinkage coefficient (TSC) capture the basic information on wood traits for gymnosperms, while ADD and the radial shrinkage coefficient (RSC) do so for angiosperms, providing higher explanatory power when used as evaluation indices of wood traits. iii) For both gymnosperm and angiosperm species, life form explains the largest share of the large-scale spatial patterns of ADD and TSC (RSC), climatic factors the next, and edaphic factors have the least effect, suggesting that life form is the dominant factor controlling the spatial patterns of wood traits. Variations in the magnitude and key traits between gymnosperms and angiosperms, together with the same dominant factors, might indicate evolutionary divergence and convergence in key functional traits among woody plants.

Keywords: allometry, functional traits, phylogeny, shrinkage coefficient, wood density

Procedia PDF Downloads 255
4802 Using Historical Data for Stock Prediction

Authors: Sofia Stoica

Abstract:

In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-Nearest Neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed the best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that using historical data is enough to obtain high accuracies, and that a simple algorithm like linear regression has performance similar to more sophisticated models while taking less time and fewer resources to implement.
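
A minimal sketch of the simplest setup described above (a linear regression on lagged closing prices) is shown below on synthetic data; the lag length and the synthetic price series are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# synthetic daily closing prices standing in for five years of one company's stock
rng = np.random.default_rng(2)
prices = 100 + np.cumsum(rng.normal(0.05, 1.0, 1250))

# build lagged features: predict tomorrow's close from the previous `lags` closes
lags = 5
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]

split = int(0.8 * len(y))                     # chronological train/test split
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])

mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"toy testing error: {mape:.2f}%")
```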

Keywords: finance, machine learning, opening price, stock market

Procedia PDF Downloads 167
4801 A Palmprint Identification System Based Multi-Layer Perceptron

Authors: David P. Tantua, Abdulkader Helwan

Abstract:

Biometrics has recently been used for human identification systems based on biological traits such as fingerprints and iris scans. Biometrics-based identification systems show great efficiency and accuracy in such human identification applications. However, these types of systems are so far based on image processing techniques only, which may decrease the efficiency of such applications. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn using a backpropagation learning algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing pipeline is as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and removal of unwanted components of the image. The second phase is to feed those processed images into a neural network classifier which adaptively learns and creates a class for each different image. 100 different images are used for training the system. Since this is an identification system, it should be tested with the same images; therefore, the same 100 images are used for testing it, and any image outside the training set should be unrecognized. The experimental results show that the developed system has a high accuracy of 100% and can be implemented in real-life applications.
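
A minimal sketch of the processing pipeline and classifier is shown below using scikit-image and scikit-learn; the synthetic images, thresholds, and network size are placeholders, and the feature extraction is a simplified stand-in for the paper's exact steps.

```python
import numpy as np
from skimage import filters, feature, morphology
from sklearn.neural_network import MLPClassifier

def preprocess(palm_img):
    """Median filtering, contrast adjustment, skeletonizing, and Canny edge detection."""
    img8 = (palm_img * 255).astype(np.uint8)
    smoothed = filters.median(img8)                        # remove salt-and-pepper noise
    adjusted = (smoothed - smoothed.min()) / (np.ptp(smoothed) + 1e-9)  # simple contrast adjustment
    skeleton = morphology.skeletonize(adjusted > 0.5)      # thin the palm lines
    edges = feature.canny(adjusted)                        # edge features (Canny operator)
    return np.concatenate([skeleton.ravel(), edges.ravel()]).astype(float)

# placeholder palmprint images (one synthetic image per identity, standing in for CASIA samples)
rng = np.random.default_rng(3)
images = rng.random((100, 64, 64))
labels = np.arange(100)                                    # one class per image, as in the study

X = np.stack([preprocess(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0).fit(X, labels)
print("training (identification) accuracy:", clf.score(X, labels))
```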

Keywords: biometrics, biological traits, multi-layer perceptron neural network, image skeletonizing, edge detection using canny operator

Procedia PDF Downloads 360
4800 UniFi: Universal Filter Model for Image Enhancement

Authors: Aleksei Samarin, Artyom Nazarenko, Valentin Malykh

Abstract:

Image enhancement is becoming more and more popular, especially on mobile devices. Nowadays, a common approach is to enhance an image using a convolutional neural network (CNN). Such a network should be of significant size; otherwise, the possibility of artifacts occurring grows. Existing large CNNs are computationally expensive, which can be crucial for mobile devices. Another important flaw of such models is that they are poorly interpretable. There is another approach to image enhancement, namely the usage of predefined filters in combination with the prediction of their applicability. We present an approach following this paradigm, which outperforms both existing CNN-based and filter-based approaches in the image enhancement task. It is easily adaptable for mobile devices since it has only 47 thousand parameters. It achieves the best SSIM of 0.919 on RANDOM250 (MIT Adobe FiveK) among small models and is three times faster than previous models.

Keywords: universal filter, image enhancement, neural networks, computer vision

Procedia PDF Downloads 91
4799 Bi-objective Network Optimization in Disaster Relief Logistics

Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann

Abstract:

Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging in disaster relief operations. The need to balance critical performance criteria like response time, meeting demand and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and the magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need to provide robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates pre-positioning, allocation, and distribution of relief supplies extending the general form of a covering location problem. The proposed model aims to minimize underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions to address the risk of disruptions. We provide an empirical case study of the public authorities’ emergency food storage strategy in Germany to illustrate the potential applicability of the model and provide implications for decision-makers in a real-world setting. Also, we conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and inventory based on minimizing costs and maximizing demand satisfaction. The strategy has the potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities to determine an efficient stockpiling strategy and distribution network and provides recommendations for increased resilience. However, certain factors have yet to be considered in this study and should be addressed in future works, such as additional network constraints and heuristic algorithms.
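
A minimal sketch of a covering-location formulation of this type is shown below using PuLP, with the two objectives combined by a weighted sum rather than the scenario-based treatment in the paper; the depots, demands, costs, and coverage matrix are toy assumptions.

```python
import pulp

# toy data (hypothetical): candidate depots, demand points, costs, and coverage relation
depots = ["D1", "D2", "D3"]
demand = {"P1": 120, "P2": 80, "P3": 150}
open_cost = {"D1": 500, "D2": 400, "D3": 650}
covers = {("D1", "P1"): 1, ("D1", "P2"): 1, ("D1", "P3"): 0,
          ("D2", "P1"): 0, ("D2", "P2"): 1, ("D2", "P3"): 1,
          ("D3", "P1"): 1, ("D3", "P2"): 0, ("D3", "P3"): 1}

w_cost, w_cov = 1.0, 10.0   # weighted-sum trade-off between cost and coverage

prob = pulp.LpProblem("relief_prepositioning", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", depots, cat="Binary")
z = pulp.LpVariable.dicts("covered", demand, cat="Binary")

# objective: minimize opening cost minus the (weighted) demand that is covered
prob += (w_cost * pulp.lpSum(open_cost[d] * y[d] for d in depots)
         - w_cov * pulp.lpSum(demand[p] * z[p] for p in demand))

# a demand point counts as covered only if at least one covering depot is opened
for p in demand:
    prob += z[p] <= pulp.lpSum(covers[(d, p)] * y[d] for d in depots)

prob += pulp.lpSum(y[d] for d in depots) <= 2   # budget on the number of depots

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([d for d in depots if y[d].value() == 1],
      [p for p in demand if z[p].value() == 1])
```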

Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks

Procedia PDF Downloads 65
4798 Green Closed-Loop Supply Chain Network Design Considering Different Production Technologies Levels and Transportation Modes

Authors: Mahsa Oroojeni Mohammad Javad

Abstract:

The globalization of economic activity and the rapid growth of information technology have resulted in shorter product lifecycles, reduced transport capacity, dynamic and changing customer behavior, and an increased focus on supply chain design in recent years. The design of the supply chain network is one of the most important supply chain management decisions. These decisions will have a long-term impact on the efficacy and efficiency of the supply chain. In this paper, a two-objective mixed-integer linear programming (MILP) model is developed for designing and optimizing a closed-loop green supply chain network that, to the greatest extent possible, includes all real-world assumptions such as a multi-level supply chain, a multiplicity of production technologies, and multiple modes of transportation, with the goals of minimizing the total cost of the chain (first objective) and minimizing total emissions (second objective). The ε-constraint method and the CPLEX solver have been used to solve the problem as a single-objective problem and to validate the model. Finally, a sensitivity analysis is applied to study the effect of changes in the real-world parameters on the objective function. Optimal management suggestions and policies are presented.

Keywords: closed-loop supply chain, multi-level green supply chain, mixed-integer programming, transportation modes

Procedia PDF Downloads 64
4797 The Analysis of Movement Pattern during Reach and Grasp in Stroke Patients: A Kinematic Approach

Authors: Hyo Seon Choi, Ju Sun Kim, DY Kim

Abstract:

Introduction: This study aimed to evaluate the temporo-spatial patterns during the reach and grasp task in hemiplegic stroke patients and to identify movement patterns according to the severity of motor impairment. Method: 29 subacute post-stroke patients were enrolled in this study. The temporo-spatial and kinematic data were obtained during the reach and grasp task through 3D motion analysis (VICON). The reach and grasp task was composed of four sub-tasks: reach (T1), transport to mouth (T2), transport back to table (T3), and return (T4). The movement time, joint angles, and sum of deviation angles from normative data were compared between the affected and unaffected sides. They were also compared between two groups (mild to moderate group: 28-66, severe group: 0-27) divided by the upper Fugl-Meyer Assessment (FMA) scale. Results: On the affected side, the total time and the durations of all four tasks were significantly longer than those on the unaffected side (p < 0.001). The affected side demonstrated significantly larger shoulder abduction, shoulder internal rotation, wrist flexion, wrist pronation, and thoracic external rotation, and smaller shoulder flexion during the reach and grasp task (p < 0.05). Significant differences between the mild to moderate group and the severe group were observed in the total duration and the durations of T1, T2, and T3 in the reach and grasp task (p < 0.01). The severe group showed significantly larger shoulder internal rotation during T2 (p < 0.05) and wrist flexion during T2 and T3 (p < 0.05) than the mild to moderate group. In range of motion during each task, shoulder abduction-adduction during T2 and T3, shoulder internal-external rotation during T2, and elbow flexion-extension during T1 showed significant differences between the two groups (p < 0.05). The severe group had significantly larger total deviation angles in shoulder internal-external rotation and wrist extension-flexion during the reach and grasp task (p < 0.05). Conclusion: This study suggests that post-stroke hemiplegic patients have unique temporo-spatial and kinematic patterns during the reach and grasp task, and the movement pattern may be related to the severity of the affected upper limb. These results may be useful for interpreting the motion of the upper extremity in stroke patients.

Keywords: Fugl-Meyer Assessment (FMA), motion analysis, reach and grasp, stroke

Procedia PDF Downloads 225
4796 Determination of the Botanical Origin of Honey by the Artificial Neural Network Processing of PARAFAC Scores of Fluorescence Data

Authors: Lea Lenhardt, Ivana Zeković, Tatjana Dramićanin, Miroslav D. Dramićanin

Abstract:

Fluorescence spectroscopy coupled with parallel factor analysis (PARAFAC) and artificial neural networks (ANNs) was used for the characterization and classification of honey. Excitation-emission spectra were obtained for 95 honey samples of different botanical origin (acacia, sunflower, linden, meadow, and fake honey) by recording emission from 270 to 640 nm with excitation in the range of 240-500 nm. The fluorescence spectra were described with a six-component PARAFAC model, and the PARAFAC scores were further processed with two types of ANNs (a feed-forward network and self-organizing maps) to obtain algorithms for the classification of honey on the basis of its botanical origin. Both ANNs detected fake honey samples with 100% sensitivity and specificity.
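
A minimal sketch of the PARAFAC-scores-to-ANN workflow is shown below, assuming the TensorLy and scikit-learn libraries; the synthetic excitation-emission tensor and the network size are placeholders, and a single feed-forward classifier stands in for both ANN types used in the study.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.neural_network import MLPClassifier

# synthetic excitation-emission matrices standing in for the 95 honey samples:
# tensor of shape (samples, excitation wavelengths, emission wavelengths)
rng = np.random.default_rng(4)
eem = np.abs(rng.standard_normal((95, 27, 75)))
labels = rng.integers(0, 5, 95)            # acacia, sunflower, linden, meadow, fake

# six-component PARAFAC model; the sample-mode factor matrix holds the scores
cp = parafac(tl.tensor(eem), rank=6)
scores = tl.to_numpy(cp.factors[0])        # shape (95, 6)

# feed the PARAFAC scores to a feed-forward ANN classifier
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
ann.fit(scores[:70], labels[:70])
print("toy classification accuracy:", ann.score(scores[70:], labels[70:]))
```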

Keywords: honey, fluorescence, PARAFAC, artificial neural networks

Procedia PDF Downloads 938
4795 The Impact on the Network Deflectometry

Authors: Djamel–Eddine Yassine Boutiba

Abstract:

In the present work, we present the various deflectometry measurements leading to the dimensioning of the strengthening of existing roadways. It is recalled that the road network in Algeria plays a major role with regard to traffic flow in major strategic areas, especially in the northern fringe of Algeria. Heavy traffic passing through the northern fringe (between 25% and 30% heavy vehicles) causes substantial degradation of both the surface layer and the base layer. The on-site work, carried out with the resources of the CTTP laboratory such as the Lacroix deflectograph, allowed us to record a large number of localized deflection measurements on RN19A (Carrefour CW73-Ain-Merane), and the analysis of the results led us to opt for strengthening along the whole project strip. The recordings with the HWD (Heavy Weight Deflectometer), in turn, allowed us to learn about the behavior of the pavement on the banks. In addition, the Alize III software was essential in verifying the dimensioned thickness increase.

Keywords: capacity, deflection, deflectograph lacroix, degradation, hwd

Procedia PDF Downloads 270
4794 Development of a Standardization Methodology Assessing the Comfort Performance for Hanok

Authors: Mi-Hyang Lee, Seung-Hoon Han

Abstract:

Korean traditional residences have been built for thousands of years with deep design considerations reflecting social, cultural, and environmental values, but their meaning is vanishing due to today's different lifestyles. It is necessary, therefore, to grasp the meaning of the Korean traditional building called Hanok and to help Korean people understand its real advantages. The purpose of this study is to propose a standardization methodology for evaluating the comfort features of Korean traditional houses. This paper also aims to build an official standard evaluation system and to integrate the aesthetic and psychological values induced by Hanok. Its comfort performance values can be divided into two large categories, physical and psychological, and fourteen methods have been defined as Korean Standards (KS). For this research, field survey data from representative Hanok types were collected for each method. This study also contains a qualitative in-depth analysis of the Hanok comfort index by professionals using the AHP (Analytical Hierarchy Process) and examines the effect of the methods. As a result, this paper defines which methods can provide trustworthy outcomes and how to evaluate the strengths of Hanok in terms of spatial comfort using the suggested procedures for the spatial configuration of the traditional dwellings. This study finally proposes the integrated development of a standardization methodology assessing the comfort performance of Korean traditional residences, and it is expected that it can be used to evaluate residents' living conditions and interior environmental conditions, especially in dwellings structured with wood materials like Hanok.
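
The AHP step can be sketched as follows: a pairwise comparison matrix is converted into criteria weights via its principal eigenvector, and a consistency ratio checks the expert judgements; the criteria and comparison values below are hypothetical, not those elicited in the study.

```python
import numpy as np

# pairwise comparison matrix for hypothetical Hanok comfort criteria on Saaty's 1-9 scale,
# e.g. thermal comfort, daylight, acoustics, spatial/psychological comfort
A = np.array([[1.0, 3.0, 5.0, 2.0],
              [1/3, 1.0, 3.0, 1/2],
              [1/5, 1/3, 1.0, 1/4],
              [1/2, 2.0, 4.0, 1.0]])

# priority vector = principal eigenvector of the comparison matrix
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# consistency ratio (CR < 0.1 means the judgements are acceptably consistent)
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index for matrices of size n
print("criteria weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```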

Keywords: Hanok, comfort performance, human condition, analytical hierarchy process

Procedia PDF Downloads 141
4793 Flame Retardant Study of Methylol Melamine Phosphate-Treated Cotton Fibre

Authors: Nurudeen Afolami Ayeni, Kasali Bello

Abstract:

Methylolmelamines with an increasing degree of methylol substitution and their phosphate derivatives were used to resinate cotton fabric (CF). The resination was carried out at different curing times and curing temperatures. Generally, the results show a reduction in the flame propagation rate of the treated fabrics compared to the untreated cotton fabric (CF). While the flame retardancy of the methylolmelamine-treated fibre can be attributed to the degree of crosslinking of the fibre-resin network, which promotes stability, the methylolmelamine phosphate-treated fabrics show better retardancy due to the intumescent action of the phosphate resin upon decomposition in the resin-fabric network.

Keywords: cotton fabric, flame retardant, methylolmelamine, crosslinking, resination

Procedia PDF Downloads 371
4792 Impacts of Aquaculture Farms on the Mangroves Forests of Sundarbans, India (2010-2018): Temporal Changes of NDVI

Authors: Sandeep Thakur, Ismail Mondal, Phani Bhusan Ghosh, Papita Das, Tarun Kumar De

Abstract:

The Sundarbans Reserve Forest of India has been undergoing major transformations in the recent past owing to population pressure and related changes. This has brought about major changes in the spatial landscape of the region, especially in the western parts. This study attempts to assess the impacts of land cover changes on the mangrove habitats. Time-series Landsat imagery was used to analyze the Normalized Difference Vegetation Index (NDVI) patterns over the western parts of the Indian Sundarbans forest in order to assess the health of the mangroves in the region. The images were subjected to Land Use Land Cover (LULC) classification using sub-pixel classification techniques in ERDAS Imagine software, and the changes were mapped. The spatial proliferation of aquaculture farms during the study period was also mapped. A multivariate regression analysis was carried out between the obtained NDVI values and the LULC classes. Similarly, the observed meteorological data sets (time-series rainfall and minimum and maximum temperature) were also statistically correlated by regression. The study demonstrated the application of NDVI in assessing the environmental status of mangroves, as the relationship between the changes in the environmental variables and the remote sensing based indices facilitates an efficient evaluation of environmental variables, which can be used in coastal zone monitoring and development processes.
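
The NDVI computation underlying the analysis can be sketched as below; the reflectance arrays are synthetic stand-ins for the Landsat NIR and Red bands of two acquisition dates.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero."""
    nir, red = nir.astype(float), red.astype(float)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# synthetic reflectance bands standing in for Landsat NIR and Red bands of two dates
rng = np.random.default_rng(5)
nir_2010, red_2010 = rng.uniform(0.2, 0.6, (100, 100)), rng.uniform(0.05, 0.2, (100, 100))
nir_2018, red_2018 = rng.uniform(0.1, 0.5, (100, 100)), rng.uniform(0.05, 0.25, (100, 100))

change = ndvi(nir_2018, red_2018) - ndvi(nir_2010, red_2010)
print("mean NDVI change 2010->2018:", change.mean())   # negative values suggest vegetation loss
```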

Keywords: aquaculture farms, LULC, Mangrove, NDVI

Procedia PDF Downloads 164