Search results for: conventional neural network
5901 Semi-Supervised Outlier Detection Using a Generative and Adversary Framework
Authors: Jindong Gu, Matthias Schubert, Volker Tresp
Abstract:
In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks.
Keywords: one-class classification, outlier detection, generative adversary networks, semi-supervised learning
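The abstract describes CorGAN only at a high level; the core idea, a discriminator trained to separate positive data from artificially generated ("corrupted") negatives, whose score then flags outliers, can be sketched as follows. Everything concrete here (the toy data, the squared-feature logistic discriminator standing in for a neural network, the fixed noise sampler standing in for a trained Generator) is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Positive (inlier) training data: a tight 2-D Gaussian cluster.
pos = rng.normal(0.0, 1.0, size=(200, 2))
# Stand-in for the Generator: "corrupted" negatives drawn from a wide box.
neg = rng.uniform(-6.0, 6.0, size=(200, 2))

X = np.vstack([pos, neg]) ** 2          # squared features separate cluster from box
y = np.concatenate([np.ones(200), np.zeros(200)])
mu, sd = X.mean(axis=0), X.std(axis=0)  # standardize for stable gradient descent
Z = (X - mu) / sd

# Logistic-regression discriminator trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    w -= 0.5 * Z.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

def inlier_score(point):
    """Discriminator output: high for positive-class points, low for outliers."""
    z = (np.asarray(point, dtype=float) ** 2 - mu) / sd
    return float(1.0 / (1.0 + np.exp(-(z @ w + b))))
```

A point far from the positive cluster receives a low score and would be flagged as an outlier.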
Procedia PDF Downloads 151
5900 Anajaa-Visual Substitution System: A Navigation Assistive Device for the Visually Impaired
Authors: Juan Pablo Botero Torres, Alba Avila, Luis Felipe Giraldo
Abstract:
Independent navigation and mobility through unknown spaces pose a challenge for the autonomy of visually impaired people (VIP), who have relied on the use of traditional assistive tools like the white cane and trained dogs. However, emerging visually assistive technologies (VAT) have proposed several human-machine interfaces (HMIs) that could improve VIPs' ability for self-guidance. Here, we introduce the design and implementation of a visually assistive device, Anajaa – Visual Substitution System (AVSS). This system integrates ultrasonic sensors, custom electronics, and computer vision models (convolutional neural networks) in order to achieve a robust system that acquires information about the surrounding space and transmits it to the user in an intuitive and efficient manner. AVSS consists of two modules: the sensing and the actuation module, which are fitted to a chest mount and belt that communicate via Bluetooth. The sensing module was designed for the acquisition and processing of proximity signals provided by an array of ultrasonic sensors. The distribution of these within the chest mount allows an accurate representation of the surrounding space, discretized in three different levels of proximity, ranging from 0 to 6 meters. Additionally, this module is fitted with an RGB-D camera used to detect potentially threatening obstacles, like staircases, using a convolutional neural network specifically trained for this purpose. Subsequently, the depth data is used to estimate the distance between the stairs and the user. The information gathered from this module is then sent to the actuation module, which creates an HMI by means of a 3x2 array of vibration motors that make up the tactile display and allow the system to deliver haptic feedback. The actuation module uses vibrational messages (tactones), which change in both amplitude and frequency to deliver different awareness levels according to the proximity of the obstacle.
This enables the system to deliver an intuitive interface. Both modules were tested under lab conditions, and the HMI was additionally tested with a focus group of VIP. The lab testing was conducted in order to establish the processing speed of the computer vision algorithms. This experimentation determined that the model can process 0.59 frames per second (FPS); this is considered an adequate processing speed, taking into account that the walking speed of VIP is 1.439 m/s. In order to test the HMI, we conducted a focus group composed of two females and two males between 35 and 65 years of age. The subject selection was aided by the Colombian Cooperative of Work and Services for the Sightless (COOTRASIN). We analyzed the learning process of the haptic messages throughout five experimentation sessions using two metrics: message discrimination and localization success. These correspond to the ability of the subjects to recognize different tactones and locate them within the tactile display. Both were calculated as the mean across all subjects. Results show that the focus group achieved message discrimination of 70% and a localization success of 80%, demonstrating how the proposed HMI leads to the appropriation and understanding of the feedback messages, enabling the user's awareness of their surrounding space.
Keywords: computer vision on embedded systems, electronic travel aids, human-machine interface, haptic feedback, visual assistive technologies, vision substitution systems
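The abstract's mapping from obstacle distance to tactones can be illustrated with a small sketch; the level cut-offs and the amplitude/frequency values below are hypothetical, as the paper does not specify them:

```python
def proximity_level(distance_m):
    """Discretize the 0-6 m sensor range into three proximity levels (cut-offs assumed)."""
    if not 0.0 <= distance_m <= 6.0:
        raise ValueError("outside sensor range")
    if distance_m < 2.0:
        return 3   # near: strongest warning
    if distance_m < 4.0:
        return 2   # mid
    return 1       # far

def tactone(level):
    """Map a proximity level to (amplitude 0-1, frequency in Hz); values illustrative."""
    params = {1: (0.3, 80), 2: (0.6, 160), 3: (1.0, 250)}
    return params[level]
```

Closer obstacles produce stronger, higher-frequency vibrations on the 3x2 tactile display.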
Procedia PDF Downloads 81
5899 Efficiency and Scale Elasticity in Network Data Envelopment Analysis: An Application to International Tourist Hotels in Taiwan
Authors: Li-Hsueh Chen
Abstract:
Efficient operation is increasingly important for managers of hotels. Unlike the manufacturing industry, hotels cannot store their products. In addition, many hotels provide room service and food and beverage service simultaneously. When the efficiencies of hotels are evaluated, the internal structure should be considered. Hence, based on the operational characteristics of hotels, this study proposes a DEA model to simultaneously assess the efficiencies of the room production division, food and beverage production division, room service division and food and beverage service division. However, not only the enhancement of efficiency but also the adjustment of scale can improve performance. In terms of the adjustment of scale, scale elasticity or returns to scale can help managers make decisions concerning expansion or contraction. In order to construct a reasonable approach to measure the efficiencies and scale elasticities of hotels, this study builds an alternative variable-returns-to-scale-based two-stage network DEA model with a combination of parallel and series structures to explore the scale elasticities of the whole system, room production division, food and beverage production division, room service division and food and beverage service division, based on data from the international tourist hotel industry in Taiwan. The results may provide valuable information on operational performance and scale for managers and decision makers.
Keywords: efficiency, scale elasticity, network data envelopment analysis, international tourist hotel
Procedia PDF Downloads 225
5898 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information from students' learning behavior, particularly to uncover early symptoms of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data was gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are supervised learning techniques. Hyperparameters for the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
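The Majority Vote combiner at the heart of the reported best model reduces, conceptually, to a per-student vote over the base models' predictions. A minimal sketch (the labels and the number of base models are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one student by simple majority.

    predictions: list of labels, one per base model (e.g. DT, SVM, ANN).
    Ties are broken by first-seen order, as a simple deterministic rule.
    """
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(models_outputs):
    """models_outputs: list of per-model label lists, all the same length."""
    return [majority_vote(votes) for votes in zip(*models_outputs)]
```

For example, three base models voting ["pass", "pass", "fail"] for a student yield "pass".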
Procedia PDF Downloads 108
5897 Modeling and Control Design of a Centralized Adaptive Cruise Control System
Authors: Markus Mazzola, Gunther Schaaf
Abstract:
A vehicle driving with an Adaptive Cruise Control System (ACC) is usually controlled decentrally, based on information from radar systems and, in some publications, on C2X communication (CACC), to guarantee stable platoons. In this paper, we present a Model Predictive Control (MPC) design of a centralized, server-based ACC system, whereby the vehicular platoon is modeled and controlled as a whole. It is then proven that the proposed MPC design guarantees asymptotic stability and hence string stability of the platoon. The networked MPC design is chosen to be able to integrate system constraints optimally as well as to reduce the effects of communication delay and packet loss. The performance of the proposed controller is then simulated and analyzed in an LTE communication scenario using the LTE/EPC network simulator LENA, which is based on the ns-3 network simulator.
Keywords: adaptive cruise control, centralized server, networked model predictive control, string stability
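The centralized MPC itself is beyond a short sketch, but the control objective it optimizes, each follower tracking a target gap to its predecessor, can be illustrated with a much-simplified linear-feedback stand-in (all gains, speeds, and initial conditions are assumed, and no prediction horizon or network delay is modeled):

```python
# A much-simplified stand-in for the paper's networked MPC: one follower vehicle
# regulating its gap to a constant-speed leader with linear state feedback.
def simulate_follower(steps=200, dt=0.1, gap0=5.0, target_gap=10.0,
                      k_gap=0.5, k_rel=1.0):
    leader_v = 20.0                       # leader cruises at constant speed (m/s)
    x_lead, x_fol, v_fol = 100.0, 100.0 - gap0, 18.0
    errors = []
    for _ in range(steps):
        gap = x_lead - x_fol
        # acceleration command from gap error and relative speed
        u = k_gap * (gap - target_gap) + k_rel * (leader_v - v_fol)
        v_fol += u * dt                   # forward-Euler vehicle dynamics
        x_fol += v_fol * dt
        x_lead += leader_v * dt
        errors.append(abs(gap - target_gap))
    return errors

errs = simulate_follower()
```

With these gains the gap error obeys a damped second-order equation, so the spacing error decays over time, a toy analogue of the string stability the paper proves for its MPC.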
Procedia PDF Downloads 515
5896 Numerical Resolving of Net Faradaic Current in Fast-Scan Cyclic Voltammetry Considering Induced Charging Currents
Authors: Gabriel Wosiak, Dyovani Coelho, Evaldo B. Carneiro-Neto, Ernesto C. Pereira, Mauro C. Lopes
Abstract:
In this work, the theoretical and experimental effects of induced charging currents on fast-scan cyclic voltammetry (FSCV) are investigated. Induced charging currents arise from the effect of ohmic drop in electrochemical systems, which depends on the presence of an uncompensated resistance. They cause the capacitive contribution to the total current to differ from the capacitive current measured in the absence of electroactive species. The paper shows that the induced charging current is relevant when the capacitive current magnitude is close to the total current, even for systems with a low time constant. In these situations, the conventional background subtraction method may be inaccurate. A method is developed that separates the faradaic and capacitive currents by combining voltammetric experimental data with finite element simulation to obtain a potential-dependent capacitance. The method was tested in a standard electrochemical cell with platinum ultramicroelectrodes, under different experimental conditions, as well as on previously reported literature data. The proposed method allows the real capacitive current to be separated even in situations where the conventional background subtraction method is clearly inappropriate.
Keywords: capacitive current, fast-scan cyclic voltammetry, finite-element method, electroanalysis
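Once a potential-dependent capacitance C(E) is available (in the paper, from finite element simulation), the separation step itself is a point-by-point subtraction of i_cap = C(E)·v from the total current. A minimal sketch (the array values are hypothetical):

```python
import numpy as np

def faradaic_current(i_total, capacitance, scan_rate):
    """Subtract the capacitive contribution i_cap = C(E) * v from the total current.

    i_total     : measured current at each potential point (A)
    capacitance : potential-dependent capacitance C(E) at the same points (F)
    scan_rate   : potential scan rate v (V/s); sign follows the sweep direction
    """
    i_cap = np.asarray(capacitance) * scan_rate
    return np.asarray(i_total) - i_cap
```

The paper's contribution is precisely that C(E) comes from simulation rather than from a blank scan, so the subtraction stays valid when induced charging distorts the blank.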
Procedia PDF Downloads 75
5895 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations
Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang
Abstract:
Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows for the control of the information's dissemination impact in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNN-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
Keywords: source identification, ordinary differential equations, label propagation, complex networks
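The first stage, using label propagation to estimate the density of infected states across the graph, can be sketched with a simple clamped-label diffusion on an adjacency matrix. This is an illustrative scheme only, not ODESI's actual algorithm (which extends states to vectors and feeds them into a GNN-ODE):

```python
import numpy as np

def propagate_labels(adj, infected, steps=10):
    """Diffuse binary infection labels over a graph to get a density per node.

    adj      : symmetric 0/1 adjacency matrix
    infected : 0/1 vector of observed infected nodes
    Each step replaces a node's value with the mean of its neighbours' values,
    keeping observed infected nodes clamped at 1 (a simple clamped-label scheme).
    """
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    x = np.asarray(infected, dtype=float)
    for _ in range(steps):
        x = adj @ x / np.maximum(deg, 1.0)
        x[np.asarray(infected) == 1] = 1.0  # clamp observations
    return x
```

On a path graph with one observed infection, the density decays with distance from the observation, which is the kind of spatial signal the downstream model consumes.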
Procedia PDF Downloads 20
5894 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization
Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed
Abstract:
Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by various factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Hadj-Sahraoui et al., 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of interferograms. We develop a deep learning-based model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR approach with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
Keywords: CNN-SAR, Lee Filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction
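The Lee filter half of the hybrid is classical and easy to sketch: each pixel is shrunk toward its local mean by a weight var/(var + noise_var), so flat regions are smoothed while high-variance detail is preserved. A minimal sliding-window implementation (window size and noise variance are assumed, and a real pipeline would operate on SAR amplitude images):

```python
import numpy as np

def lee_filter(img, win=3, noise_var=0.25):
    """Classic Lee speckle filter: out = mean + W * (x - mean), W = var / (var + noise_var).

    Computed with a simple sliding window; border pixels use the shrunken window.
    """
    img = np.asarray(img, dtype=float)
    out = np.empty_like(img)
    r = win // 2
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            m, v = patch.mean(), patch.var()
            w = v / (v + noise_var)
            out[i, j] = m + w * (img[i, j] - m)
    return out
```

On a constant image the weight is zero and the image is returned unchanged; on a speckled image the output variance drops.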
Procedia PDF Downloads 12
5893 Evaluation of Microbiological Quality and Safety of Two Types of Salads Prepared at Libyan Airline Catering Center in Tripoli
Authors: Elham A. Kwildi, Yahia S. Abugnah, Nuri S. Madi
Abstract:
This study was designed to evaluate the microbiological quality and safety of two types of salads prepared at a catering center affiliated with Libyan Airlines in Tripoli, Libya. Two hundred and twenty-one (221) samples (132 economy-class and 89 first-class) were used in this project, which lasted for ten months. Biweekly, microbiological tests were performed, which included total plate count (TPC) and total coliforms (TCF), in addition to enumeration and/or detection of some pathogenic bacteria, mainly Escherichia coli, Staphylococcus aureus, Bacillus cereus, Salmonella sp., Listeria sp. and Vibrio parahaemolyticus, using conventional as well as compact dry methods. Results indicated that TPC of type 1 salad ranged between (<10 to 62 × 10³ cfu/g) and (<10 to 36 × 10³ cfu/g), while TCF were (<10 to 41 × 10³ cfu/g) and (<10 to 66 × 10² cfu/g), using both methods of detection respectively. On the other hand, TPC of type 2 salad were (1 × 10 to 52 × 10³ cfu/g) and (<10 to 55 × 10³ cfu/g), and in the range of (1 × 10 to 45 × 10³ cfu/g), and the TCF counts were between (<10 to 55 × 10³ cfu/g) and (<10 to 34 × 10³ cfu/g), using the 1st and the 2nd methods of detection respectively. Also, the pathogens mentioned above were detected in both types of salads, but their levels varied according to the type of salad and the method of detection. The level of Staphylococcus aureus, for instance, was 17.4% using the conventional method versus 14.4% using the compact dry method. Similarly, E. coli was 7.6% and 9.8%, while Salmonella sp. recorded the lowest percentage, i.e., 3% and 3.8%, with the two mentioned methods respectively. First-class salads were also found to contain the same pathogens, but the level of E. coli was relatively higher in this case (14.6% and 16.9%) using conventional and compact dry methods respectively. In second rank came Staphylococcus aureus (13.5% and 11.2%), followed by Salmonella (6.74% and 6.70%).
The lowest percentage was for Vibrio parahaemolyticus (4.9%), which was detected in the first-class salads only. The other two pathogens, Bacillus cereus and Listeria sp., were not detected in either type of salad. Finally, it is worth mentioning that there was a significant decline in TPC and TCF counts, in addition to the disappearance of pathogenic bacteria, after the 6-7th month of the study, which coincided with the first trial of the HACCP system at the center. The ups and downs in the counts during the early stages of the study reveal a need for some important corrective measures, including more emphasis on training of the personnel in applying the HACCP system effectively.
Keywords: air travel, vegetable salads, foodborne outbreaks, Libya
Procedia PDF Downloads 326
5892 Informal Carers in Telemonitoring of Users with Pacemakers: Characteristics, Time of Services Provided and Costs
Authors: Antonio Lopez-Villegas, Rafael Bautista-Mesa, Emilio Robles-Musso, Daniel Catalan-Matamoros, Cesar Leal-Costa
Abstract:
Objectives: The purpose of this trial was to evaluate the burden borne by, and the costs to, informal caregivers of users with telemonitoring of pacemakers. Methods: This is a controlled, non-randomised clinical trial, with data collected from informal caregivers five years after implantation of pacemakers. The Spanish version of the Survey on Disabilities, Personal Autonomy, and Dependency Situations was used to collect information on clinical and social characteristics, levels of professionalism, duration and types of care, difficulties in providing care, health status, economic and job aspects, and the impact on family or leisure due to informal caregiving for patients with pacemakers. Results: After five years of follow-up, 55 users with pacemakers finished the study. Of these, 50 were helped by a caregiver: 18 were included in the telemonitoring group (TM) and 32 in the conventional follow-up group (HM). Overall, females represented 96.0% of the informal caregivers (88.89% in the TM and 100.0% in the HM group). The mean ages were 63.17 ± 15.92 and 63.13 ± 14.56 years, respectively (p = 0.83), in the two groups. The majority (88.0%) of the caregivers declared that they had to provide their services between 6 and 7 days per week (83.33% in the TM group versus 90.63% in the HM group), without significant differences between the groups. The costs related to care provided by the informal caregivers were 47.04% higher in the conventional follow-up group than in the TM group. Conclusions: The results of this trial confirm that there were no significant differences between the informal caregivers with regard to baseline characteristics, workload and time worked in the two follow-up groups. The costs incurred by informal caregivers providing care for users with pacemakers in the telemonitoring group are significantly lower than those in the conventional follow-up group. Trial registration: ClinicalTrials.gov NCT02234245.
Funding: The PONIENTE study has been funded by the General Secretariat for Research, Development and Innovation, Regional Government of Andalusia (Spain), project reference number PI/0256/2017, under the research call 'Development and Innovation Projects in the Field of Biomedicine and Health Sciences', 2017.
Keywords: costs, disease burden, informal caregiving, pacemaker follow-up, remote monitoring, telemedicine
Procedia PDF Downloads 142
5891 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that removes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
Procedia PDF Downloads 90
5890 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique
Authors: Dibakar Chakrabarty, Mebada Suiting
Abstract:
Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and shortening of catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise. Least cost design of storm water networks assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components such as open or closed conduits, storage units, pumps, etc. In this paper, a methodology for least cost design of storm water drainage systems is proposed. The methodology proposed in this study consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's storm water management model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost effective design of open conduit based storm water networks.
Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM
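The simulation-optimization coupling can be sketched generically: a GA proposes conduit cross-section areas, a hydraulic simulator scores the resulting flooding, and the fitness adds a flooding penalty to construction cost. In the sketch below the "simulator" is a hypothetical placeholder with made-up required areas; a real implementation would invoke the SWMM engine for each candidate design:

```python
import random

random.seed(1)

def simulate_flooding(areas):
    """Toy stand-in for an SWMM run: flooded volume falls as conduit
    cross-sections grow (the real model would call the SWMM engine)."""
    required = [1.2, 0.8, 1.5]  # hypothetical required areas, m^2
    return sum(max(0.0, req - a) for a, req in zip(areas, required))

def cost(areas, penalty=1000.0):
    """Construction cost proportional to section area, plus a flooding penalty."""
    return sum(areas) + penalty * simulate_flooding(areas)

def ga_optimise(pop_size=30, generations=60):
    pop = [[random.uniform(0.1, 3.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # uniform crossover plus small Gaussian mutation, clamped positive
            child = [random.choice(pair) + random.gauss(0.0, 0.05)
                     for pair in zip(a, b)]
            children.append([max(0.05, g) for g in child])
        pop = survivors + children
    return min(pop, key=cost)

best = ga_optimise()
```

The GA converges toward sections just large enough to carry the runoff, cheaper than a uniformly oversized design.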
Procedia PDF Downloads 248
5889 Supply Chain Optimisation through Geographical Network Modeling
Authors: Cyrillus Prabandana
Abstract:
Supply chain optimisation requires multiple factors as considerations or constraints. These factors include, but are not limited to, demand forecasting, raw material fulfilment, production capacity, inventory level, facilities locations, transportation means, and manpower availability. By knowing all manageable factors involved and assuming the uncertainty with pre-defined percentage factors, an integrated supply chain model can be developed to manage various business scenarios. This paper analyses the use of a geographical point of view to develop an integrated supply chain network model that optimises the distribution of finished products according to forecasted demand and available supply. The supply chain optimisation model shows that a small change in one supply chain constraint can largely impact other constraints, and the new information from the model should be able to support the decision making process. The model focused on three areas, i.e. raw material fulfilment, production capacity and finished products transportation. To validate the model's suitability, it was implemented in a project aimed at optimising the concrete supply chain in a mining location. The high level of operational complexity and the involvement of multiple stakeholders in the concrete supply chain are believed to be sufficient to illustrate the larger scope. The implementation of this geographical supply chain network modeling resulted in an optimised concrete supply chain from raw material fulfilment to finished products distribution to each customer, as indicated by a lower percentage of missed concrete order fulfilment to customers.
Keywords: decision making, geographical supply chain modeling, supply chain optimisation, supply chain
Procedia PDF Downloads 346
5888 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis
Authors: N. R. N. Idris, S. Baharom
Abstract:
A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on overall meta-analysis estimates based on IPD-only, AD-only and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square-error (RMSE) and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as doing so significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in the accuracy of the estimates. Additionally, combining the IPD and AD has moderating effects on the bias of the estimates of the treatment effects, as the IPD tends to overestimate the treatment effects, while the AD has the tendency to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
Keywords: aggregate data, combined-level data, individual patient data, meta-analysis
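The three evaluation measures are standard and can be stated directly in code (a sketch of the metrics only; the simulation framework around them is the paper's own):

```python
import math

def prb(estimates, true_value):
    """Percentage relative bias of a set of simulation estimates."""
    mean_est = sum(estimates) / len(estimates)
    return 100.0 * (mean_est - true_value) / true_value

def rmse(estimates, true_value):
    """Root mean-square error of the estimates around the true value."""
    return math.sqrt(sum((e - true_value) ** 2 for e in estimates) / len(estimates))

def coverage(intervals, true_value):
    """Share of confidence intervals (lo, hi) that contain the true value."""
    hits = sum(1 for lo, hi in intervals if lo <= true_value <= hi)
    return hits / len(intervals)
```

For instance, estimates averaging 1.1 against a true value of 1.0 give a PRB of 10%.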
Procedia PDF Downloads 375
5887 A Model Based Metaheuristic for Hybrid Hierarchical Community Structure in Social Networks
Authors: Radhia Toujani, Jalel Akaichi
Abstract:
In recent years, the study of community detection in social networks has received great attention. The hierarchical structure of the network leads to convergence to a locally optimal community structure. In this paper, we aim to avoid this local optimum with the introduced hybrid hierarchical method. To achieve this purpose, we present an objective function that incorporates structural- and semantic-similarity-based modularity, and a metaheuristic, namely the bee colony algorithm, to optimize our objective function at both the divisive and agglomerative hierarchical levels. In order to assess the efficiency and the accuracy of the introduced hybrid bee colony model, we perform an extensive experimental evaluation on both synthetic and real networks.
Keywords: social network, community detection, agglomerative hierarchical clustering, divisive hierarchical clustering, similarity, modularity, metaheuristic, bee colony
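The modularity term in such objective functions is usually the standard Newman definition, Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m) δ(cᵢ, cⱼ); the semantic-similarity extension is the paper's own and is not reproduced here. A plain implementation of the structural part:

```python
def modularity(adj, communities):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * delta(c_i, c_j)."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)        # sum of all entries = 2m
    deg = [sum(row) for row in adj]
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - deg[i] * deg[j] / two_m
    return q / two_m

# Two disconnected triangles, each assigned to its own community.
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
Q = modularity(A, [0, 0, 0, 1, 1, 1])
```

Two disconnected triangles split into their natural communities give Q = 0.5, the value a metaheuristic such as the bee colony algorithm would seek to maximize over candidate partitions.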
Procedia PDF Downloads 379
5886 Effect of Corrosion on the Shear Buckling Strength
Authors: Myoung-Jin Lee, Sung-Jin Lee, Young-Kon Park, Jin-Wook Kim, Bo-Kyoung Kim, Song-Hun Chong, Sun-Ii Kim
Abstract:
The ability of steel girders to resist shear arises mainly from the web panel, and as such, the shear buckling strength of these girders has been extensively investigated. For example, Basler reported that, when buckling occurs, the tension field contributes to the resistance after the buckling strength of the steel is reached. The findings of these studies have been adopted by AASHTO, AISC, and the European Code, which provide guidelines for designs aimed at preventing shear buckling. Steel girders are susceptible to corrosion resulting from exposure to natural elements such as rainfall, humidity, and temperature. This corrosion leads to a reduction in the size of the web panel section, thereby resulting in a decrease in shear strength. The decrease in the panel section has a significant effect on the maintenance section of the bridge. However, in most conventional designs, the influence of corrosion is overlooked during the calculation of the shear buckling strength, and hence over-design is common. Therefore, in this study, a steel girder with an A/D ratio of 1:1, as well as a 6-mm-, 16-mm-, and 12-mm-thick web panel, flange, and intermediate reinforcing material, respectively, were used. The total length was set to that (3200 mm) of the default model. The effect of corrosion on shear buckling was investigated by determining the corroded volume, the shape of the corrosion patterns, and the angular change in the tension field at the shear buckling strength. This study provides basic data that will enable designs incorporating values closer to the actual shear buckling strength than those used in most conventional designs.
Keywords: corrosion, shear buckling strength, steel girder, shear strength
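To make the thickness-loss effect concrete: classical plate theory gives the elastic shear buckling stress of a web panel as τcr = k·π²E/(12(1−ν²))·(t/b)², with k = 5.34 + 4/(a/b)² for a simply supported panel with a/b ≥ 1, so a corrosion-induced loss of web thickness t reduces τcr quadratically. A sketch (steel constants assumed; the paper's girder is analyzed by other means):

```python
import math

def critical_shear_stress(t_mm, b_mm, E=210000.0, nu=0.3, a_over_b=1.0):
    """Elastic shear buckling stress of a simply supported plate (MPa):
    tau_cr = k * pi^2 * E / (12 * (1 - nu^2)) * (t / b)^2,
    with k = 5.34 + 4 / (a/b)^2 for a/b >= 1 (classical plate theory).
    """
    k = 5.34 + 4.0 / a_over_b ** 2
    return k * math.pi ** 2 * E / (12.0 * (1.0 - nu ** 2)) * (t_mm / b_mm) ** 2
```

For a 6 mm web on a 1000 mm square panel this gives roughly 64 MPa; losing 1 mm of thickness to corrosion cuts the value by about 30%, illustrating why neglecting corrosion leads to unconservative estimates.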
Procedia PDF Downloads 375
5885 An Ethno-Scientific Approach for Restoration of South Indian Heritage Rice Varieties
Authors: A. Sathya, C. Manojkumar, D. Visithra
Abstract:
The South Indian peninsula has a rich diversity of both heritage and conventional rice varieties. With the prime focus set on high yield and increased productivity, a number of traditional/heritage rice varieties have dwindled into the forgotten past. At present, in the face of climate change, the hybrids and conventional varieties struggle for sustainable yield. The need for copious irrigation and high nutrient inputs for the hybrids and conventional varieties has pushed the farming and research community to resort to heritage rice varieties for their sturdy survival capability. An ethno-scientific effort has been undertaken in the Cauvery delta tracts of South India to restore these traditional/heritage rice varieties. A closer field-level performance evaluation under organic conditions has been undertaken for 10 heritage rice varieties. The morpho-agronomic characterization across vegetative and reproductive stages has revealed a pattern of variation in duration, plant height, number of tillers, productive tillers, etc. The shortest duration was recorded for a variety with the vernacular name 'Arubadaam kuruvai'. A traditional rice variety called 'Maapillai samba' is claimed to impart instant energy: the supernatant water of its overnight-soaked cooked rice is consumed as a source of instant energy. The physico-chemical characteristics of this variety are being explored for its instant nutritional boosting ability. A wide spectrum of nutritional characters, including palatability and marketability preferences, has also been analyzed for all these 10 heritage rice varieties. A 'Farmer's harvest day festival' was organized, providing an opportunity for the Cauvery delta farmers to identify the special features and exchange their views on these standing golden ripe paddy varieties directly.
The airing of their ethnic knowledge pooled with interesting scientific investigations of these 10 rich heritage rice varieties of South India undertaken will be elaborately discussed enlightening the perspectives on the pathway of resurrection and restoration of this heritage of the past.Keywords: biodiversity, conservation, heritage, rice, traditional, varieties
Procedia PDF Downloads 4265884 Opinion Mining and Sentiment Analysis on DEFT
Authors: Najiba Ouled Omar, Azza Harbaoui, Henda Ben Ghezala
Abstract:
Current research practices sentiment analysis with a focus on social networks. The DÉfi Fouille de Texte (DEFT) (Text Mining Challenge) evaluation campaign focuses on opinion mining and sentiment analysis on social networks, especially Twitter, and aims to confront the systems produced by several teams from public and private research laboratories. DEFT offers participants the opportunity to work on regularly renewed themes and has proposed opinion mining tasks in several editions. The purpose of this article is to scrutinize and analyze the work on opinion mining and sentiment analysis on the Twitter social network carried out within DEFT. It examines the tasks proposed by the organizers of the challenge and the methods used by the participants. Keywords: opinion mining, sentiment analysis, emotion, polarity, annotation, OSEE, figurative language, DEFT, Twitter, Tweet
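For readers unfamiliar with the baseline techniques that challenge participants build on, a minimal lexicon-based polarity scorer can be sketched as follows; the word lists are tiny illustrative samples, not an actual DEFT resource.

```python
# Toy lexicon-based polarity scorer: count positive and negative words
# in a tweet and classify by the difference. Real DEFT systems use far
# richer lexicons, preprocessing, and machine-learned models.
POS = {"great", "love", "excellent", "happy"}
NEG = {"bad", "hate", "awful", "sad"}

def polarity(tweet):
    words = tweet.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("I love this great phone"))  # positive
print(polarity("awful battery , hate it"))  # negative
```

Lexicon methods struggle with figurative language (irony, metaphor), which is exactly one of the phenomena DEFT editions have targeted.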
Procedia PDF Downloads 1395883 A Study of Human Communication in an Internet Community
Authors: Andrew Laghos
Abstract:
The Internet is a big part of our everyday lives. People can now access the Internet from a variety of places, including home, college, and work, and many airports, hotels, restaurants, and cafeterias provide free wireless Internet to their visitors. Using technologies like computers, tablets, and mobile phones, we spend a lot of our time online getting entertained, getting informed, and communicating with each other. This study deals with the latter part, namely, human communication through the Internet. People can communicate with each other using social media, social network sites (SNS), e-mail, messengers, chatrooms, and so on. By connecting with each other, they form virtual communities. Regarding SNS, the types of connections that can be studied include friendships and cliques. Analyzing these connections is important to help us understand online user behavior. The method of Social Network Analysis (SNA) was applied to a case study, and the results revealed the existence of some useful patterns of interactivity between the participants. The study ends with implications of the results and ideas for future research. Keywords: human communication, internet communities, online user behavior, psychology
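The friendship and clique measures mentioned can be computed directly from an adjacency structure. A minimal sketch, using a made-up friendship graph rather than the study's data:

```python
from collections import defaultdict

# Toy friendship graph for an online community (hypothetical usernames).
edges = [("ana", "ben"), ("ana", "cal"), ("ben", "cal"), ("cal", "dia")]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Degree centrality: share of the other members each user is connected to.
n = len(adj)
centrality = {u: len(nbrs) / (n - 1) for u, nbrs in adj.items()}

# A clique is a group in which every pair is directly connected.
def is_clique(group):
    return all(b in adj[a] for a in group for b in group if a != b)

print(centrality["cal"])                 # 1.0: cal is connected to everyone else
print(is_clique({"ana", "ben", "cal"}))  # True
```

In SNA terms, high-centrality members like "cal" act as hubs of the community, and cliques identify tightly knit subgroups.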
Procedia PDF Downloads 4975882 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability of vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc., collectively known as V2X (Vehicle to Everything), will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion, and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have finally approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it essential to protect V2X messages from attacks that can result in catastrophically wrong decisions/actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network.
However, this protocol suffers from technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control-plane signaling overloads, as well as privacy-preservation issues, and so cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways by which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, allowing security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements. Keywords: authentication, long term evolution, security, vehicle-to-everything
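The challenge-response idea underlying AKA-style mutual authentication can be sketched in a few lines. This is a heavily simplified illustration using generic HMACs; real AKA additionally derives session keys and uses sequence numbers, and those steps are omitted here.

```python
import hmac, hashlib, os

# Both sides prove knowledge of a shared long-term key K by MACing a
# fresh random challenge. The labels "net"/"ue" keep the two directions
# of the exchange distinct; all values here are illustrative.
K = os.urandom(16)  # shared long-term secret

def mac(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Network -> UE: a random challenge plus a token proving the network knows K.
rand = os.urandom(16)
autn = mac(K, b"net", rand)

# UE verifies the network's token, then answers with its own response.
assert hmac.compare_digest(autn, mac(K, b"net", rand))  # network authenticated
res = mac(K, b"ue", rand)

# Network verifies the UE's response, completing mutual authentication.
print(hmac.compare_digest(res, mac(K, b"ue", rand)))  # True
```

The signaling-overhead criticism in the abstract stems from the round trips such exchanges require between the UE, the serving network, and the home network, which a distributed peer-to-peer design aims to reduce.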
Procedia PDF Downloads 1675881 Introduction to Multi-Agent Deep Deterministic Policy Gradient
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must fully cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing cyber-threat environment. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job-flow scheduling problem, and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path-planning length, system delay and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services. Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents
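How reinforcement learning lets an allocation policy adapt online can be illustrated with a toy single-agent Q-learning loop; the two-server environment and the +/-1 reward below are illustrative stand-ins for the paper's multi-agent, multi-objective formulation.

```python
import random

random.seed(0)

# Toy Q-learning sketch of adaptive job routing: an agent learns to send
# each incoming cryptographic job to the less-loaded of two servers.
Q = [[0.0, 0.0], [0.0, 0.0]]       # Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for step in range(1000):
    state = random.randrange(2)    # index of the currently less-loaded server
    if random.random() < eps:
        action = random.randrange(2)                     # explore
    else:
        action = 0 if Q[state][0] >= Q[state][1] else 1  # exploit
    reward = 1.0 if action == state else -1.0            # balanced routing pays off
    next_state = random.randrange(2)
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])

# Greedy policy after training: route each job to the less-loaded server.
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(2)]
print(policy)  # [0, 1]
```

The point of the sketch is the update rule: the policy is learned from observed rewards rather than fixed in advance, which is what lets a scheduler track a non-stationary workload.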
Procedia PDF Downloads 245880 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks related to flying birds make the drone-detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and their reaction time can be severely affected. Flying birds show velocities, radar cross-sections and, in general, characteristics similar to those of UAVs. Since no single feature is able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed, target-movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.) both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%). Keywords: birds, classification, machine learning, UAVs
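A genetic-algorithm feature-selection loop of the kind described can be sketched as follows. The synthetic "track" features (speed, a pure-noise channel, radar cross-section) and the nearest-class-mean classifier are illustrative assumptions, not the paper's data or models; the fitness function rewards accuracy and penalizes subset size.

```python
import random

random.seed(1)

# Synthetic bird (label 0) vs UAV (label 1) tracks: features 0 and 2 are
# informative, feature 1 is pure noise.
def make_track(label):
    speed = random.gauss(10 + 6 * label, 1)  # informative
    noise = random.gauss(0, 5)               # uninformative
    rcs = random.gauss(1 + 2 * label, 0.3)   # informative
    return [speed, noise, rcs], label

data = [make_track(lbl) for lbl in (0, 1) for _ in range(100)]

def fitness(mask):
    """Accuracy of a nearest-class-mean classifier on the selected features,
    minus a small penalty per feature to favour compact subsets."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    means = {}
    for lbl in (0, 1):
        rows = [x for x, y in data if y == lbl]
        means[lbl] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    correct = 0
    for x, y in data:
        d = {lbl: sum((x[i] - m) ** 2 for i, m in zip(feats, means[lbl]))
             for lbl in (0, 1)}
        correct += int(min(d, key=d.get) == y)
    return correct / len(data) - 0.01 * len(feats)

# Tiny GA over bit-mask individuals with truncation selection.
pop = [[random.randint(0, 1) for _ in range(3)] for _ in range(10)]
for gen in range(15):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]
    children = []
    for _ in range(6):
        a, b = random.sample(parents, 2)
        child = [random.choice(bits) for bits in zip(a, b)]  # uniform crossover
        if random.random() < 0.2:
            child[random.randrange(3)] ^= 1                  # mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best)  # the noisy feature (index 1) tends to be dropped
```

The same pattern scales to the paper's setting by swapping in real track features and a stronger classifier (SVM, neural network) inside the fitness function.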
Procedia PDF Downloads 2225879 Multi-Sender MAC Protocol Based on Temporal Reuse in Underwater Acoustic Networks
Authors: Dongwon Lee, Sunmyeng Kim
Abstract:
Underwater acoustic networks (UANs) have become a very active research area in recent years. Compared with terrestrial wireless networks, UANs are characterized by limited bandwidth, long propagation delay and high channel dynamics in acoustic modems, which pose challenges to the design of medium access control (MAC) protocols. These characteristics severely affect network performance. In this paper, we study an MS-MAC (Multi-Sender MAC) protocol in order to improve network performance. The proposed protocol exploits temporal reuse by learning the propagation delays to neighboring nodes. A source node locally calculates the transmission schedules of its neighboring nodes and itself based on these propagation delays, so as to avoid collisions. Performance evaluation is conducted using simulation and confirms that the proposed protocol significantly outperforms the previous protocol in terms of throughput. Keywords: acoustic channel, MAC, temporal reuse, UAN
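The core of temporal reuse, staggering send times using known propagation delays so packets arrive collision-free, can be sketched as follows; the delays, packet duration, and far-node-first ordering are illustrative assumptions, not the paper's exact scheduling rule.

```python
# schedule() staggers send times using known propagation delays so that
# packets from several senders arrive back-to-back, without collisions,
# at a common receiver.

def schedule(send_requests, packet_time):
    """send_requests: list of (sender, propagation_delay) to one receiver.
    Returns (sender, send_time, arrival_time) with non-overlapping arrivals."""
    next_free = 0.0  # earliest time the receiver's channel is free
    plan = []
    # Serve far nodes first so nearer nodes can send later yet arrive later.
    for sender, delay in sorted(send_requests, key=lambda r: -r[1]):
        send_at = max(0.0, next_free - delay)
        arrive_at = send_at + delay
        next_free = arrive_at + packet_time
        plan.append((sender, send_at, arrive_at))
    return plan

plan = schedule([("A", 2.0), ("B", 0.5), ("C", 1.2)], packet_time=0.3)
for sender, send_at, arrive_at in plan:
    print(sender, round(send_at, 2), round(arrive_at, 2))
# A 0.0 2.0 / C 1.1 2.3 / B 2.1 2.6 -- the senders share the channel in
# time, yet their packets occupy disjoint arrival windows at the receiver.
```

This is exactly the long-propagation-delay effect the abstract exploits: what matters for collisions in a UAN is when packets arrive, not when they are sent.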
Procedia PDF Downloads 3505878 An Approximate Formula for Calculating the Fundamental Mode Period of Vibration of Practical Building
Authors: Abdul Hakim Chikho
Abstract:
Most international codes allow the use of an equivalent lateral load method for designing practical buildings to withstand earthquake actions. This method requires calculating an approximation to the fundamental mode period of vibration of these buildings. Several empirical equations have been suggested to calculate approximations to the fundamental periods of different types of structures. Most of these equations are known to provide only a crude approximation to the required fundamental period, and repeating the calculation with a more accurate formula is usually required. In this paper, a new formula to calculate a satisfactory approximation of the fundamental period of a practical building is proposed. This formula takes into account both the mass and the stiffness of the building and is therefore more rational than the conventional empirical equations. In order to verify the accuracy of the proposed formula, several examples have been solved, in which the fundamental mode periods of several framed buildings were calculated using both the proposed formula and the conventional empirical equations. Comparing the obtained results with those obtained from a dynamic computer analysis has shown that the proposed formula provides a more accurate estimation of the fundamental periods of practical buildings. Since the proposed method is still simple to use and requires only a minimum computing effort, it is believed to be ideally suited for design purposes. Keywords: earthquake, fundamental mode period, design, building
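The principle that a period formula can combine mass and stiffness is captured by the classical Rayleigh method, T = 2π·sqrt(Σmᵢdᵢ² / Σfᵢdᵢ), where mᵢ are storey masses, fᵢ applied lateral forces, and dᵢ the resulting displacements. The sketch below applies it to a three-storey frame with illustrative numbers; it is not the formula proposed in the paper.

```python
import math

def rayleigh_period(masses, forces, displacements):
    """Rayleigh-quotient fundamental period (s) for storey masses (kg),
    applied lateral forces (N) and resulting storey displacements (m)."""
    num = sum(m * d ** 2 for m, d in zip(masses, displacements))
    den = sum(f * d for f, d in zip(forces, displacements))
    return 2 * math.pi * math.sqrt(num / den)

# Three-storey frame with illustrative (not code-specified) numbers.
T = rayleigh_period(
    masses=[40000, 40000, 35000],
    forces=[50000, 100000, 150000],
    displacements=[0.010, 0.022, 0.031],
)
print(round(T, 3))  # ~0.553 s for these inputs
```

Unlike purely empirical height-based formulas, this estimate changes when the stiffness (and hence the displacements) changes, which is the rationale the abstract gives for the proposed formula.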
Procedia PDF Downloads 2845877 Direct In-Situ Ring Opening Polymerization of E-caprolactone to Produce Biodegradable PCL/Montmorillonite Nanocomposites
Authors: Amine Harrane, Mahmoud Belalia
Abstract:
During the last decade, polymer layered silicate nanocomposites have received increasing attention from scientists and industrial researchers because they generally exhibit greatly improved mechanical, thermal, barrier and flame-retardant properties at low clay content in comparison with unfilled polymers or more conventional microcomposites. Poly(ε-caprolactone) (PCL) layered silicate nanocomposites have the advantage of adding biocompatibility and biodegradability to the traditional properties of nanocomposites. They can be prepared by in situ ring-opening polymerization of ε-caprolactone using a conventional initiator to induce polymerization in the presence of an organophilic clay, such as organomodified montmorillonite. Messersmith and Giannelis used montmorillonite exchanged with protonated 12-aminododecanoic acid and Cr3+-exchanged fluorohectorite, a synthetic mica-type silicate. Sn-based catalysts such as tin(II) octoate and dibutyltin(IV) dimethoxide have been reported to efficiently promote the polymerization of ε-caprolactone in the presence of organomodified clays. In this work, we have used an alternative method to prepare PCL/montmorillonite nanocomposites: the cationic polymerization of ε-caprolactone was initiated directly by Maghnite-TOA, an organomodified montmorillonite clay, to produce nanocomposites (Scheme 1). The resulting nanocomposites were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), atomic force microscopy (AFM) and thermogravimetry. Keywords: polycaprolactone, polycaprolactone/clay nanocomposites, biodegradable nanocomposites, Maghnite, in-situ polymerization
Procedia PDF Downloads 785876 Impact of Transportation on the Economic Growth of Nigeria
Authors: E. O. E. Nnadi
Abstract:
Transportation is a critical factor in the economic growth and development of any nation, region or state. A good transportation network supports every sector of the economy, such as manufacturing and trade, and encourages investors, thereby affecting overall economic prosperity. This paper evaluates the impact of transportation on the economic growth of Nigeria using the south-eastern states as a case study, chosen for their importance as the commercial and industrial nerve centre of the country. About 200 respondents of different professions, such as dealers in goods, transporters, contractors, consultants and bankers, were selected using the systematic sampling technique in the five states of the region, and a set of questionnaires was administered to them. Descriptive statistics and the relative importance index (RII) technique were employed for the analysis of the data gathered. The findings of the analysis reveal that Nigeria has the least effective road-network-to-population ratio in Africa, at 949.91 km/person. It is concluded that the road network in the area, and in the country as a whole, should be improved to enhance the economic activities of the people. Keywords: economic growth, south-east, transportation, transportation cost, Nigeria
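The relative importance index used in the analysis is conventionally computed as RII = ΣW / (A·N), where W are the respondents' Likert ratings, A is the highest possible rating, and N is the number of respondents. A minimal sketch with made-up ratings:

```python
# RII = sum(W) / (A * N): closer to 1.0 means the factor was rated
# more important by the respondents. Ratings below are illustrative,
# not survey data from the study.

def rii(ratings, max_rating=5):
    return sum(ratings) / (max_rating * len(ratings))

# 10 respondents rate "poor road network raises transport cost" on a 1..5 scale.
ratings = [5, 4, 5, 3, 4, 5, 5, 4, 3, 5]
print(rii(ratings))  # 43 / 50 = 0.86
```

Ranking the factors by their RII values is what lets a study like this order the perceived impacts of transportation on economic activity.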
Procedia PDF Downloads 2735875 Coal Mining Safety Monitoring Using Wsn
Authors: Somdatta Saha
Abstract:
The main purpose was to provide an implementable design scenario for underground coal mines using wireless sensor networks (WSNs), the main reason being that, given the intricacies of the physical structure of a coal mine, only low-power WSN nodes can produce accurate surveillance and accident-detection data. The work mainly concentrated on designing and simulating various alternative scenarios for a typical mine and comparing them based on the obtained results to arrive at a final design. In the era of embedded technology, Zigbee protocols are used in more and more applications. Because of the rapid development of sensors, microcontrollers, and network technology, a reliable technological foundation now exists for automatic real-time monitoring of coal mines. The underground system collects temperature, humidity and methane values of the coal mine through sensor nodes in the mine; it also counts the personnel inside the mine with the help of an IR sensor, and then transmits the data to an ARM-based information-processing terminal. Keywords: ARM, embedded board, wireless sensor network (Zigbee)
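The alarm logic of such an underground node can be sketched as a simple threshold check on each reading before it is forwarded to the terminal; the safe ranges below are illustrative placeholders, not mining-standard limits.

```python
# Per-reading alarm check: any value outside its safe range is flagged,
# and the ARM-based terminal would raise the corresponding alarm.
SAFE = {"temperature": (0, 40), "humidity": (0, 85), "methane_pct": (0.0, 1.0)}

def check(reading):
    """Return the list of quantities whose values fall outside safe bounds."""
    alerts = []
    for key, value in reading.items():
        lo, hi = SAFE[key]
        if not lo <= value <= hi:
            alerts.append(key)
    return alerts

print(check({"temperature": 36, "humidity": 70, "methane_pct": 1.4}))
# ['methane_pct']
```

Running this check on the node keeps radio traffic low: only out-of-range events need to be pushed urgently, while routine values can be batched.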
Procedia PDF Downloads 3405874 A Study of Adult Lifelong Learning Consulting and Service System in Taiwan
Authors: Wan Jen Chang
Abstract:
Background: Taiwan's current adult lifelong learning services have expanded from vocational training to universal lifelong learning. However, both the professional training for learning guidance and consulting services and the provision of adult online learning consulting service systems still need to be established. Purpose: The purposes of this study are as follows: 1. Analyze the professional training mechanism for cultivating adult lifelong learning consultation and coaching; 2. Explore the feasibility of constructing a system that uses network technology to provide adult learning consultation services. Research design: This study conducts a literature analysis of counseling and coaching policy reports on lifelong learning in European countries and the United States. Two focus group discussions were conducted with 15 lifelong learning scholars, experts and practitioners as research subjects, in which the following two topics were discussed: 1. The current situation, needs and professional ability training mechanism of "Adult Lifelong Learning Consulting and Services"; 2. Strategies for establishing an "Adult Lifelong Learning Consulting and Service Internet System".
Conclusion: 1. Based on adult lifelong learning consulting and service needs, a professional knowledge training and certification system should be planned. 2. Professional training for adult lifelong learning consulting and services should include the use of network technology to provide consulting service skills. 3. To establish an adult lifelong learning consultation and service system, the Ministry of Education should promulgate policies and measures at the central level and entrust local governments or private organizations to implement them. 4. The adult lifelong learning consulting and service system can combine the national qualifications framework, the private sector and NPOs to expand learning consulting service partnerships. Keywords: adult lifelong learning, professional knowledge, consulting and service, network system
Procedia PDF Downloads 675873 Presenting a Job Scheduling Algorithm Based on Learning Automata in Computational Grid
Authors: Roshanak Khodabakhsh Jolfaei, Javad Akbari Torkestani
Abstract:
As a cooperative environment for problem-solving, it is necessary that Grids develop efficient job scheduling patterns with regard to their goals, domains and structure. Since Grid environments facilitate distributed calculations, job scheduling appears as a critical problem for the management of Grid resources that severely influences the efficiency of the whole Grid environment. Owing to characteristics such as resource dynamicity and network conditions in Grids, any scheduling algorithm should be adjustable and scalable as the network grows. For this purpose, in this paper a job scheduling algorithm based on learning automata is presented for computational Grids, and its performance is compared with the FPSO algorithm (Fuzzy Particle Swarm Optimization algorithm) and the GJS algorithm (Grid Job Scheduling algorithm). The obtained numerical results indicate the superiority of the suggested algorithm in comparison with FPSO and GJS; in addition, they place FPSO and GJS in second and third position, respectively, after the proposed algorithm. Keywords: computational grid, job scheduling, learning automata, dynamic scheduling
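A linear reward-inaction (L_RI) automaton, one of the simplest learning automata a scheduler of this kind could build on, can be sketched as follows; the three "grid nodes" and their success probabilities are synthetic, and the update rule shown is the textbook L_RI scheme rather than the paper's algorithm.

```python
import random

random.seed(0)

# L_RI automaton choosing among three grid nodes for job placement.
success_prob = [0.2, 0.9, 0.5]  # chance each node finishes a job on time
p = [1 / 3, 1 / 3, 1 / 3]       # automaton's action probabilities
a_lr = 0.02                     # reward step size

for _ in range(5000):
    action = random.choices(range(3), weights=p)[0]
    if random.random() < success_prob[action]:
        # Reward: move probability mass toward the chosen action.
        p = [pi + a_lr * (1 - pi) if i == action else (1 - a_lr) * pi
             for i, pi in enumerate(p)]
    # On penalty, L_RI leaves the probabilities unchanged.

print(p.index(max(p)))  # probability mass concentrates on node 1, the most reliable
```

Because the automaton only ever reinforces successful choices, it adapts automatically when node reliabilities drift, which is the property that makes learning automata attractive for dynamic Grid environments.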
Procedia PDF Downloads 3435872 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures. Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
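The averaging step that combines the three top-performing models can be sketched as a simple probability ensemble; the model outputs below are made-up numbers, not results from the study.

```python
# Average per-sample class probabilities across models so that no single
# model's bias dominates, then threshold the average for the final label.

def ensemble(prob_lists):
    """Average per-sample probabilities across models."""
    n_models = len(prob_lists)
    return [sum(ps) / n_models for ps in zip(*prob_lists)]

# Hypothetical probabilities of "unhealthy air tomorrow" from three models.
logistic = [0.80, 0.30, 0.55]
forest   = [0.70, 0.20, 0.65]
network  = [0.90, 0.40, 0.45]

avg = ensemble([logistic, forest, network])
labels = ["unhealthy" if p >= 0.5 else "ok" for p in avg]
print([round(p, 2) for p in avg], labels)
# [0.8, 0.3, 0.55] ['unhealthy', 'ok', 'unhealthy']
```

Averaging works because uncorrelated errors tend to cancel: a sample one model misjudges is usually pulled back toward the correct side by the other two.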
Procedia PDF Downloads 127