Search results for: Artificial Neural network
4338 Prediction of Extreme Precipitation in East Asia Using Complex Network
Authors: Feng Guolin, Gong Zhiqiang
Abstract:
In order to study the spatial structure and dynamical mechanism of extreme precipitation in East Asia, a corresponding climate network is constructed by employing the method of event synchronization. It is found that the area of East Asian summer extreme precipitation can be separated into two regions: one with high area-weighted connectivity receiving heavy precipitation mostly during the active phase of the East Asian Summer Monsoon (EASM), and another with low area-weighted connectivity receiving heavy precipitation during both the active and the retreat phases of the EASM. Besides, a way to predict extreme precipitation is also developed by constructing a directed climate network. The simulation accuracy in East Asia is 58% with a 0-day lead, and the prediction accuracy is 21% with a 1-day lead and 12% on average with an n-day (2≤n≤10) lead. Compared to a normal EASM year, the prediction accuracy is lower in a weak year and higher in a strong year, which is relevant to the differences in correlations and extreme precipitation rates under different EASM conditions. Recognizing and identifying these effects is helpful for understanding and predicting extreme precipitation in East Asia.
Keywords: synchronization, climate network, prediction, rainfall
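The network links described above are built from pairwise event synchronization between grid points. The sketch below illustrates the idea on two toy extreme-event time series; it uses a fixed coincidence window, whereas the full method typically derives a dynamic window from the data, so the function and its parameters are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def event_synchronization(events_i, events_j, tau=3):
    """Count quasi-simultaneous extreme events at two grid points.

    events_i, events_j: sorted arrays of event time indices (e.g. days with
    precipitation above the 95th percentile). tau is a fixed coincidence
    window; the original method derives a dynamic window from inter-event
    intervals, which is omitted here for brevity.
    """
    coincidences = sum(1 for t in events_i if np.any(np.abs(events_j - t) <= tau))
    # Normalised synchronization strength, used as a link weight in the climate network
    return coincidences / np.sqrt(len(events_i) * len(events_j))

# Toy example: two grid points with partially synchronous extreme-rain days
site_a = np.array([5, 40, 77, 120, 200])
site_b = np.array([6, 42, 130, 199, 250])
print(event_synchronization(site_a, site_b))
```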
Procedia PDF Downloads 447
4337 Deep Reinforcement Learning Approach for Optimal Control of Industrial Smart Grids
Authors: Niklas Panten, Eberhard Abele
Abstract:
This paper presents a novel approach for real-time and near-optimal control of industrial smart grids by deep reinforcement learning (DRL). To achieve highly energy-efficient factory systems, the energetic linkage of machines, technical building equipment and the building itself is desirable. However, the increased complexity of the interacting sub-systems, multiple time-variant target values and stochastic influences from the production environment, weather and energy markets make it difficult to efficiently control energy production, storage and consumption in hybrid industrial smart grids. The studied deep reinforcement learning approach allows the solution space to be explored for control policies that minimize a cost function. The deep neural network of the DRL agent is based on a multilayer perceptron (MLP), Long Short-Term Memory (LSTM) and convolutional layers. The agent is trained within multiple Modelica-based factory simulation environments by the Advantage Actor Critic algorithm (A2C). The DRL controller is evaluated by means of the simulation and then compared to a conventional, rule-based approach. Finally, the results indicate that the DRL approach is able to improve the control performance and significantly reduce the energy and operating costs of industrial smart grids.
Keywords: industrial smart grids, energy efficiency, deep reinforcement learning, optimal control
Procedia PDF Downloads 201
4336 Design of Circular Patch Antenna in Terahertz Band for Medical Applications
Authors: Moulfi Bouchra, Ferouani Souheyla, Ziani Kerarti Djalal, Moulessehoul Wassila
Abstract:
The wireless body area network (WBAN) is one of the most interesting networks these days, especially with the appearance of contagious illnesses such as COVID-19, which require monitoring at home. In this article, we have designed a circular microstrip antenna. Gold is used for both the patch and the ground plane, and Gallium (εr=12.94) is chosen as the dielectric substrate. The dimensions of the antenna are 82.10 × 62.84 μm², and it operates at a frequency of 3.85 THz. The proposed antenna has a return loss of -46.046 dB and a gain of 3.74 dBi, and it can work with sensors that measure various physiological parameters, helping in the overall monitoring of an individual's health condition.
Keywords: circular patch antenna, Terahertz transmission, WBAN applications, real-time monitoring
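A circular patch of this kind is usually sized from the dominant-mode (TM11) design equation. The snippet below is a minimal sketch of that textbook relation, using a fringing-field correction for the effective radius; the radius, substrate height, and resulting frequency are illustrative assumptions and are not taken from the paper's actual design.

```python
import math

c = 3e8  # speed of light, m/s

def circular_patch_fr(a, h, eps_r):
    """Dominant TM11 resonant frequency of a circular microstrip patch.

    a: physical patch radius (m), h: substrate height (m), eps_r: relative
    permittivity. Uses the standard fringing-field correction for the
    effective radius; inputs below are hypothetical values.
    """
    a_e = a * math.sqrt(1 + (2 * h / (math.pi * a * eps_r))
                        * (math.log(math.pi * a / (2 * h)) + 1.7726))
    return 1.8412 * c / (2 * math.pi * a_e * math.sqrt(eps_r))

# Hypothetical micrometre-scale dimensions for a terahertz design
print(f"{circular_patch_fr(a=11e-6, h=2e-6, eps_r=12.94) / 1e12:.2f} THz")
```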
Procedia PDF Downloads 310
4335 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations
Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar
Abstract:
Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumed by Networks-on-Chip. Researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we choose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce the cache misses, we use software optimization techniques such as removal of unused functions, loop interchange and replacing the post-increment operator with the pre-increment operator for non-primitive data types. The cache misses were reduced by 18.52%, 5.34% and 3.91% by employing the above techniques, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization respectively.
Keywords: cache behaviour, network-on-chip, performance profiling, vectorization
Procedia PDF Downloads 202
4334 Semirings of Graphs: An Approach Towards the Algebra of Graphs
Authors: Gete Umbrey, Saifur Rahman
Abstract:
Graphs are found to be highly capable in computing, and their abstract structures have been applied in specific computations and algorithms such as phase encoding controllers, processor microcontrollers, and the synthesis of CMOS switching networks. Motivated by these works, we develop an independent approach to study semiring structures and various properties by defining binary operations which, in fact, seem analogous to an existing definition in some sense but follow a different approach. This work emphasizes specifically the construction of semigroup and semiring structures on the set of undirected graphs, and their properties are investigated therein. It is expected that the investigation done here may have some interesting applications in theoretical computer science, networking and decision making, and also in the joining of two network systems.
Keywords: graphs, join and union of graphs, semiring, weighted graphs
Procedia PDF Downloads 153
4333 Budget Optimization for Maintenance of Bridges in Egypt
Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham
Abstract:
Allocating a limited budget to maintain bridge networks and selecting effective maintenance strategies for each bridge represent challenging tasks for maintenance managers and decision makers. In Egypt, bridges are continuously deteriorating. In many cases, maintenance works are performed due to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network given the limited available budget using a Genetic Algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters including serviceability requirements, budget allocation, element importance to structural safety and serviceability, bridge impact on the network, and traffic. A questionnaire is conducted to complete the research scope. The proposed model is implemented in software, which provides a friendly user interface. The framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges is presented to validate and test the proposed model with data collected from Transportation Authorities in Egypt. Different scenarios are presented. The results are reasonable, feasible and within an acceptable domain.
Keywords: bridge management systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain
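The optimization step described above (maximize the average network condition subject to a budget cap with a GA) can be sketched compactly. The chromosome below encodes whether each bridge is repaired; the condition gains, repair costs and budget are invented numbers, and the real framework adds Markov deterioration and multi-year planning that this toy omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ten-bridge network: condition gain and cost of repairing each bridge
n_bridges = 10
base_condition = rng.uniform(40, 80, n_bridges)
gain = rng.uniform(5, 20, n_bridges)       # condition improvement if repaired
cost = rng.uniform(1.0, 4.0, n_bridges)    # repair cost (arbitrary money units)
budget = 12.0

def fitness(plan):
    """Average network condition; over-budget plans are penalised."""
    if plan @ cost > budget:
        return -(plan @ cost)
    return np.mean(base_condition + plan * gain)

def genetic_algorithm(pop_size=40, generations=100, p_mut=0.05):
    pop = rng.integers(0, 2, (pop_size, n_bridges))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection
        idx = [max(rng.choice(pop_size, 2), key=lambda i: scores[i]) for _ in range(pop_size)]
        parents = pop[idx]
        children = parents.copy()
        # One-point crossover on consecutive pairs
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_bridges)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # Bit-flip mutation
        flip = rng.random(children.shape) < p_mut
        children[flip] = 1 - children[flip]
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

plan, avg_condition = genetic_algorithm()
print("repair plan:", plan, "average condition:", round(float(avg_condition), 1))
```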
Procedia PDF Downloads 293
4332 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNET. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
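The core Δ-style idea described above — learn the high-minus-low fidelity correction from a small labelled set and add it back to the cheap result — can be sketched with any regressor. The snippet below uses a gradient-boosting model on synthetic descriptors purely as a stand-in for the paper's graph convolutional network; the data, feature dimension and label counts are assumptions, and the semi-supervised GCN component is omitted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in: x are molecular descriptors, low/high are two fidelity levels
n = 500
x = rng.normal(size=(n, 8))
low_fidelity = x @ rng.normal(size=8) + 0.1 * rng.normal(size=n)           # cheap calculation
high_fidelity = low_fidelity + 0.5 * np.sin(x[:, 0]) + 0.2 * x[:, 1] ** 2  # expensive target

# Only a small subset has high-fidelity labels, as in the multi-fidelity setting
labelled = rng.choice(n, size=60, replace=False)

# Learn the correction (high - low) from the labelled subset
delta_model = GradientBoostingRegressor().fit(
    x[labelled], high_fidelity[labelled] - low_fidelity[labelled])

# Predict high fidelity everywhere as low fidelity plus the learned correction
pred = low_fidelity + delta_model.predict(x)
rmse = np.sqrt(np.mean((pred - high_fidelity) ** 2))
print(f"RMSE of corrected predictions: {rmse:.3f}")
```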
Procedia PDF Downloads 46
4331 KCBA, A Method for Feature Extraction of Colonoscopy Images
Authors: Vahid Bayrami Rad
Abstract:
In recent years, the use of artificial intelligence techniques, tools, and methods in processing medical images and health-related applications has been highlighted, and a lot of research has been done in this regard. For example, colonoscopy and the diagnosis of colon lesions are cases in which the diagnostic process can be improved by using image processing and artificial intelligence algorithms, which greatly help doctors. Due to the lack of accurate measurements and the variety of lesions in colonoscopy images, diagnosing the type of lesion is somewhat difficult even for expert doctors. Therefore, by using appropriate software and image processing, doctors can be helped to increase the accuracy of their observations and ultimately improve their diagnoses. Also, by using automatic methods, the process of diagnosing the type of disease can be improved. Therefore, in this paper, a deep learning framework called KCBA, composed of several methods such as K-means clustering, bag of features, and a deep auto-encoder, is proposed to classify colonoscopy lesions. Finally, the proposed method's performance in classifying colonoscopy images is reported based on the experimental results, considering the accuracy criterion.
Keywords: colorectal cancer, colonoscopy, region of interest, narrow band imaging, texture analysis, bag of features
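The bag-of-features stage named in the framework is the most mechanical part and is sketched below: local patch descriptors are clustered with k-means into a visual codebook, each image becomes a histogram of codebook assignments, and a classifier separates lesion from non-lesion. Random vectors stand in for colonoscopy patch descriptors, and the deep auto-encoder stage is left out, so this is an illustrative approximation of KCBA rather than its implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Stand-in data: each "image" is a set of local patch descriptors
n_images, patches_per_image, dim, k = 60, 50, 16, 8
classes = rng.integers(0, 2, n_images)                  # toy lesion / non-lesion labels
images = [rng.normal(loc=c, size=(patches_per_image, dim)) for c in classes]

# 1) K-means codebook over all patch descriptors
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(images))

# 2) Bag of features: normalised histogram of visual-word assignments per image
def bag_of_features(img):
    words = codebook.predict(img)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

features = np.array([bag_of_features(img) for img in images])

# 3) Classify from the histograms (train on the first 40 images, test on the rest)
clf = SVC().fit(features[:40], classes[:40])
print("held-out accuracy:", clf.score(features[40:], classes[40:]))
```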
Procedia PDF Downloads 59
4330 Managerial Advice-Seeking and Supply Chain Resilience: A Social Capital Perspective
Authors: Ethan Nikookar, Yalda Boroushaki, Larissa Statsenko, Jorge Ochoa Paniagua
Abstract:
Given the serious impact that supply chain disruptions can have on a firm's bottom-line performance, both industry and academia are interested in supply chain resilience, a capability of the supply chain that enables it to cope with disruptions. To date, much of the research has focused on the antecedents of supply chain resilience. This line of research has suggested various firm-level capabilities that are associated with greater supply chain resilience. A consensus has emerged among researchers that supply chain flexibility holds the greatest potential to create resilience. Supply chain flexibility achieves resilience by creating readiness to respond to disruptions at little cost and time by means of reconfiguring supply chain resources to mitigate the impacts of the disruption. Decisions related to supply chain disruptions are made by supply chain managers; however, the role played by supply chain managers' reference networks has been overlooked in the supply chain resilience literature. This study aims to understand the impact of supply chain managers on their firms' supply chain resilience. Drawing on social capital theory and social network theory, this paper proposes a conceptual model to explore the role of supply chain managers in developing the resilience of supply chains. Our model posits that a higher level of supply chain managers' embeddedness in their reference network is associated with increased resilience of their firms' supply chain. A reference network includes individuals from whom supply chain managers seek advice on supply chain related matters. The relationships between supply chain managers' embeddedness in their reference network and supply chain resilience are mediated by supply chain flexibility.
Keywords: supply chain resilience, embeddedness, reference networks, social capital
Procedia PDF Downloads 232
4329 An Assessment of Drainage Network System in Nigeria Urban Areas using Geographical Information Systems: A Case Study of Bida, Niger State
Authors: Yusuf Hussaini Atulukwu, Daramola Japheth, Tabitit S. Tabiti, Daramola Elizabeth Lara
Abstract:
In view of the recent limitations faced by the township concerning poorly constructed and, in some cases, non-existent drainage facilities, which have resulted in incessant flooding in some parts of the community and pose a threat to life, property and the environment, the research seeks to address this issue by showing the spatial distribution of the drainage network in Bida Urban using Geographic Information System techniques. Relevant features were extracted from an existing Bida base map using on-screen digitization, and x, y, z data of existing drainages were acquired using a handheld Global Positioning System (GPS). These data were uploaded into ArcGIS 9.2 software and stored in a relational database structure that was used to produce the spatial drainage network data of the township. The result revealed that about 40% of the drainages are blocked with sand and refuse, 35% are water-logged as a result of buildings constructed across erosion channels, and bridges are dilapidated as a result of the lack of drainage along major roads. The study thus concluded that the drainage network system in the Bida community is not in good working condition and that urgent measures must be initiated in order to avoid future disasters, especially with the rainy season setting in. Based on the above findings, the study therefore recommends that people within the locality avoid dumping municipal waste within the drainage path, while sand-blocked or weed-blocked drains should be cleared by the authority concerned. In the same vein, the authority should ensure that contracts for drainage construction are awarded to professionals and that all the natural drainages caused by erosion are addressed to avoid future disasters.
Keywords: drainage network, spatial, digitization, relational database, waste
Procedia PDF Downloads 338
4328 An Early Attempt of Artificial Intelligence-Assisted Language Oral Practice and Assessment
Authors: Paul Lam, Kevin Wong, Chi Him Chan
Abstract:
Constant practice and accurate, immediate feedback are the keys to improving students’ speaking skills. However, traditional oral examinations often fail to provide such opportunities to students. The traditional, face-to-face oral assessment is often time consuming – attending to the oral needs of one student often leads to the neglect of others. Hence, teachers can only provide limited opportunities and feedback to students. Moreover, students’ incentive to practice is also reduced by their anxiety and shyness in speaking the new language. A mobile app was developed that uses artificial intelligence (AI) to provide immediate feedback on students’ speaking performance as an attempt to solve the above-mentioned problems. Firstly, it was thought that online exercises would greatly increase the learning opportunities of students as they can now practice more without the need for teachers’ presence. Secondly, the automatic feedback provided by the AI would enhance students’ motivation to practice as there is an instant evaluation of their performance. Lastly, students should feel less anxious and shy compared to practicing orally directly in front of teachers. Technically, the program makes use of speech-to-text functions to generate feedback for students. To be specific, the software analyzes students’ oral input through a speech-to-text AI engine and then cleans up the results to the point that they can be compared with the targeted text. English teachers were invited to pilot the mobile app and asked for their feedback. Preliminary trials indicated that the approach has limitations. Many of the users’ pronunciation errors were automatically corrected by the speech recognition function, as intelligent guessing is already integrated into many such systems. Nevertheless, teachers are confident that the app can be further improved for accuracy. It has the potential to significantly improve oral drilling by giving students more chances to practice. Moreover, they believe that the success of this mobile app confirms the potential to extend AI-assisted assessment to other language skills, such as writing, reading, and listening.
Keywords: artificial intelligence, mobile learning, oral assessment, oral practice, speech-to-text function
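The comparison step described above — clean up the recognised transcript and score it against the target sentence — can be done with a simple word-level edit distance. The sketch below computes a WER-style score; the normalisation rules and the example sentences are assumptions, since the abstract does not specify the app's actual scoring.

```python
import re

def normalise(text):
    """Lower-case and strip punctuation before comparison."""
    return re.sub(r"[^a-z' ]", "", text.lower()).split()

def word_error_rate(reference, hypothesis):
    """Word-level edit distance between the target text and the recognised speech."""
    ref, hyp = normalise(reference), normalise(hypothesis)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1] / max(len(ref), 1)

target = "The weather is lovely today."
recognised = "the whether is lovely to day"   # hypothetical speech-to-text output
print(f"WER: {word_error_rate(target, recognised):.2f}")  # feedback score for the student
```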
Procedia PDF Downloads 107
4327 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator
Authors: Jaeyoung Lee
Abstract:
Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perceived performance in driving environments that vary with time of day and season. Image segmentation methods using deep learning, which has recently evolved rapidly, stably provide high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance in embedded processor environments equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSP), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA. By increasing the number of parallel branches, the lack of information caused by fixing the number of channels is resolved. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA structure is fixed, normal convolution may be more efficient than depthwise separable convolution depending on the memory access overhead. Thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using the extended atrous spatial pyramid pooling (ASPP). The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape without global average pooling paths, since the layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), which is the highest recognition rate among embedded networks on the Cityscapes validation set.
Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network
Procedia PDF Downloads 134
4326 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further data processing and analysis. These ever-increasing data demands not only require more and more capacity in the transmission channels but also result in resource over-provisioning to meet the resilience requirements, and thus unavoidable waste because of the data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges from these data transmissions pose serious issues to the environment we live in. Therefore, to overcome the issues of intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient and carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communications channel to accommodate data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region are sent to the control centres located in the city centre. The numerical results show that our proposed approach can provide up to 5 times lower delay when transferring large volumes of data by utilizing the existing daily mobility of vehicles than the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach.
Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
Procedia PDF Downloads 147
4325 Outcome of Induction of Labour by Cervical Ripening with an Osmotic Dilator in a District General Hospital
Authors: A. Wahid Uddin
Abstract:
An osmotic dilator for cervical ripening bypasses the initial hormonal exposure required by routine methods of induction. The study was a clinical intervention with an osmotic dilator followed by prospective observation. The aim was to calculate the percentage of women who had successful cervical ripening, assessed using the modified BISHOP score, as evidenced by artificial rupture of membranes. The study also estimated the delivery interval following a single administration of osmotic dilators. Randomly selected patients booked for induction of labour who accepted the intervention were included in the study. The study population comprised singleton term pregnancies with cephalic presentation, intact membranes, and a modified BISHOP score of less than 6. The initial sample recruited was 30, but 6 patients left the study, and it was concluded with 24 patients. The data were collected in a pre-designed questionnaire, and the analysis was expressed in percentages, with mean values used for continuous variables. In 70% of cases, artificial rupture of the membranes was possible, and the mean time from insertion of the osmotic dilator to delivery was 30 hours. The study concluded that an osmotic dilator could be a suitable alternative to hormone-based induction of labour.
Keywords: dilator, induction, labour, osmotic
Procedia PDF Downloads 140
4324 Transmission Line Protection Challenges under High Penetration of Renewable Energy Sources and Proposed Solutions: A Review
Authors: Melake Kuflom
Abstract:
European power networks involve the use of multiple overhead transmission lines to construct a highly duplicated system that delivers reliable and stable electrical energy to the distribution level. The transmission line protection schemes applied in the existing GB transmission network are normally independent unit differential and time-stepped distance protection, referred to as main-1 and main-2 respectively, with overcurrent protection as a backup. The increasing penetration of renewable energy sources, commonly referred to as “weak sources,” into the power network has resulted in a decline in fault level. Traditionally, the fault level of the GB transmission network has been strong; hence the fault current contribution is more than sufficient to ensure the correct operation of the protection schemes. However, numerous conventional coal and nuclear generators have been, or are about to be, shut down due to the societal requirement for CO2 emission reduction, and this has resulted in a reduction in the fault level on some transmission lines; therefore, adaptive transmission line protection is required. Generally, greater utilization of renewable energy sources generated from wind or direct solar energy results in a reduction of CO2 emissions and can increase system security and reliability, but it reduces the fault level, which has an adverse effect on protection. Consequently, the effectiveness of conventional protection schemes under low fault levels needs to be reviewed, particularly for future GB transmission network operating scenarios. The proposed paper will evaluate the transmission line protection challenges under high penetration of renewable energy sources and provide alternative viable protection solutions based on the problems observed. The paper will consider the assessment of renewable energy sources (RES) based on fully rated converter technology. The DIgSILENT Power Factory software tool will be used to model the network.
Keywords: fault level, protection schemes, relay settings, relay coordination, renewable energy sources
Procedia PDF Downloads 212
4323 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), and additionally provided verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
Keywords: RNN, GAN, NLP, facial composition, criminal investigation
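The two evaluation metrics named above, SSIM and PSNR, compare a generated composite with the ground-truth image of the suspect. The sketch below computes both with scikit-image on random placeholder arrays standing in for the real face images; the image size and noise level are arbitrary assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(3)

# Placeholders for a ground-truth mugshot and a GAN-generated composite (grayscale)
ground_truth = rng.integers(0, 256, (128, 128), dtype=np.uint8)
noise = rng.integers(-20, 21, (128, 128))
generated = np.clip(ground_truth.astype(int) + noise, 0, 255).astype(np.uint8)

ssim = structural_similarity(ground_truth, generated, data_range=255)
psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=255)
print(f"SSIM: {ssim:.3f}, PSNR: {psnr:.1f} dB")
```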
Procedia PDF Downloads 167
4322 Optimum Tuning Capacitors for Wireless Charging of Electric Vehicles Considering Variation in Coil Distances
Authors: Muhammad Abdullah Arafat, Nahrin Nowrose
Abstract:
Wireless charging of electric vehicles is becoming more and more attractive as a large amount of power can now be transferred over a reasonable distance using the magnetic resonance coupling method. However, proper tuning of the compensation network is required to achieve maximum power transmission. Due to the variation of coil distance from the nominal value as a result of changes in tire condition, changes in weight, or uneven road conditions, the tuning of the compensation network has become challenging. In this paper, a tuning method is described to determine the optimum values of the compensation network in order to maximize the average output power. The simulation results show that a 5.2 percent increase in average output power is obtained for a 10 percent variation in the coupling coefficient using the optimum values, without the need for additional space or electro-mechanical components. The proposed method is applicable to both static and dynamic charging of electric vehicles.
Keywords: coupling coefficient, electric vehicles, magnetic resonance coupling, tuning capacitor, wireless power transfer
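The basic relation behind choosing a compensation capacitor is series resonance, C = 1/((2πf)²L), with the mutual inductance (and hence the delivered power) shifting as the coupling coefficient varies with coil distance. The numbers below (frequency, inductance, nominal coupling) are illustrative assumptions, not values from the paper.

```python
import math

f = 85e3            # operating frequency in Hz, a common choice for EV wireless charging
L_primary = 200e-6  # primary coil self-inductance in H (illustrative)

# Capacitor that resonates the primary coil at the operating frequency
C = 1 / ((2 * math.pi * f) ** 2 * L_primary)
print(f"nominal tuning capacitance: {C * 1e9:.1f} nF")

# Mutual inductance changes with coil distance, i.e. with the coupling coefficient k
for k in (0.18, 0.20, 0.22):   # roughly +/-10% around a nominal coupling of 0.2
    M = k * L_primary          # assuming equal primary and secondary inductances
    print(f"k = {k:.2f} -> mutual inductance {M * 1e6:.1f} uH")
```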
Procedia PDF Downloads 203
4321 A Low Power Consumption Routing Protocol Based on a Meta-Heuristics
Authors: Kaddi Mohammed, Benahmed Khelifa D. Benatiallah
Abstract:
A sensor network consists of a large number of sensors deployed in an area to monitor it and communicate with each other through a wireless medium. Routing the collected data in the network consumes most of the energy of the sensor nodes. For this reason, multiple routing approaches have been proposed to conserve energy resources at the sensors and to overcome the challenges of their limitation. In this work, we propose a new low-energy-consumption routing protocol for wireless sensor networks based on a meta-heuristic method. Our protocol aims to use energy more fairly when routing captured data to the base station.
Keywords: WSN, routing, energy, heuristic
Procedia PDF Downloads 347
4320 Chinese Undergraduates’ Trust in And Usage of Machine Translation: A Survey
Authors: Bi Zhao
Abstract:
Neural network technology has greatly improved the output of machine translation in terms of both fluency and accuracy, which greatly increases its appeal for young users. The present exploratory study aims to find out how Chinese undergraduates perceive and use machine translation in their daily lives. A survey was conducted to collect data from 100 undergraduate students from multiple Chinese universities and with varied academic backgrounds, including arts, business, science, engineering, and medicine. The survey questions inquire about their use (including frequency, scenarios, purposes, and preferences) of and attitudes (including trust, quality assessment, justifications, and ethics) toward machine translation. Interviews and tasks of evaluating machine translation output were also employed in combination with the survey on a sample of selected respondents. The results indicate that Chinese undergraduate students use machine translation on a daily basis for a wide range of purposes in academic, communicative, and entertainment scenarios. Most of them have preferred machine translation tools, but the availability of machine translation tools within a certain scenario, such as the machine translation tool embedded in a webpage, is also a determining factor in their choice. The results also reveal that despite the reportedly limited trust in the accuracy of machine translation output, most students lack the ability to critically analyze and evaluate such output. Furthermore, the evidence reveals inadequate awareness of ethical responsibility as machine translation users among Chinese undergraduate students.
Keywords: Chinese undergraduates, machine translation, trust, usage
Procedia PDF Downloads 143
4319 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network
Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi
Abstract:
Energy, delay and bandwidth are the prime issues of wireless sensor networks (WSN). Energy usage optimization and efficient bandwidth utilization are important issues in WSN. Event-triggered data aggregation facilitates such optimization for the event-affected area in a WSN. Reliable delivery of critical information to the sink node is also a major challenge for WSN. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSN that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) Whenever an event is triggered, the event-triggered node selects the cluster head. (2) The cluster head gathers data from sensor nodes within the cluster. (3) The cluster head node identifies and classifies the events out of the collected data using a Bayesian classifier. (4) Aggregation of data is done using a statistical method. (5) The cluster head discovers the paths to the sink node using residual energy, path distance and bandwidth. (6) If the aggregated data is critical, the cluster head sends the aggregated data over multiple paths for reliable data communication. (7) Otherwise, the aggregated data is transmitted towards the sink node over the single path that has more bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time and energy consumed for aggregation.
Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication
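Steps (5)-(7) above rank candidate routes by residual energy, path distance and bandwidth, then send critical data over several paths and routine data over the single best one. The sketch below expresses that selection logic with a simple weighted score; the weights, normalised metrics and path data are invented, since the abstract does not give the exact scoring formula.

```python
def path_score(path, w_energy=0.4, w_bandwidth=0.4, w_distance=0.2):
    """Higher is better: favour residual energy and bandwidth, penalise distance."""
    return (w_energy * path["residual_energy"]
            + w_bandwidth * path["bandwidth"]
            - w_distance * path["distance"])

candidate_paths = [  # hypothetical routes from the cluster head to the sink (normalised metrics)
    {"id": "P1", "residual_energy": 0.8, "bandwidth": 0.6, "distance": 0.5},
    {"id": "P2", "residual_energy": 0.5, "bandwidth": 0.9, "distance": 0.3},
    {"id": "P3", "residual_energy": 0.9, "bandwidth": 0.4, "distance": 0.7},
]

ranked = sorted(candidate_paths, key=path_score, reverse=True)

def forward(critical):
    if critical:
        return [p["id"] for p in ranked[:2]]   # redundant multipath delivery for critical data
    return [ranked[0]["id"]]                   # single best path otherwise

print(forward(critical=True))    # the two most robust paths
print(forward(critical=False))   # the single best path
```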
Procedia PDF Downloads 455
4318 Subjective Quality Assessment for Impaired Videos with Varying Spatial and Temporal Information
Authors: Muhammad Rehan Usman, Muhammad Arslan Usman, Soo Young Shin
Abstract:
The new era of digital communication has brought up many challenges that network operators need to overcome. The high demand for mobile data rates requires improved networks, which is a challenge for operators in terms of maintaining the quality of experience (QoE) for their consumers. In live video transmission, there is a sheer need for live monitoring of the videos in order to maintain the quality of the network. For this purpose, objective algorithms are employed to monitor the quality of the videos that are transmitted over a network. In order to test these objective algorithms, subjective quality assessment of the streamed videos is required, as the human eye is the best source of perceptual assessment. In this paper we have conducted a subjective evaluation of videos with varying spatial and temporal impairments. These videos were impaired with frame freezing distortions so that the impact of frame freezing on the quality of experience could be studied. We present subjective Mean Opinion Scores (MOS) for these videos that can be used for fine-tuning the objective algorithms for video quality assessment.
Keywords: frame freezing, mean opinion score, objective assessment, subjective evaluation
Procedia PDF Downloads 498
4317 Comprehensive Evaluation of COVID-19 Through Chest Images
Authors: Parisa Mansour
Abstract:
The coronavirus disease 2019 (COVID-19) was discovered at the end of 2019 and has rapidly spread to various countries around the world. Computed tomography (CT) images have been used as an important alternative to the time-consuming RT-PCR test. However, manual segmentation of CT images alone is a major challenge as the number of suspected cases increases. Thus, accurate and automatic segmentation of COVID-19 infections is urgently needed. Because the imaging features of the COVID-19 infection are varied and similar to the background, existing medical image segmentation methods cannot achieve satisfactory performance. In this work, we try to build a deep convolutional neural network adapted for the segmentation of chest CT images with COVID-19 infections. First, we maintain a large and novel chest CT image database containing 165,667 annotated chest CT images from 861 patients with confirmed COVID-19. Inspired by the observation that the boundary of an infected lung region can be improved by global intensity adjustment, we introduce a feature variable block into the proposed deep CNN, which adjusts the global properties of the features to segment the COVID-19 infection. The proposed feature variable block can effectively and adaptively improve performance in different cases. We combine features at different scales by proposing a progressive atrous spatial pyramid fusion scheme to deal with infection regions of various appearances and shapes. We conducted experiments on data collected in China and Germany and showed that the proposed deep CNN can effectively produce impressive performance.
Keywords: chest, COVID-19, chest image, coronavirus, CT image, chest CT
Procedia PDF Downloads 62
4316 A Multi-Agent System for Accelerating the Delivery Process of Clinical Diagnostic Laboratory Results Using GSM Technology
Authors: Ayman M. Mansour, Bilal Hawashin, Hesham Alsalem
Abstract:
Faster delivery of laboratory test results is one of the most noticeable signs of good laboratory service and is often used as a key performance indicator of laboratory performance. Despite the availability of technology, the delivery time of clinical laboratory test results continues to be a cause of customer dissatisfaction, which makes patients feel frustrated and careless about collecting their laboratory test results. Medical clinical laboratory test results are highly sensitive and could harm patients, especially in severe cases, if they are delivered at the wrong time. Such results affect the treatment given by physicians if they do not arrive at the correct time; efforts should, therefore, be made to ensure faster delivery of lab test results by utilizing a new, trusted, robust and fast system. In this paper, we propose a distributed multi-agent system to enhance and speed up the process of laboratory test result delivery using SMS. The developed system relies on SMS messages because of the wide availability of the GSM network compared to other networks. The software provides the capability of knowledge sharing between different units and different laboratory medical centers. The system was built using Java programming. To implement the proposed system, we had many possible techniques. One of these is to use the peer-to-peer (P2P) model, where all the peers are treated equally and the service is distributed among all the peers of the network. However, for the pure P2P model, it is difficult to maintain the coherence of the network, discover new peers and ensure security. Also, security is quite an important issue since each node is allowed to join the network without any control mechanism. We thus take the hybrid P2P model, a model between the Client/Server model and the pure P2P model, using GSM technology through SMS messages. This model satisfies our needs. A GUI has been developed to provide the laboratory staff with a simple and easy way to interact with the system. This system provides a quick response rate, and decisions are made faster than with manual methods. This will save patients' lives.
Keywords: multi-agent system, delivery process, GSM technology, clinical laboratory results
Procedia PDF Downloads 253
4315 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications, as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6m spatial resolution data from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence – through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
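The rank correlations reported above compare each set of estimates with the survey-based wealth quintiles. The sketch below shows how such Spearman coefficients are typically computed with SciPy; the cluster ratings are random placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)

# Placeholder wealth quintiles (1-5) for 20 clusters, plus two sets of estimates
ground_truth = rng.integers(1, 6, 20)
human_rating = np.clip(ground_truth + rng.integers(-2, 3, 20), 1, 5)
model_rating = np.clip(ground_truth + rng.integers(-1, 2, 20), 1, 5)

rho_human, _ = spearmanr(ground_truth, human_rating)
rho_model, _ = spearmanr(ground_truth, model_rating)
print(f"human reader rho: {rho_human:.2f}, model rho: {rho_model:.2f}")
```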
Procedia PDF Downloads 113
4314 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images
Authors: Masood Varshosaz, Kamyar Hasanpour
Abstract:
In recent years, we have seen growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep the data acquisition cost low, augmentation techniques can be used to create additional data from existing images. There are many such techniques that can help generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, it is important to evaluate the impact of data augmentation on the performance of the deep learning models. In this paper, we evaluated the currently available 2D data augmentation techniques on a standard convolutional network trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combination. The results showed that the augmented models perform 1-3% better compared to a base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. Thus, we suggest a new method that employs simulated 3D human models to generate new data for training the network.
Keywords: human recognition, deep learning, drones, disaster mitigation
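The 2D augmentations listed above (rotation, scaling, random cropping, flipping, shifting) can be combined into a small pipeline, sketched below with Pillow on a synthetic frame. The parameter ranges and output size are assumptions; a real experiment would apply the same transforms, together with their labels, to annotated drone frames.

```python
import random
from PIL import Image, ImageOps

def augment(img, out_size=(224, 224)):
    """Apply a random combination of simple 2D augmentations to one image."""
    img = img.rotate(random.uniform(-15, 15))                       # rotation
    scale = random.uniform(0.8, 1.2)                                # scaling
    img = img.resize((int(img.width * scale), int(img.height * scale)))
    if random.random() < 0.5:                                       # horizontal flip
        img = ImageOps.mirror(img)
    max_dx = max(img.width - out_size[0], 0)                        # shifting + random crop
    max_dy = max(img.height - out_size[1], 0)
    dx, dy = random.randint(0, max_dx), random.randint(0, max_dy)
    return img.crop((dx, dy, dx + out_size[0], dy + out_size[1]))

# Synthetic image standing in for a labelled drone frame
frame = Image.new("RGB", (480, 360), color=(90, 120, 90))
augmented = [augment(frame) for _ in range(5)]   # five extra training samples
print([im.size for im in augmented])
```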
Procedia PDF Downloads 101
4313 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study
Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple
Abstract:
There is a dramatic surge in the adoption of machine learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, and malware detection, to name a few. With the application of learning methods in such diverse domains, artificial intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been on developing underpinning ML algorithms that can improve accuracy, while factors such as the resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and three defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt machine learning techniques in security-critical areas such as the nuclear industry without rigorous testing, since they may be vulnerable to adversarial attacks. While common defence methods can effectively defend against different attacks, none of the three considered can provide protection against all five adversarial attacks analysed.
Keywords: adversarial machine learning, attacks, defences, nuclear industry, crack detection
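The abstract does not name its five attacks, so as an illustration the sketch below implements the Fast Gradient Sign Method (FGSM), one of the classic adversarial attacks such analyses usually include, against a tiny stand-in classifier. The network, tensors and epsilon are all hypothetical; real crack-detection images and the paper's actual attack set would replace them.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in for a crack / no-crack classifier
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
loss_fn = nn.CrossEntropyLoss()

def fgsm(images, labels, epsilon=0.03):
    """Fast Gradient Sign Method: perturb inputs along the sign of the loss gradient."""
    images = images.clone().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

clean = torch.rand(4, 1, 64, 64)          # placeholder concrete-surface patches
labels = torch.tensor([0, 1, 0, 1])
adversarial = fgsm(clean, labels)

with torch.no_grad():
    print("clean predictions:      ", model(clean).argmax(1).tolist())
    print("adversarial predictions:", model(adversarial).argmax(1).tolist())
```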
Procedia PDF Downloads 162
4312 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm
Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio
Abstract:
The paper discusses the potential threat of Denial of Service (DoS) attacks on the constrained application protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, these devices are vulnerable to attacks that disrupt their functioning. This research aims to tackle this issue by applying mixed qualitative and quantitative methods for feature selection and extraction, together with clustering algorithms, to detect DoS attacks on the Constrained Application Protocol (CoAP) using a Machine Learning Algorithm (MLA). The main objective of the research is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: The research identifies the appropriate node for detecting DoS attacks in the IoT network environment and demonstrates how to detect the attacks through the MLA. The detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in the training and testing evaluation. The network simulation platform also achieved the highest overall accuracy of 99.93%. This work reviews conventional intrusion detection systems for securing CoAP in the IoT environment. The DoS security issues associated with CoAP are discussed.
Keywords: algorithm, CoAP, DoS, IoT, machine learning
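A k-means-based detector of the kind reported above clusters per-source traffic features and flags flows that fall into an abnormal cluster or far from their cluster centre. The sketch below does this on invented features (request rate and mean payload size); the feature set, thresholds and attack pattern are assumptions, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Hypothetical per-source CoAP traffic features: [requests/sec, mean payload bytes]
normal = rng.normal(loc=[5, 60], scale=[2, 10], size=(200, 2))
attack = rng.normal(loc=[80, 15], scale=[10, 5], size=(10, 2))   # flooding pattern
flows = np.vstack([normal, attack])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(flows)
distances = np.linalg.norm(flows - kmeans.cluster_centers_[kmeans.labels_], axis=1)

# Flag flows in the small cluster, or unusually far from their centre, as suspected DoS
suspect_cluster = np.argmin(np.bincount(kmeans.labels_))
flagged = (kmeans.labels_ == suspect_cluster) | (distances > distances.mean() + 3 * distances.std())
print("flows flagged as DoS:", int(flagged.sum()), "out of", len(flows))
```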
Procedia PDF Downloads 84
4311 Spontaneous Message Detection of Annoying Situation in Community Networks Using Mining Algorithm
Authors: P. Senthil Kumari
Abstract:
The main concerns in data mining investigations are the social controls of data mining for handling ambiguity, noise, or incompleteness in text data. We describe an innovative approach for the detection of spontaneous text data in community networks, achieved by a classification mechanism. A tangible domain application, with modest privacy requirements provided by the community network for avoiding annoying content, is presented on consumer message partitions. To avoid such content, the mining methodology provides the capability to directly filter the messages and likewise recover the quality of their ordering. Here, we adopt learning-centred mining approaches with a pre-processing technique to complete this effort. Our work deals with rule-based personalization for automatic text categorization, which is appropriate in many dissimilar frameworks and offers a tolerance value that admits the context of comments according to a variety of conditions associated with the policy or rule arrangements processed by the learning algorithm. Remarkably, we find that the chosen classifier predicted the class labels for the control of inappropriate documents on the community network with great effect.
Keywords: text mining, data classification, community network, learning algorithm
Procedia PDF Downloads 513
4310 Analysis of Decentralized on Demand Cross Layer in Cognitive Radio Ad Hoc Network
Authors: A. Sri Janani, K. Immanuel Arokia James
Abstract:
In cognitive radio ad hoc networks, different unlicensed users may acquire different available channel sets. This non-uniform spectrum availability imposes special design challenges for broadcasting in CR ad hoc networks. Cognitive radio automatically detects available channels in the wireless spectrum; this is a form of dynamic spectrum management. Cross-layer optimization is proposed, which can allow faraway secondary users to also be involved in channel use. This can increase throughput and overcome collisions and time delay.
Keywords: cognitive radio, cross layer optimization, CR mesh network, heterogeneous spectrum, mesh topology, random routing optimization technique
Procedia PDF Downloads 362
4309 The Importance of Efficient and Sustainable Water Resources Management and the Role of Artificial Intelligence in Preventing Forced Migration
Authors: Fateme Aysin Anka, Farzad Kiani
Abstract:
Forced migration is a situation in which people are forced to leave their homes against their will due to political conflicts, wars, natural disasters, climate change, economic crises, or other emergencies. This type of migration takes place under conditions where people cannot lead a sustainable life due to reasons such as security, shelter and meeting their basic needs. It may occur in connection with different factors that affect people's living conditions. In addition to these general and widespread reasons, water security and resources are a cause that is emerging now and will be encountered more and more in the future. Forced migration may occur due to insufficient or depleted water resources in the areas where people live. In this case, people's living conditions become unsustainable, and they may have to go elsewhere, as they cannot obtain their basic needs, such as drinking water and water used for agriculture and industry. To cope with these situations, it is important to minimize the causes, and international organizations and societies must provide assistance (for example, humanitarian aid, shelter, medical support and education) and protection to address (or mitigate) this problem. From the international perspective, plans such as the Green New Deal (GND) and the European Green Deal (EGD) draw attention to the need for people to live equally in a cleaner and greener world. Especially recently, with the advancement of technology, science and methods have become more efficient. In this regard, in this article, a multidisciplinary case model is presented by reinforcing the water problem with an engineering approach within the framework of the social dimension. It is worth emphasizing that this problem is largely linked to climate change and the lack of a sustainable water management perspective. As a matter of fact, the United Nations Development Agency (UNDA) draws attention to this problem in its universally accepted sustainable development goals. Therefore, an artificial intelligence-based approach has been applied to solve this problem by focusing on the water management problem. The most general but also most important aspect in the management of water resources is their correct consumption. In this context, the artificial intelligence-based system undertakes tasks such as water demand forecasting and distribution management, emergency and crisis management, water pollution detection and prevention, and maintenance and repair control and forecasting.
Keywords: water resource management, forced migration, multidisciplinary studies, artificial intelligence
Procedia PDF Downloads 89