Search results for: computer vision on embedded systems
Paper Count: 12309

12129 3D Object Retrieval Based on Similarity Calculation in 3D Computer Aided Design Systems

Authors: Ahmed Fradi

Abstract:

Nowadays, recent technological advances in the acquisition, modeling, and processing of three-dimensional (3D) object data have led to the creation of models stored in huge databases, which are used in various domains such as computer vision, augmented reality, the game industry, medicine, CAD (computer-aided design), 3D printing, etc. The industry also benefits from powerful modeling tools that enable designers to produce 3D models easily and quickly. This ease of acquisition and modeling makes it possible to create large 3D model databases, which then become difficult to navigate. The indexing of 3D objects therefore appears as a necessary and promising solution to manage this type of data, extract model information, retrieve an existing model, or calculate the similarity between 3D objects. The objective of the proposed research is to develop a framework allowing easy and fast access to 3D objects in a CAD model database, with a specific indexing algorithm to find objects similar to a reference model. Our main objectives are to study existing methods for calculating the similarity of 3D objects (essentially shape-based methods), specifying the characteristics of each method as well as the differences between them, and then to propose a new approach for indexing and comparing 3D models that suits our case study and builds on some of the studied methods. The proposed approach is finally illustrated by an implementation and evaluated in a professional context.
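
As an illustration of the shape-based similarity methods surveyed here, the sketch below compares models via a D2 shape distribution (a histogram of distances between randomly sampled surface points), one classical shape descriptor. The abstract does not name a specific descriptor, so the function names and parameters are illustrative assumptions only.

```python
import numpy as np

def d2_descriptor(points, n_pairs=5000, bins=64, seed=None):
    """D2 signature: normalized histogram of distances between random point pairs."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def similarity(desc_a, desc_b):
    """Similarity in [0, 1] derived from the L1 distance between two D2 histograms."""
    return 1.0 - 0.5 * np.abs(desc_a - desc_b).sum()

# Rank a toy database of point-sampled CAD models against a query (random placeholders).
database = {name: d2_descriptor(np.random.rand(2000, 3), seed=k)
            for k, name in enumerate(("bracket", "gear", "flange"))}
query = d2_descriptor(np.random.rand(2000, 3), seed=42)
print(sorted(database, key=lambda name: similarity(database[name], query), reverse=True))
```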

Keywords: CAD, 3D object retrieval, shape based retrieval, similarity calculation

Procedia PDF Downloads 238
12128 Preliminary Proposal to Use Adaptive Computer Games in the Virtual Rehabilitation Therapy

Authors: Mamoun S. Ideis, Zein Salah

Abstract:

Virtual Rehabilitation (VR) refers to using Virtual Reality hardware and simulations as exercising tools to rehabilitate patients in need. These patients undergo their treatment exercises while playing different computer games, which helps achieve greater motivation during therapeutic exercises. Virtual Rehabilitation systems adopt computer games as part of the treatment therapy. In this paper, we present a preliminary proposal for using adaptive computer games in Virtual Rehabilitation therapy. We also present some guidelines for designing those adaptive computer games using different machine learning algorithms in order to create a personalized experience for each patient, which in turn increases the potential benefit of the treatment that each patient receives. Furthermore, we propose a method of comparing the results of treatment using adaptive computer games with the results of using static, classical computer games.

Keywords: virtual rehabilitation, physiotherapy, adaptive computer games, post-stroke, game design

Procedia PDF Downloads 269
12127 FloodNet: Classification for Post-Flood Scene with a High-Resolution Aerial Imagery Dataset

Authors: Molakala Mourya Vardhan Reddy, Kandimala Revanth, Koduru Sumanth, Beena B. M.

Abstract:

Emergency response and recovery operations are severely hampered by natural catastrophes, especially floods. Understanding post-flood scenes is essential to disaster management because it facilitates quick evaluation and decision-making. To this end, we introduce FloodNet, a new high-resolution aerial image dataset created specifically for post-flood scene understanding. FloodNet consists of a varied collection of high-quality aerial images taken during and after flood events, offering comprehensive representations of flooded landscapes, damaged infrastructure, and altered topographies. The dataset covers a variety of environmental conditions and geographic regions and thus provides a thorough resource for training and assessing computer vision models designed to handle the complexity of post-flood scenes. The images in FloodNet are labeled with pixel-level semantic segmentation masks, allowing detailed examination of flood-related characteristics, including debris, water bodies, and damaged structures. Furthermore, temporal and positional metadata improve the dataset's usefulness for longitudinal research and spatiotemporal analysis. For tasks such as flood extent mapping, damage assessment, and infrastructure recovery projection, we provide baselines and evaluation metrics to promote research and development in post-flood scene comprehension. Integrating FloodNet into machine learning pipelines will make it easier to create reliable algorithms that help politicians, urban planners, and first responders make decisions both before and after floods. The FloodNet dataset aims to support advances in computer vision, remote sensing, and disaster response technologies by providing a useful resource for researchers, and it helps create solutions for boosting communities' resilience in the face of natural catastrophes by tackling the particular problems posed by post-flood situations.
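
Since FloodNet is labeled with pixel-level semantic masks, a natural evaluation metric for models trained on it is per-class intersection-over-union; the short sketch below shows that computation. The class IDs used are hypothetical, not the dataset's actual label map.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class IoU between two integer label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        ious.append(np.logical_and(p, g).sum() / union if union else np.nan)
    return np.array(ious)

# Hypothetical 3-class example: 0 = background, 1 = water, 2 = damaged structure.
pred = np.random.randint(0, 3, (256, 256))
gt = np.random.randint(0, 3, (256, 256))
iou = per_class_iou(pred, gt, num_classes=3)
print("per-class IoU:", iou.round(3), "mIoU:", np.nanmean(iou).round(3))
```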

Keywords: image classification, segmentation, computer vision, natural disaster, unmanned aerial vehicle (UAV), machine learning

Procedia PDF Downloads 35
12126 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulation is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach utilizes autoencoder models. Autoencoders are neural networks trained to learn representations of the training data and reconstruct inputs from these representations, typically by minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibits high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI architectures in mind. To mitigate this, we propose a method that replaces image-specific dimensions with a structure independent of both the image dimensions and the neural network model, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments on diverse datasets and subjected them to adversarial attacks crafted against models such as ResNet50 and ViT_L_16 from the torchvision library. The features extracted by the autoencoder were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
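
The core detection idea described above can be summarized in a few lines: an autoencoder trained only on benign images reconstructs adversarial inputs poorly, so a reconstruction-MSE threshold flags them. The sketch below is a minimal stand-in, not the paper's multi-modal spectral autoencoder; the architecture and the threshold value are assumptions (the abstract reports an MSE (RGB) of 0.014 for its features).

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder standing in for the paper's multi-modal model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def is_adversarial(model, images, threshold=0.014):
    """Flag images whose per-image reconstruction MSE exceeds the (assumed) threshold."""
    recon = model(images)
    mse = ((images - recon) ** 2).flatten(1).mean(dim=1)
    return mse > threshold

model = ConvAutoencoder().eval()        # in practice: trained on benign examples only
batch = torch.rand(4, 3, 256, 256)      # placeholder inputs in [0, 1]
print(is_adversarial(model, batch))
```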

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 46
12125 Real-Time Generative Architecture for Mesh and Texture

Authors: Xi Liu, Fan Yuan

Abstract:

In the evolving landscape of physics-based machine learning (PBML), particularly within fluid dynamics and its applications in electromechanical engineering, robot vision, and robot learning, achieving precision and alignment with researchers' specific needs presents a formidable challenge. In response, this work proposes a methodology that integrates neural transformation with a modified smoothed particle hydrodynamics (SPH) model for generating transformed 3D fluid simulations. This approach is useful for nanoscale science, where the unique and complex behaviors of viscoelastic media demand accurate neurally transformed simulations for materials understanding and manipulation. In electromechanical engineering, the method enhances the design and functionality of fluid-operated systems, particularly microfluidic devices, contributing to advances in nanomaterial design, drug delivery systems, and more. The proposed approach also aligns with the principles of PBML, offering advantages such as multi-fluid stylization and consistent particle attribute transfer. This capability is valuable in fields where the interaction of multiple fluid components is significant. Moreover, the application of neurally transformed hydrodynamical models extends to manufacturing processes, such as the production of microelectromechanical systems, enhancing efficiency and cost-effectiveness. The system's ability to perform neural transfer on 3D fluid scenes using a deep learning algorithm alongside physical models adds a further layer of flexibility, allowing researchers to tailor simulations to specific needs across scientific and engineering disciplines.
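
The neural transformation itself cannot be reproduced from the abstract, but the SPH backbone it modifies rests on the standard summation-density step; a minimal sketch of that step follows, with the kernel choice, particle masses, and smoothing length all being assumptions.

```python
import numpy as np

def poly6_kernel(r, h):
    """Poly6 smoothing kernel, commonly used for SPH density estimation."""
    w = np.zeros_like(r)
    inside = r < h
    w[inside] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[inside]**2) ** 3
    return w

def sph_density(positions, masses, h=0.1):
    """Summation density: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * poly6_kernel(r, h)).sum(axis=1)

positions = np.random.rand(500, 3) * 0.5    # particle positions in a 0.5 m box (placeholder)
masses = np.full(500, 0.02)                 # kg per particle (hypothetical)
print(sph_density(positions, masses)[:5])
```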

Keywords: physics-based machine learning, robot vision, robot learning, hydrodynamics

Procedia PDF Downloads 38
12124 Causes of Blindness and Low Vision among Visually Impaired Population Supported by Welfare Organization in Ardabil Province in Iran

Authors: Mohammad Maeiyat, Ali Maeiyat Ivatlou, Rasul Fani Khiavi, Abouzar Maeiyat Ivatlou, Parya Maeiyat

Abstract:

Purpose: Considering that visual impairment is still one of the country's health problems, this study was conducted to determine the causes of blindness and low vision among visually impaired members of the Ardabil Province welfare organization. Methods: This descriptive, national-census-based study was carried out on the visually impaired population supported by the welfare organization in all urban and rural areas of Ardabil Province in 2013; sample collection lasted 7 months. The subjects were examined by an optometrist to determine their visual status (blindness or low vision) and then referred to an ophthalmologist to identify the main causes of visual impairment based on the International Classification of Diseases, version 10. Statistical analysis of the collected data was performed using SPSS software, version 18. Results: Overall, 403 subjects with a mean age of years participated in this study; 73.2% were blind and 26.8% had low vision. By gender, 60.50% were male and 39.50% were female, divided into three age groups: lower than 15 (11.2%), 15 to 49 (76.7%), and 50 and higher (12.1%). The age range was 1 to 78 years. The causes of blindness and low vision were, in descending order: optic atrophy (18.4%), retinitis pigmentosa (16.8%), corneal diseases (12.4%), chorioretinal diseases (9.4%), cataract (8.9%), glaucoma (8.2%), phthisis bulbi (7.2%), degenerative myopia (6.9%), microphthalmos (4%), amblyopia (3.2%), albinism (2.5%), and nystagmus (2%). Conclusion: In this study, the main causes of visual impairment were optic atrophy and retinitis pigmentosa; thus, specific prevention plans can be effective in reducing the incidence of visual disabilities.

Keywords: blindness, low vision, welfare, ardabil

Procedia PDF Downloads 411
12123 Machine Learning and Deep Learning Approach for People Recognition and Tracking in Crowd for Safety Monitoring

Authors: A. Degale Desta, Cheng Jian

Abstract:

Deep learning applications in computer vision are rapidly advancing, making it possible to monitor the public and quickly identify potentially anomalous behaviour in crowd scenes. The purpose of the current work is therefore to improve the safety of people in crowd events by detecting panic behaviour, introducing the idea of Aggregation of Ensembles (AOE), which makes use of pre-trained ConvNets and a pool of classifiers to find anomalies in video data with packed scenes. Building on algorithms such as K-means, KNN, CNN, SVD, Faster R-CNN, and YOLOv5, which learn different levels of semantic representation from crowd videos, the proposed approach leverages an ensemble of fine-tuned convolutional neural networks (CNNs), allowing the extraction of enriched feature sets. In addition, a long short-term memory neural network is used to forecast future feature values, together with a handcrafted feature that takes into account the peculiarities of the crowd in order to understand human behavior. Experiments are run on well-known datasets of panic situations to assess the effectiveness and precision of the suggested method. Results reveal that, compared to state-of-the-art methodologies, the system produces better and more promising results in terms of accuracy and processing speed.
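
The abstract does not spell out the Aggregation of Ensembles rule, so the sketch below only illustrates one plausible reading: soft voting over the per-frame anomaly scores produced by several fine-tuned ConvNet members. The scores and member names are hypothetical.

```python
import numpy as np

def aggregate_ensemble(probabilities, weights=None):
    """Soft voting: weighted average of anomaly probabilities from several classifiers."""
    probabilities = np.asarray(probabilities)             # shape: (n_models, n_frames)
    weights = np.ones(len(probabilities)) if weights is None else np.asarray(weights)
    return np.tensordot(weights / weights.sum(), probabilities, axes=1)

# Hypothetical per-frame panic scores from three ensemble members.
scores = [np.array([0.10, 0.20, 0.90, 0.80]),   # fine-tuned ConvNet A
          np.array([0.20, 0.10, 0.80, 0.90]),   # fine-tuned ConvNet B
          np.array([0.05, 0.30, 0.70, 0.95])]   # detector-based member
fused = aggregate_ensemble(scores)
print("panic frames:", np.where(fused > 0.5)[0])
```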

Keywords: action recognition, computer vision, crowd detecting and tracking, deep learning

Procedia PDF Downloads 127
12122 Material Failure Process Simulation by Improved Finite Elements with Embedded Discontinuities

Authors: Gelacio Juárez-Luna, Gustavo Ayala, Jaime Retama-Velasco

Abstract:

This paper shows the advantages of simulating the material failure process with improved finite elements with embedded discontinuities, using a new definition of the traction vector that depends on the discontinuity length and angle. In particular, two families of such elements are compared: kinematically optimal symmetric, and statically and kinematically optimal non-symmetric. The constitutive model describing the material behavior in the symmetric formulation is a traction-displacement jump relationship equipped with softening after the failure surface is reached. To show the validity of this symmetric formulation, representative numerical examples illustrating its performance are presented. It is shown that the non-symmetric family may over- or underestimate the energy required to create a discontinuity, because this effect is related to the total length of the discontinuity, something that is not noticed when the discontinuity path is a straight line.
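
The traction-displacement jump law with softening is not written out in the abstract; as a generic example of that class of constitutive relation (not the authors' exact expression), an exponential softening cohesive law reads:

```latex
% Generic cohesive law on the discontinuity: traction t as a function of the
% displacement jump [[u]], with exponential softening after the tensile strength
% f_t is reached; G_f denotes the fracture energy.
t\bigl(\llbracket u \rrbracket\bigr)
  = f_t \exp\!\left(-\frac{f_t}{G_f}\,\llbracket u \rrbracket\right),
\qquad \llbracket u \rrbracket \ge 0 .
```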

Keywords: variational formulation, strong discontinuity, embedded discontinuities, strain localization

Procedia PDF Downloads 757
12121 Embedded Digital Image System

Authors: Dawei Li, Cheng Liu, Yiteng Liu

Abstract:

This paper introduces an embedded digital image system for a Chinese space-environment vertical-exploration sounding rocket. In order to record the flight status of the sounding rocket as well as the payloads, an onboard embedded image processing system based on the ADV212, a JPEG2000 compression chip, is designed in this paper. Since the sounding rocket is not designed to be recovered, all image data must be transmitted to the ground station before re-entry, while the downlink band available for image transmission is only about 600 kbps. At the same compression ratio, the JPEG2000 standard achieves better image quality than other algorithms, so JPEG2000 compression is applied under this limited downlink bandwidth. The embedded image system supports lossless to 200:1 real-time compression, with two cameras to monitor nose ejection and motor separation, and two cameras to monitor boom deployment. The ADV7182 receives the PAL signal from the cameras and outputs an ITU-R BT.656 signal to the ADV212, switching between the four input video channels according to the programmed sequence. Two SRAMs are used for ping-pong operation, and one 512 Mb SDRAM buffers high-frame-rate images. The whole image system has the characteristics of low power dissipation, low cost, small size, and high reliability, which makes it well suited to this sounding rocket application.

Keywords: ADV212, image system, JPEG2000, sounding rocket

Procedia PDF Downloads 397
12120 Double Layer Security Model for Identification Friend or Foe

Authors: Buse T. Aydın, Enver Ozdemir

Abstract:

In this study, a double-layer authentication scheme between an aircraft and the Air Traffic Control (ATC) tower is designed to prevent unauthorized aircraft from introducing themselves as friends. The method is a combination of classical cryptographic methods and new-generation physical-layer techniques. The first layer employs the embedded key of the aircraft, which is assumed to be installed during construction. The other layer is a physical attribute (flight path, distance, etc.) between the aircraft and the ATC tower. We create a mathematical model in which the information of both layers is employed, and an aircraft is authenticated as friend or foe according to the accuracy of the model's results. The values computed by the aircraft are compared with those computed by the ATC tower, and if they match within a certain error margin, we mark the aircraft as a friend. In this method, even if the embedded key is captured by an enemy aircraft, the enemy, lacking the second-layer information, can easily be detected. Overall, in this work, we present a more reliable system by adding a physical layer to the authentication process.
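
The mathematical model itself is not given in the abstract; the sketch below only illustrates the two-layer decision rule it describes, combining a challenge-response on the embedded key with a tolerance check on a physical-layer estimate. The HMAC construction, the use of range as the physical attribute, and the tolerance value are all assumptions.

```python
import hashlib
import hmac

def key_layer_response(embedded_key: bytes, challenge: bytes) -> bytes:
    """Layer 1: classical challenge-response derived from the embedded key."""
    return hmac.new(embedded_key, challenge, hashlib.sha256).digest()

def authenticate(aircraft_response: bytes, expected_response: bytes,
                 aircraft_range_km: float, atc_range_km: float,
                 tolerance_km: float = 0.5) -> str:
    """Mark the aircraft as friend only if both layers agree (tolerance is an assumption)."""
    key_ok = hmac.compare_digest(aircraft_response, expected_response)
    physical_ok = abs(aircraft_range_km - atc_range_km) <= tolerance_km   # layer 2
    return "friend" if key_ok and physical_ok else "foe"

embedded_key = b"installed-at-construction"      # layer 1 secret (placeholder)
challenge = b"atc-nonce-0001"
response = key_layer_response(embedded_key, challenge)
print(authenticate(response, key_layer_response(embedded_key, challenge), 42.3, 42.1))
```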

Keywords: ADS-B, communication with physical layer security, cryptography, identification friend or foe

Procedia PDF Downloads 132
12119 Three Dimensional Vibration Analysis of Carbon Nanotubes Embedded in Elastic Medium

Authors: M. Shaban, A. Alibeigloo

Abstract:

This paper studies the free vibration behavior of single-walled carbon nanotubes (SWCNTs) embedded in an elastic medium, based on the three-dimensional theory of elasticity. To account for the size effect of carbon nanotubes, nonlocal theory is incorporated into the shell model, and the nonlocal parameter enters all constitutive equations in three dimensions. The surrounding medium is modeled as a two-parameter elastic foundation. By using Fourier series expansions in the axial and circumferential directions, the set of coupled governing equations is reduced to ordinary differential equations in the thickness direction. Then the state-space method, an efficient and accurate method, is used to solve the resulting equations analytically. Comprehensive parametric studies are carried out to show the influence of the nonlocal parameter, the radial and shear elastic stiffnesses, the thickness-to-radius ratio, and the radius-to-length ratio.
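
For orientation, the two ingredients named above take the following generic forms: Eringen's nonlocal constitutive relation with nonlocal parameter (e_0 a), and a two-parameter (Winkler-Pasternak) foundation with radial stiffness k_w and shear stiffness k_g. These are standard textbook forms, not the paper's full derivation.

```latex
% Nonlocal constitutive relation and two-parameter foundation pressure (generic forms).
\bigl(1 - (e_0 a)^2 \nabla^2\bigr)\,\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl},
\qquad
p = k_w\, w - k_g\, \nabla^2 w .
```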

Keywords: carbon nanotubes, embedded, nonlocal, free vibration

Procedia PDF Downloads 419
12118 Spontaneous and Posed Smile Detection: Deep Learning, Traditional Machine Learning, and Human Performance

Authors: Liang Wang, Beste F. Yuksel, David Guy Brizan

Abstract:

A computational model of affect that distinguishes between spontaneous and posed smiles with no errors on a large, popular data set using deep learning techniques is presented in this paper. A Long Short-Term Memory (LSTM) classifier, a type of Recurrent Neural Network, is utilized and compared to human classification. Results showed that while human classification (mean of 0.7133) was above chance, the LSTM model was more accurate than both human classification and other comparable state-of-the-art systems. Additionally, a high accuracy rate was maintained with small amounts of training videos (70 instances). The important features behind the success of the computational model were derived and analyzed, and it was inferred that thousands of pairs of points within the eyes and mouth are important throughout all time segments of a smile. This suggests that distinguishing between a posed and a spontaneous smile is a complex task, which may account for the difficulty and lower accuracy of human classification compared to machine learning models.
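
A minimal sketch of the kind of LSTM sequence classifier described, operating on per-frame landmark features; the input dimensionality, hidden size, and sequence length are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class SmileLSTM(nn.Module):
    """Classify a sequence of per-frame facial-landmark features as posed vs. spontaneous."""
    def __init__(self, feature_dim=136, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)       # 0 = posed, 1 = spontaneous

    def forward(self, x):                          # x: (batch, frames, feature_dim)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])                  # logits from the last hidden state

model = SmileLSTM()
clips = torch.randn(8, 90, 136)                    # 8 clips, 90 frames, 68 landmarks (x, y)
logits = model(clips)
print(logits.argmax(dim=1))
```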

Keywords: affective computing, affect detection, computer vision, deep learning, human-computer interaction, machine learning, posed smile detection, spontaneous smile detection

Procedia PDF Downloads 105
12117 Low-Cost Embedded Biometric System Based on Fingervein Modality

Authors: Randa Boukhris, Alima Damak, Dorra Sellami

Abstract:

Fingervein biometric authentication is one of the most popular and accurate technologies. However, a low-cost embedded solution is still an open problem. In this paper, a real-time implementation of the fingervein recognition process embedded in a Raspberry Pi is proposed. The use of the Raspberry Pi reduces overall system cost and size while allowing an easy user interface. The target platform guided the choice of simple, parallelizable processing algorithms. In the proposed system, we use four structural directional kernel elements for filtering finger vein images. Then, Top-Hat and Bottom-Hat kernel filters are used to enhance the visibility and appearance of the vein images. For the feature extraction step, a simple Local Directional Code (LDC) descriptor is applied. The proposed system achieves an Equal Error Rate (EER) of 0.02 and an Identification Rate (IR) of 98%. Furthermore, experimental results show that real-time operation achieves good performance.
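
A minimal OpenCV sketch of the enhancement stage described above (directional kernels followed by Top-Hat and Bottom-Hat filtering); the kernel shapes and sizes are assumptions, the input filename is hypothetical, and the LDC descriptor itself is not reproduced here.

```python
import cv2
import numpy as np

def enhance_fingervein(gray):
    """Directional smoothing followed by Top-Hat / Bottom-Hat contrast enhancement."""
    # Four simple directional line kernels: horizontal, vertical, and two diagonals.
    k = np.zeros((7, 7), np.float32); k[3, :] = 1.0 / 7.0
    kernels = [k, np.rot90(k), np.eye(7, dtype=np.float32) / 7.0,
               np.fliplr(np.eye(7, dtype=np.float32)) / 7.0]
    smoothed = np.max([cv2.filter2D(gray, -1, kk) for kk in kernels], axis=0)

    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    top = cv2.morphologyEx(smoothed, cv2.MORPH_TOPHAT, se)       # bright details
    bottom = cv2.morphologyEx(smoothed, cv2.MORPH_BLACKHAT, se)  # dark vein structures
    return cv2.add(cv2.subtract(smoothed, bottom), top)

roi = cv2.imread("finger_roi.png", cv2.IMREAD_GRAYSCALE)         # hypothetical ROI image
if roi is not None:
    cv2.imwrite("finger_enhanced.png", enhance_fingervein(roi))
```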

Keywords: biometric, Bottom-Hat, fingervein, LDC, Raspberry-Pi, ROI, Top-Hat

Procedia PDF Downloads 182
12116 Monocular Depth Estimation Benchmarking with Thermal Dataset

Authors: Ali Akyar, Osman Serdar Gedik

Abstract:

Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera; it predicts how far each pixel in the 2D image is from the capturing point. Several important Monocular Depth Estimation (MDE) studies are based on Vision Transformers (ViT), and we benchmark three major ones. The first work aims to build a simple and powerful foundation model that handles any image under any condition. The second proposes a method that mixes multiple datasets during training and uses a robust training objective. The third combines generalization performance with state-of-the-art results on specific datasets. Although there are also studies that use thermal images, we wanted to benchmark these three non-thermal, state-of-the-art studies on a hybrid image dataset captured with Multi-Spectral Dynamic Imaging (MSX) technology. MSX produces detailed thermal images by bringing together the thermal and visual spectrums. Using this technology, our dataset images are not as blurred and poorly detailed as typical thermal images; on the other hand, they are not captured under the ideal lighting conditions of RGB images. We compared the three methods under test on our thermal dataset, which had not been done before. Additionally, we propose an image enhancement deep learning model for thermal data that helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after using our proposed model, the performance of the three methods under test increases significantly for thermal-image depth prediction.
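
The enhancement model is not specified in the abstract; what can be sketched here are the standard monocular-depth error metrics such a benchmark typically reports (AbsRel, RMSE, and the delta < 1.25 accuracy), computed on hypothetical depth maps.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    """Standard MDE benchmark metrics on valid (gt > 0) pixels."""
    mask = gt > 0
    pred, gt = pred[mask] + eps, gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)               # fraction of pixels within 25% of gt
    return {"AbsRel": abs_rel, "RMSE": rmse, "delta<1.25": delta1}

# Hypothetical depth maps in metres (e.g., aligned predictions from a ViT-based MDE model).
gt = np.random.uniform(0.5, 20.0, (480, 640))
pred = gt * np.random.uniform(0.9, 1.1, gt.shape)
print(depth_metrics(pred, gt))
```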

Keywords: monocular depth estimation, thermal dataset, benchmarking, vision transformers

Procedia PDF Downloads 12
12115 Data Hiding by Vector Quantization in Color Image

Authors: Yung Gi Wu

Abstract:

With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security becomes an important topic in the protection of digital data. Digital watermarking is a method to protect the ownership of digital data, although embedding a watermark inevitably influences image quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, meaning that users will not be aware of the existence of the embedded watermark even though the watermarked image differs only slightly from the original image. Meanwhile, VQ carries a heavy computational burden, so we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks hidden in the image can be gray-level, bi-level, or color images; text can also be embedded as a watermark. In order to test the robustness of the system, we use Photoshop to apply sharpening, cropping, and alteration, and then check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist these three kinds of tampering in general cases.
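
The partial distortion search (PDS) speed-up mentioned above abandons the squared-distance accumulation for a codeword as soon as it exceeds the best distortion found so far; a minimal sketch with a random codebook follows.

```python
import numpy as np

def pds_nearest_codeword(vector, codebook):
    """Partial distortion search: early-exit squared-distance scan over the codebook."""
    best_index, best_dist = 0, np.inf
    for idx, codeword in enumerate(codebook):
        dist = 0.0
        for v, c in zip(vector, codeword):
            dist += (v - c) ** 2
            if dist >= best_dist:          # abandon this codeword early
                break
        else:
            best_index, best_dist = idx, dist
    return best_index, best_dist

codebook = np.random.rand(256, 16)          # 256 codewords for 4x4 image blocks (placeholder)
block = np.random.rand(16)
index, distortion = pds_nearest_codeword(block, codebook)
print(index, distortion)
```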

Keywords: data hiding, vector quantization, watermark, color image

Procedia PDF Downloads 338
12114 Evaluation and Analysis of ZigBee-Based Wireless Sensor Network: Home Monitoring as Case Study

Authors: Omojokun G. Aju, Adedayo O. Sule

Abstract:

The ZigBee wireless sensor and control network is one of the most widely deployed wireless technologies in recent years, because ZigBee is an open-standard, lightweight, low-cost, low-speed, low-power protocol that allows true interoperability between systems. It is built on the existing IEEE 802.15.4 protocol and therefore combines the IEEE 802.15.4 features with newly added features to meet required functionalities, thereby finding applications in a wide variety of wireless networked systems. ZigBee's current focus is on embedded applications of general-purpose, inexpensive, self-organising networks which require low to medium data rates, a high number of nodes, and very low power consumption, such as home/industrial automation, embedded sensing, medical data collection, smart lighting, safety and security sensor networks, and monitoring systems. Although the ZigBee design specification includes security features to protect the confidentiality and integrity of data communication, security is normally traded off when simplicity and low cost are the goals. Much research has been carried out on ZigBee technology, with emphasis mainly placed on ZigBee network performance characteristics such as energy efficiency, throughput, robustness, packet delay, and delivery ratio in different scenarios and applications. This paper investigates and analyses the data accuracy, network implementation difficulties, and security challenges of ZigBee network applications in star-based and mesh-based topologies, with emphasis on its home monitoring application using ZigBee ProBee ZE-10 development boards for the network setup. The paper also exposes some factors that need to be considered when designing ZigBee network applications and suggests ways in which ZigBee networks can be designed to be more resilient to network attacks.

Keywords: home monitoring, IEEE 802.15.4, topology, wireless security, wireless sensor network (WSN), ZigBee

Procedia PDF Downloads 356
12113 Exploring Dynamics of Regional Creative Economy

Authors: Ari Lindeman, Melina Maunula, Jani Kiviranta, Ronja Pölkki

Abstract:

The aim of this paper is to build a vision for utilizing creative industry competences in industrial and service firms connected to the smart specialization focus areas of the Kymenlaakso region, Finland. Research indicates that creativity and the use of creative industry inputs can enhance innovation and competitiveness. Currently, creative methods and services are underutilized in regional businesses, and the added value they provide is not well grasped. Methodologically, the research adopts a qualitative exploratory approach. Data are collected in multiple ways, including a survey, focus groups, and interviews. Theoretically, the paper contributes to the discussion about the use of creative industry competences in regional development, and argues for building regional creative economy ecosystems in close co-operation with regional strategies and traditional industries rather than treating regional creative industry ecosystem initiatives as separate from them. The practical contribution of the paper is a creative vision for regional authorities to use in updating the smart specialization strategy and boosting the competitiveness of the industrial and creative & cultural sectors. The paper also illustrates a research-based model of vision building.

Keywords: business, cooperation, creative economy, regional development, vision

Procedia PDF Downloads 102
12112 Drastic Improvement in Vision Following Surgical Excision of Juvenile Nasopharyngeal Angiofibroma with Compressive Optic Neuropathy

Authors: Sweta Das

Abstract:

This case report describes a 15-year-old male who presented with painless unilateral vision loss from left optic nerve compression due to juvenile nasopharyngeal angiofibroma (JNA). JNA is a rare, benign neoplasm that causes intracranial and intraorbital bone destruction and extends aggressively into surrounding soft tissues. It accounts for <1% of all head and neck tumors, is found predominantly in pediatric males, and tends to affect indigenous populations disproportionately. The most common presenting symptoms of JNA are epistaxis and nasal obstruction; however, it can invade the orbit, chiasm, and pituitary gland, causing loss of vision and visual field. Visual acuity and function near-normalized following surgical excision. Optometry plays an important role in the diagnosis and co-management of JNA with optic nerve compression by closely monitoring afferent optic nerve function and structure, and extraocular motility. As this case demonstrates, visual function and acuity in patients with short-term compressive neuropathy may improve drastically following surgical resection.

Keywords: orbital mass, painless monocular vision loss, compressive optic neuropathy, pediatric tumor

Procedia PDF Downloads 36
12111 Residual Stress Around Embedded Particles in Bulk YBa2Cu3Oy Samples

Authors: Anjela Koblischka-Veneva, Michael R. Koblischka

Abstract:

To increase the flux pinning performance of bulk YBa2Cu3O7-δ (YBCO or Y-123) superconductors, it is common to employ secondary-phase particles embedded in the superconducting Y-123 matrix, either Y2BaCuO5 (Y-211) particles created during the growth of the samples or additionally added (nano)particles of various types. As the crystallographic parameters of all these particles indicate a misfit to Y-123, residual strain arises within the Y-123 matrix around such particles. With a dedicated analysis of electron backscatter diffraction (EBSD) data obtained on various bulk Y-123 superconductor samples, the strain distribution around such embedded secondary-phase particles can be revealed. The results obtained are presented in the form of Kernel Average Misorientation (KAM) mappings. Around large Y-211 particles, the strain can be so large that YBCO subgrains are formed. It is therefore essential to properly control the particle size as well as the particle distribution within the bulk sample to obtain the best performance. The impact of the strain distribution on the flux pinning properties is discussed.

Keywords: bulk superconductors, EBSD, strain, YBa2Cu3Oy

Procedia PDF Downloads 120
12110 Classical Myths in Modern Drama: A Study of the Vision of Jean Anouilh in Antigone

Authors: Azza Taha Zaki

Abstract:

Modern drama was characterised by realism and naturalism as dominant literary movements that focused on contemporary people and their issues, reflecting the status of modern man and his environment. However, some modern dramatists have often fallen back on the classical mythology of ancient Greek tragedies to create a sense of the universality of the human experience. The tragic overtones of classical myths have helped modern dramatists in their attempts to create enduring pieces by evoking the majestic grandeur of the ancient myths and the heroic struggle of man against forces he cannot fight. Myths have continued to appeal to modern playwrights not only for their plots and narrative material but also for their vision and insight into the human experience and the human condition. This paper studies how the reworking of Sophocles' Antigone by Jean Anouilh in his Antigone, written in 1942 at the height of the Second World War and during the German occupation of his country, France, fits his own purpose and his own time. The paper also offers an analysis of the vision in both plays to show how Anouilh has used the classical Antigone freely to produce a modern vision of the dilemma of man when faced with personal and national conflicts.

Keywords: Anouilh, Antigone, drama, Greek tragedy, modern, myth, Sophocles

Procedia PDF Downloads 158
12109 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media

Authors: Jinghui Peng, Shanyu Tang, Jia Li

Abstract:

Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is a pressing demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data are then transferred from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain of original VoIP streams and stego VoIP streams are compared in turn using a t-test, yielding a p-value of 7.5686E-176, which is below the threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
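
A minimal sketch of the detection statistic described above: power spectra of an original and a suspect payload window compared with a two-sample t-test (SciPy). The sniffer and machine-parameter steps are not reproduced, and the signals below are synthetic placeholders.

```python
import numpy as np
from scipy import stats

def power_spectrum(samples):
    """Power spectrum of a 1-D stream of audio payload samples."""
    spectrum = np.fft.rfft(samples - np.mean(samples))
    return np.abs(spectrum) ** 2

# Hypothetical payload windows: a clean VoIP stream and a suspect (possibly stego) stream.
clean = np.random.randn(4096)
suspect = np.random.randn(4096) + 0.05 * np.sign(np.random.randn(4096))  # toy LSB-like noise
t_stat, p_value = stats.ttest_ind(power_spectrum(clean), power_spectrum(suspect),
                                  equal_var=False)
print("p-value:", p_value, "-> stego suspected" if p_value < 0.05 else "-> looks clean")
```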

Keywords: steganalysis, security, Fast Fourier Transform, streaming media

Procedia PDF Downloads 121
12108 Cooperative Agents to Prevent and Mitigate Distributed Denial of Service Attacks of Internet of Things Devices in Transportation Systems

Authors: Borhan Marzougui

Abstract:

The Road and Transport Authority (RTA) is moving ahead with the implementation of the leader's vision of exploring all avenues that may bring better security and safety services to the community. Smart transport means using smart technologies such as the IoT (Internet of Things). This technology continues to affirm its important role in the context of information and transportation systems. In fact, the IoT is a network of Internet-connected objects able to collect and exchange different data using embedded sensors. With the growth of the IoT, Distributed Denial of Service (DDoS) attacks are also growing exponentially. DDoS attacks are a major and real threat to various transportation services. Currently, the defense mechanisms are mainly passive in nature, and there is a need to develop a smart technique to handle them. In fact, new IoT devices are being recruited into botnets that DDoS attackers accumulate for their own purposes. The aim of this paper is to provide a relevant understanding of the dangerous types of DDoS attack related to the IoT and to provide valuable guidance for future IoT security methods. Our methodology is based on the development of a distributed algorithm that coordinates dedicated intelligent and cooperative agents to prevent and mitigate DDoS attacks. The proposed technique ensures preventive action when malicious packets start to be distributed through the connected nodes (the network of IoT devices). In addition, devices such as cameras and radio frequency identification (RFID) readers are connected within the secured network, and the data they generate are analyzed in real time by the intelligent and cooperative agents. The proposed security system is based on a multi-agent system. The obtained results show a significant reduction in the number of infected devices and enhanced capabilities of the different security devices.

Keywords: IoT, DDoS, attacks, botnet, security, agents

Procedia PDF Downloads 118
12107 Glaucoma Detection in Retinal Tomography Using the Vision Transformer

Authors: Sushish Baral, Pratibha Joshi, Yaman Maharjan

Abstract:

Glaucoma is a chronic eye condition that causes irreversible vision loss. Because it can be asymptomatic, early detection and treatment are critical to prevent vision loss. Multiple deep learning algorithms have been used for the identification of glaucoma. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and acquire highly expressive representations, have recently become popular. Convolutional architectures, on the other hand, lack knowledge of long-range dependencies in the image due to their intrinsic inductive biases. These observations motivate this work to investigate the viability of adopting transformer-based network designs for glaucoma detection. Using retinal fundus images of the optic nerve head to develop a viable algorithm to assess the severity of glaucoma necessitates a large number of well-curated images. Initially, data are generated by augmenting the ocular images, which are then pre-processed to make them ready for further processing. The system is trained on the pre-processed images and classifies input images as normal or glaucoma based on the features retrieved during training. The Vision Transformer (ViT) architecture is well suited to this task, as its self-attention mechanism supports structural modeling. Extensive experiments are run on the common dataset, and the results are thoroughly validated and visualized.
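
A minimal sketch of adapting a pretrained ViT backbone to the binary normal/glaucoma task, assuming a recent torchvision; the preprocessing, optimizer, and batch below are illustrative and do not reproduce the paper's training setup.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pretrained ViT-B/16 backbone with a new 2-class head (normal vs. glaucoma).
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = nn.Linear(model.hidden_dim, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# One fine-tuning step on a hypothetical batch of fundus images and labels.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
images, labels = torch.rand(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```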

Keywords: glaucoma, vision transformer, convolutional architectures, retinal fundus images, self-attention, deep learning

Procedia PDF Downloads 168
12106 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution

Authors: Pitigalage Chamath Chandira Peiris

Abstract:

A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution approaches have been developed using both traditional and deep learning methodologies, and deep-learning-based super-resolution methods in particular have received significant interest. At present, the most advanced image restoration approaches are based on convolutional neural networks; only a few efforts have used Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive; however, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. A method for single image super-resolution was developed in this study using an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets with those obtained by other techniques in the domain.
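
The depth-wise convolution in the feed-forward network mentioned above can be sketched as a pointwise expansion, a depth-wise 3x3 convolution for local context, and a pointwise projection; the channel sizes and expansion ratio are assumptions, not the Enhancer configuration.

```python
import torch
import torch.nn as nn

class DepthwiseConvFFN(nn.Module):
    """Transformer feed-forward block with a depth-wise 3x3 conv for local context."""
    def __init__(self, dim=64, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.expand = nn.Conv2d(dim, hidden, kernel_size=1)
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.project = nn.Conv2d(hidden, dim, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x):                        # x: (batch, dim, height, width)
        x = self.act(self.expand(x))
        x = self.act(self.dwconv(x))             # depth-wise: one filter per channel
        return self.project(x)

features = torch.randn(1, 64, 48, 48)            # token map from a window-attention stage
print(DepthwiseConvFFN()(features).shape)        # -> torch.Size([1, 64, 48, 48])
```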

Keywords: single image super resolution, computer vision, vision transformers, image restoration

Procedia PDF Downloads 80
12105 Simulation and Experimental Study on Tensile Force Measurement of PS Tendons Using an Embedded EM Sensor

Authors: ByoungJoon Yu, Junkyeong Kim, Seunghee Park

Abstract:

Estimation of the tensile force in PS tendons is in great demand for monitoring the structural health of PSC girder bridges. Measuring the tensile force of PS tendons inside a PSC girder using conventional methods is hard because of their location. In this paper, tensile force estimation of a PS tendon based on an embedded EM sensor was carried out by measuring the permeability of the PS tendons in the PSC girder. The permeability changes with the induced tensile force through the magneto-elastic effect, and this change leads to a change in the gradient of the B-H curve. An experiment was performed to obtain the signals from the EM sensor using three down-scaled PSC girder models. The permeability of the PS tendons decreased proportionally as the tensile force increased. To verify the experimental results, a simulation of tensile force estimation will be conducted in a further study. Consequently, it is expected that both the experimental results and the simulation results will increase the accuracy of the tensile force estimation, which could be one of the solutions for evaluating the performance of PSC girders.
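
A minimal sketch of the calibration step implied above: because the measured permeability decreases roughly in proportion to the applied tensile force, a linear fit over calibration pairs can invert sensor readings into force estimates. All numbers below are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration pairs from down-scaled girder tests:
# applied tendon force (kN) vs. relative permeability measured by the embedded EM sensor.
force_kn = np.array([0.0, 200.0, 400.0, 600.0, 800.0])
permeability = np.array([1.00, 0.97, 0.94, 0.91, 0.88])    # decreases with force

# Fit the (approximately linear) permeability-force relation and invert it.
slope, intercept = np.polyfit(permeability, force_kn, deg=1)

def estimate_force(mu_measured):
    """Estimate the tendon tensile force (kN) from a permeability reading."""
    return slope * mu_measured + intercept

print(estimate_force(0.925))    # reading between calibration points
```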

Keywords: tensile force estimation, embedded EM sensor, PSC girder, EM sensor simulation, cross section loss

Procedia PDF Downloads 447
12104 Real-Time Compressive Strength Monitoring for NPP Concrete Construction Using an Embedded Piezoelectric Self-Sensing Technique

Authors: Junkyeong Kim, Seunghee Park, Ju-Won Kim, Myung-Sug Cho

Abstract:

Recently, demand for the construction of Nuclear Power Plants (NPP) using high strength concrete (HSC) has increased. However, HSC can be susceptible to brittle fracture if the curing process is inadequate. To prevent unexpected collapse during and after the construction of HSC structures, it is essential to confirm the strength development of HSC during the curing process; however, traditional strength-measuring methods are neither effective nor practical. In this study, a novel method to estimate the strength development of HSC based on electromechanical impedance (EMI) measurements using an embedded piezoelectric sensor is proposed. The EMI of an NPP concrete specimen was tracked to monitor its strength development. In addition, the cross-correlation coefficient was applied in sequence to examine the trend of the impedance variations more quantitatively. The results confirm that the proposed technique can successfully monitor strength development during the curing process of HSC structures.
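
A minimal sketch of the quantitative step mentioned above: tracking the cross-correlation (Pearson) coefficient between the baseline EMI signature and signatures measured as curing proceeds. The frequency sweep and signals below are synthetic placeholders.

```python
import numpy as np

def cc_coefficient(baseline, current):
    """Cross-correlation (Pearson) coefficient between two EMI signatures."""
    return np.corrcoef(baseline, current)[0, 1]

freqs = np.linspace(10e3, 100e3, 500)                            # hypothetical sweep (Hz)
baseline = np.sin(freqs / 7e3) + 0.05 * np.random.randn(500)     # signature at casting

for hours in (12, 24, 48, 72):
    # Placeholder for the signature measured `hours` after casting; as the concrete
    # stiffens, the resonance pattern shifts and the coefficient drifts away from 1.
    current = np.sin(freqs / 7e3 + 0.01 * hours) + 0.05 * np.random.randn(500)
    print(hours, "h:", round(cc_coefficient(baseline, current), 3))
```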

Keywords: concrete curing, embedded piezoelectric sensor, high strength concrete, nuclear power plant, self-sensing impedance

Procedia PDF Downloads 486
12103 Refined Edge Detection Network

Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni

Abstract:

Edge detection is one of the most challenging tasks in computer vision, due to the difficulty of detecting the edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, against various backgrounds. Edge detection is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods such as Sobel and Canny. However, images of complex scenes still represent a challenge for these methods. In addition, the edges detected by existing approaches suffer from unrefined results, and the output images contain many erroneous edges. To overcome this, in this paper, a refined edge detection network (RED-Net) is proposed using the mechanism of residual learning. By maintaining the high resolution of edges during the training process, and conserving the resolution of the edge image at each network stage, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated on the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images.

Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone

Procedia PDF Downloads 71
12102 A Study on Design for Parallel Test Based on Embedded System

Authors: Zheng Sun, Weiwei Cui, Xiaodong Ma, Hongxin Jin, Dongpao Hong, Jinsong Yang, Jingyi Sun

Abstract:

With the improvement in the performance and complexity of modern equipment, the automatic test system (ATS) has become widely used for condition monitoring and fault diagnosis. However, the conventional ATS mainly works in a serial mode and lacks the ability to test several pieces of equipment at the same time, which leads to low test efficiency and ATS redundancy. Especially for a large number of pieces of equipment under test, the conventional ATS cannot meet the requirement of efficient testing. To reduce the support resources and increase test efficiency, we propose a design method for parallel testing based on an embedded system. First, we put forward the general framework of the parallel test system, which contains a central management system (CMS) and several distributed test subsystems (DTS). Then we give a detailed design of the system: for the hardware, we use an embedded architecture to design the DTS; for the software, we use a test program set to improve test adaptability. By deploying the parallel test system, the time to test five devices is now equal to the time it previously took to test one device. Compared with the conventional test system, the proposed test system reduces the size and improves testing efficiency, which is of great significance for putting equipment into operation swiftly. Finally, we take an industrial control system as an example to verify the effectiveness of the proposed method. The results show that the method is reasonable and that the efficiency is improved by up to 500%.
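
A minimal sketch of the scheduling idea: the CMS dispatches test programs to several DTS units concurrently, so five devices finish in roughly the time one device used to take. The real system runs on embedded hardware; the thread pool below only illustrates the parallel dispatch.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test_program(dts_id, device_id):
    """Placeholder for one DTS executing its test program set against one device."""
    time.sleep(1.0)                      # stands in for the actual test duration
    return f"DTS-{dts_id}: device {device_id} PASS"

devices = list(range(5))                 # five devices under test
start = time.time()
with ThreadPoolExecutor(max_workers=len(devices)) as cms:   # CMS dispatches in parallel
    results = list(cms.map(run_test_program, range(len(devices)), devices))
print("\n".join(results))
print(f"elapsed: {time.time() - start:.1f} s (about the time to test a single device)")
```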

Keywords: parallel test, embedded system, automatic test system (ATS), central management system (CMS), distributed test subsystems (DTS)

Procedia PDF Downloads 270
12101 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services

Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme

Abstract:

Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.

Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing

Procedia PDF Downloads 89
12100 Emerging Cyber Threats and Cognitive Vulnerabilities: Cyberterrorism

Authors: Oludare Isaac Abiodun, Esther Omolara Abiodun

Abstract:

The purpose of this paper is to demonstrate that cyberterrorism exists and poses a threat to computer security and national security. Nowadays, people have become heavily dependent upon computers, phones, the Internet, and Internet of Things systems to share information, communicate, conduct searches, etc. However, these network systems are at risk from different sources, both known and unknown. The risks are caused by malicious individuals, groups, organizations, or governments that take advantage of vulnerabilities in computer systems to extract sensitive information from people, organizations, or governments. In doing so, they engage in computer threats, crime, and terrorism, thereby making the use of computers insecure for others. The threat of cyberterrorism takes various forms and ranges from one country to another. These threats include disrupting communications and information, stealing data, destroying data, leaking and breaching data, interfering with messages and networks, and, in some cases, demanding financial rewards for stolen data. Hence, this study identifies many ways in which cyberterrorists utilize the Internet as a tool to advance their malicious mission, which negatively affects computer security and safety. It also identifies causes of disparate anomalous behaviors and the theoretical, ideological, and current forms of the likelihood of cyberterrorism. Therefore, as a countermeasure, this paper proposes the use of previous and current computer security models found in the literature to help counter cyberterrorism.

Keywords: cyberterrorism, computer security, information, internet, terrorism, threat, digital forensic solution

Procedia PDF Downloads 72