Search results for: normalized architectures
631 Glaucoma Detection in Retinal Tomography Using the Vision Transformer
Authors: Sushish Baral, Pratibha Joshi, Yaman Maharjan
Abstract:
Glaucoma is a chronic eye condition that causes irreversible vision loss. Because the disease can be asymptomatic, early detection and treatment are critical to preventing that loss. Multiple deep learning algorithms have been applied to glaucoma identification. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and learn highly expressive representations, have recently become popular. Convolutional architectures, by contrast, are limited in capturing long-range dependencies in the image because of their intrinsic inductive biases. These observations motivate this thesis to investigate the viability of transformer-based network designs for glaucoma detection. Developing a reliable algorithm that assesses the severity of glaucoma from retinal fundus images of the optic nerve head requires a large number of well-curated images. The dataset is therefore first expanded by augmenting the ocular images, which are then pre-processed for further processing. The system is trained on the pre-processed images and classifies each input image as normal or glaucomatous based on the features extracted during training. The Vision Transformer (ViT) architecture is well suited to this task, as its self-attention mechanism supports structural modeling. Extensive experiments are run on a common benchmark dataset, and the results are thoroughly validated and visualized.
Keywords: glaucoma, vision transformer, convolutional architectures, retinal fundus images, self-attention, deep learning
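For readers who want to see the shape of the classification stage described above, the sketch below builds a minimal Vision Transformer for binary (normal vs. glaucoma) prediction in PyTorch. The patch size, embedding width, and depth are illustrative assumptions, not the configuration used in the thesis.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal Vision Transformer for binary (normal vs. glaucoma) classification."""
    def __init__(self, img_size=224, patch=16, dim=192, depth=6, heads=3, classes=2):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Patch embedding: split the fundus image into non-overlapping patches
        self.to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   dim_feedforward=4 * dim,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                                    # x: (B, 3, H, W)
        p = self.to_patches(x).flatten(2).transpose(1, 2)    # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        z = torch.cat([cls, p], dim=1) + self.pos_embed
        z = self.encoder(z)                                  # self-attention over patches
        return self.head(z[:, 0])                            # classify from the [CLS] token

model = TinyViT()
logits = model(torch.randn(4, 3, 224, 224))                  # 4 preprocessed fundus images
print(logits.shape)                                          # torch.Size([4, 2])
```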
630 Research on Architectural Steel Structure Design Based on BIM
Authors: Tianyu Gao
Abstract:
Digital architectures use computer-aided design, programming, simulation, and imaging to create virtual forms and physical structures. Today's customers want to know more about their buildings: they expect an automatic thermostat that learns their behavior, and doors and windows they can operate from a mobile app. The way a building presents itself is therefore closely tied to the customer's experience. With building informationization as its goal, this paper studies steel structure design based on BIM. Taking the Zigan office building in Hangzhou as an example, the work is divided into four parts: the digital design model of the steel structure, the node analysis of the steel structure, the digital production of the steel structure, and its construction. Through the application of BIM software, the architectural design can be coordinated and the building components can be informationized. Design feedback can be obtained at an early stage, and the stability of the construction can be guaranteed. In this way, the entire life cycle of the building can be monitored and customer needs can be met.
Keywords: digital architectures, BIM, steel structure, architectural design
629 Parallel Random Number Generation for the Modern Supercomputer Architectures
Authors: Roman Snytsar
Abstract:
Pseudo-random numbers are widely used in scientific computing, for example in Monte Carlo simulations or quantum-inspired optimization. Requirements for a parallel random number generator running in a modern multi-core vector environment are more stringent than those for sequential generators. As well as passing the usual quality tests, the output of a parallel random number generator must be verifiable and reproducible throughout concurrent execution. We propose a family of vectorized Permuted Congruential Generators. Implementations are available for multiple modern vector computer architectures. Besides demonstrating good single-core performance, the generators scale easily across many processor cores and multiple distributed nodes. We provide performance and parallel speedup analysis and comparisons between the implementations.
Keywords: pseudo-random numbers, quantum optimization, SIMD, parallel computing
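The snippet below is a minimal NumPy sketch of the idea behind a vectorized permuted congruential generator: many independent PCG32-style streams advanced in lockstep, one per SIMD lane or worker. It uses the reference PCG32 constants and output permutation but is not the authors' implementation.

```python
import numpy as np

MULT = np.uint64(6364136223846793005)   # reference PCG32 LCG multiplier

def pcg32_vector(states, increments):
    """Advance a vector of PCG32 streams one step and return their 32-bit outputs."""
    old = states.copy()
    new_states = old * MULT + (increments | np.uint64(1))   # LCG step, wraps mod 2**64
    xorshifted = (((old >> np.uint64(18)) ^ old) >> np.uint64(27)).astype(np.uint32)
    rot = (old >> np.uint64(59)).astype(np.uint32)
    out = (xorshifted >> rot) | (xorshifted << ((np.uint32(32) - rot) & np.uint32(31)))
    return new_states, out

# Eight parallel streams, e.g. one per SIMD lane or worker; seeds chosen arbitrarily.
states = np.arange(1, 9, dtype=np.uint64) * np.uint64(0x9E3779B97F4A7C15)
incs = np.arange(1, 9, dtype=np.uint64)
states, sample = pcg32_vector(states, incs)
print(sample)   # eight independent, reproducible 32-bit outputs
```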
628 Performance, Scalability and Reliability Engineering: Shift Left and Shift Right Approach
Authors: Jyothirmayee Pola
Abstract:
Ideally, test-driven development (TDD), agile, or any other process should be able to define and implement the performance, scalability, and reliability (PSR) of a product with a high quality of service (QoS) and should be able to fix any PSR issues at a lower cost before the product reaches production. Most PSR test strategies for new product introduction (NPI) rely on assumptions about production load requirements, and these assumptions are never accurate. New product enhancement (NPE) likewise relies on assumptions for new features under development, whereas the workload distribution for older features can be derived by analyzing production transactions. This paper discusses how to shift PSR left, toward the design phase of the release management process, to achieve better QoS with respect to PSR for any product under development. It also explains the return on investment for future customer onboarding, both for Service-Oriented Architectures (SOA) and for microservices architectures, and how to define PSR requirements.
Keywords: component PSR, performance engineering, performance tuning, reliability, return on investment, scalability, system PSR
627 Analyzing Land Use Change and Its Impacts on the Urban Environment in a Fast-Growing Metropolitan City of Pakistan
Authors: Muhammad Nasar-u-Minallah, Dagmar Haase, Salman Qureshi
Abstract:
In a rapidly growing developing country, cities are becoming more urbanized, leading to modifications in the urban climate. Rapid urbanization, especially unplanned urban land expansion, together with climate change, has a profound impact on urban settlements and the urban thermal environment. Cities, particularly in Pakistan, face remarkable environmental problems and uneven development, and it is therefore important to strengthen the investigation of the urban environmental pressure brought by land-use changes and urbanization. The present study investigated the long-term modification of the urban environment by urbanization using the spatio-temporal dynamics of land-use change, urban population data, urban heat islands, monthly maximum and minimum temperatures over thirty years, multiple remote sensing images, and spectral indices such as the Normalized Difference Built-up Index and the Normalized Difference Vegetation Index. The results indicate rapid growth in the urban built-up area and a reduction in vegetation cover over the last three decades (1990-2020). A positive correlation between urban heat islands and the Normalized Difference Built-up Index, and a negative correlation between urban heat islands and the Normalized Difference Vegetation Index, clearly show how urbanization is affecting the local environment. The increase in air and land surface temperatures is dangerous to human comfort. Practical approaches, such as increasing urban green spaces and proper planning of cities, are suggested to help prevent further modification of the urban thermal environment by urbanization. The findings of this work are thus important for multi-sectoral use in the cities of Pakistan. By taking these results into consideration, urban planners, decision-makers, and local government can formulate policies to mitigate the impacts of urban land use on the urban thermal environment in Pakistan.
Keywords: land use, urban environment, local climate, Lahore
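A minimal sketch of the two spectral indices used above and their Pearson correlation with LST is given below; the band arrays stand in for atmospherically corrected Landsat reflectance rasters, and the values are synthetic.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-10)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR)."""
    return (swir - nir) / (swir + nir + 1e-10)

# Illustrative reflectance rasters (in practice: atmospherically corrected Landsat bands).
rng = np.random.default_rng(0)
nir, red, swir = (rng.uniform(0.05, 0.6, (100, 100)) for _ in range(3))
lst = rng.uniform(25, 45, (100, 100))              # land surface temperature in deg C

v, b = ndvi(nir, red), ndbi(swir, nir)
r_vi = np.corrcoef(lst.ravel(), v.ravel())[0, 1]   # expected negative over real scenes
r_bi = np.corrcoef(lst.ravel(), b.ravel())[0, 1]   # expected positive over real scenes
print(f"Pearson r (LST, NDVI) = {r_vi:.3f}, (LST, NDBI) = {r_bi:.3f}")
```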
626 Estimation of Shear Wave Velocity from Cone Penetration Test for Structured Busan Clays
Authors: Vinod K. Singh, S. G. Chung
Abstract:
The degree of structuration of Busan clays at the mouth of the Nakdong River was highly influenced by the depositional environment, i.e., the flow of the river stream and marine regression and transgression during the sedimentation process. As a result, the geotechnical properties also vary with depth as the degree of structuration changes. Thus, in-situ tests such as the cone penetration test (CPT) could not properly predict various geotechnical properties using conventional empirical methods. In this paper, the shear wave velocity (Vs) was measured in the field using the seismic dilatometer. Vs was also measured in the laboratory on high-quality undisturbed and remolded samples using the bender element method to evaluate the degree of structuration. The degree of structuration was quantitatively defined by the modulus ratio of undisturbed to remolded soil samples, which was found to correlate well with the normalized void ratio (e0/eL), where eL is the void ratio at the liquid limit. It is revealed that an empirical method based on laboratory results incorporating e0/eL can predict field Vs more accurately. A CPT-based empirical method was then developed to estimate the shear wave velocity, taking the effect of structuration into consideration. The developed method was found to predict shear wave velocity reasonably well for Busan clays.
Keywords: level of structuration, normalized modulus, normalized void ratio, shear wave velocity, site characterization
625 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets
Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe
Abstract:
Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process, and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research, leveraging existing high performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel micro-service architecture and Big Data technologies. The system serves to demonstrate the applicability of micro-service architectures to the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
Keywords: biomedical research, genomics, information systems, software
624 A Hybrid Normalized Gradient Correlation Based Thermal Image Registration for Morphoea
Authors: L. I. Izhar, T. Stathaki, K. Howell
Abstract:
The analysis and interpretation of thermograms has been increasingly employed in the diagnosis and monitoring of diseases thanks to its non-invasive, non-harmful nature and low cost. In this paper, a novel system is proposed to improve the diagnosis and monitoring of the morphoea skin disorder based on integration with the published lines of Blaschko. In the proposed system, image registration based on global and local registration methods is essential. For the global registration approach, this paper presents a modified normalized gradient cross-correlation (NGC) method to reduce large geometrical differences between two multimodal images represented by smooth gray edge maps. This method is improved further by incorporating an iterative normalized cross-correlation coefficient (NCC) method. It is found that by replacing the final registration part of the NGC method, where translational differences are solved in the spatial Fourier domain, with the NCC method performed in the spatial domain, the performance and robustness of the NGC method can be greatly improved. It is shown in this paper that the hybrid NGC method not only outperforms the phase correlation (PC) method but also reduces the misregistration due to translation suffered by the modified NGC method alone for thermograms with an ill-defined jawline. This also demonstrates that by using the gradients of the gray edge maps and a hybrid technique, the performance of PC-based image registration can be greatly improved.
Keywords: Blaschko’s lines, image registration, morphoea, thermal imaging
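The spatial-domain NCC step that resolves residual translation between two gray edge maps can be sketched as an exhaustive search over integer shifts, as below; the search window and synthetic inputs are illustrative, not the paper's data.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def best_translation(reference, moving, max_shift=10):
    """Exhaustive NCC search over integer translations within +/- max_shift pixels."""
    best = (-2.0, (0, 0))
    h, w = reference.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping regions of the two edge maps under the candidate shift (dy, dx)
            r = reference[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            m = moving[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            score = ncc(r, m)
            if score > best[0]:
                best = (score, (dy, dx))
    return best

# Synthetic check: a gray edge map shifted by (3, -5) should be recovered.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
mov = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(best_translation(ref, mov))   # best score at translation (3, -5)
```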
623 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions
Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez
Abstract:
In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor proposed by Korzeniowski and Widmer reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 -anechoic-, 1, 2, and 3 s, approximately), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected, compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval
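As a rough illustration of the quarter-tone front end, the sketch below lays triangular filters on quarter-tone (24 steps per octave) centre frequencies over an FFT magnitude spectrum; the local-normalization stage of the proposed LNQT filters is not reproduced, and the sampling rate, FFT size, and filter count are assumptions.

```python
import numpy as np

def quarter_tone_filterbank(sr=44100, n_fft=8192, fmin=65.41, n_filters=168):
    """Triangular filters centred on quarter-tone (24 steps/octave) frequencies."""
    centres = fmin * 2.0 ** (np.arange(n_filters + 2) / 24.0)   # one extra edge on each side
    fft_freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    bank = np.zeros((n_filters, len(fft_freqs)))
    for i in range(n_filters):
        lo, ctr, hi = centres[i], centres[i + 1], centres[i + 2]
        rise = (fft_freqs - lo) / (ctr - lo)
        fall = (hi - fft_freqs) / (hi - ctr)
        bank[i] = np.clip(np.minimum(rise, fall), 0.0, None)
    return bank

fb = quarter_tone_filterbank()
spectrum = np.abs(np.fft.rfft(np.random.randn(8192)))   # stand-in magnitude spectrum
quarter_tone_energies = fb @ spectrum                    # per-frame input to the chord DNN
print(fb.shape, quarter_tone_energies.shape)
```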
622 Evolution under Length Constraints for Convolutional Neural Networks Architecture Design
Authors: Ousmane Youme, Jean Marie Dembele, Eugene Ezin, Christophe Cambier
Abstract:
In recent years, convolutional neural network (CNN) architectures designed by evolutionary algorithms have proven to be competitive with handcrafted architectures designed by experts. However, these algorithms need a lot of computational power, which is beyond the capabilities of most researchers and engineers. To overcome this problem, we propose evolution of architectures under length constraints. It consists of two algorithms: a length-search strategy to find an optimal space and an architecture-search strategy based on a genetic algorithm to find the best individual in that optimal space. Our algorithms drastically reduce resource costs while keeping good performance. On the Cifar-10 dataset, our framework presents outstanding performance, with an error rate of 5.12% and only 4.6 GPU-days to converge to the optimal individual, 22 GPU-days less than the lowest-cost automatic evolutionary algorithm in the peer competition.
Keywords: CNN architecture, genetic algorithm, evolution algorithm, length constraints
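The two-level search can be sketched as an outer loop over candidate architecture lengths and an inner genetic algorithm restricted to that length, as below; the layer vocabulary, operators, and placeholder fitness are illustrative assumptions (in the paper the fitness is the trained CNN's validation accuracy).

```python
import random

LAYER_CHOICES = ["conv3x3-64", "conv3x3-128", "conv5x5-64", "pool", "skip"]

def random_genome(length):
    return [random.choice(LAYER_CHOICES) for _ in range(length)]

def mutate(genome, rate=0.2):
    return [random.choice(LAYER_CHOICES) if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def fitness(genome):
    # Placeholder: in the paper this would be the validation accuracy of the trained CNN.
    return sum(1.0 for g in genome if g.startswith("conv")) / len(genome)

def evolve(length, pop_size=20, generations=30):
    """Search for the best individual within the space fixed by the length constraint."""
    pop = [random_genome(length) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Outer loop (length-search strategy): try a few candidate lengths, keep the best.
best = max((evolve(length) for length in (8, 12, 16)), key=fitness)
print(len(best), best)
```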
621 Blockchain’s Feasibility in Military Data Networks
Authors: Brenden M. Shutt, Lubjana Beshaj, Paul L. Goethals, Ambrose Kam
Abstract:
Communication security is of particular interest to military data networks. A relatively novel approach to network security is blockchain, a cryptographically secured distributed ledger with a decentralized consensus mechanism for data transaction processing. Recent advances in blockchain technology have proposed new techniques for both data validation and trust management, as well as different frameworks for managing dataflow. The purpose of this work is to test the feasibility of different blockchain architectures as applied to military command and control networks. Various architectures are tested through discrete-event simulation, and feasibility is determined by a blockchain design’s ability to maintain long-term stable performance at industry standards of throughput, network latency, and security. This work proposes a consortium blockchain architecture with a computationally inexpensive consensus mechanism, one that leverages a Proof-of-Identity (PoI) concept and a reputation management mechanism.
Keywords: blockchain, consensus mechanism, discrete-event simulation, fog computing
620 FMR1 Gene Carrier Screening for Premature Ovarian Insufficiency in Females: An Indian Scenario
Authors: Sarita Agarwal, Deepika Delsa Dean
Abstract:
Like the task of transferring photo images to artistic images, image-to-image translation aims to translate data into imitated data belonging to the target domain. Neural Style Transfer and CycleGAN are two well-known deep learning architectures used for photo-to-art image transfer. However, studies involving these two models concentrate on one-to-one domain translation, not one-to-many domain translation. Our study investigates deep learning architectures that can be controlled to yield translation into multiple artistic styles simply by adding a conditional vector. We have expanded CycleGAN and constructed a Conditional CycleGAN for translation into five categories. Our study found that an architecture inserting the conditional vector into the middle layer of the generator could output multiple artistic images.
Keywords: genetic counseling, FMR1 gene, fragile x-associated primary ovarian insufficiency, premutation
619 Clothes Identification Using Inception ResNet V2 and MobileNet V2
Authors: Subodh Chandra Shakya, Badal Shrestha, Suni Thapa, Ashutosh Chauhan, Saugat Adhikari
Abstract:
To tackle the problem of clothes identification, we used different architectures of convolutional neural networks. Among them, the outcomes from Inception ResNet V2 and MobileNet V2 seemed promising. On comparison of the metrics, we observed that Inception ResNet V2 slightly outperforms MobileNet V2 for this purpose. This paper therefore proposes a clothes identifier based on Inception ResNet V2 and also presents the comparison between the outcomes of Inception ResNet V2 and MobileNet V2. The document contains the results and findings of the research that we performed on the DeepFashion dataset. To improve the dataset, we used different image preprocessing techniques such as image shearing, image rotation, and denoising. The whole experiment was conducted with the intention of testing the efficiency of convolutional neural networks on clothes identification so that we could develop a reliable system that is good enough at identifying the clothes worn by users. The whole system can be integrated with some kind of recommendation system.
Keywords: inception ResNet, convolutional neural net, deep learning, confusion matrix, data augmentation, data preprocessing
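A minimal transfer-learning sketch comparing the two backbones with tf.keras is shown below; the input size, number of clothing classes, and frozen-backbone setup are illustrative assumptions rather than the exact DeepFashion configuration.

```python
import tensorflow as tf

NUM_CLASSES = 20          # illustrative number of clothing categories
IMG_SIZE = (224, 224, 3)

def build_classifier(backbone_name):
    """Frozen ImageNet backbone + small classification head."""
    backbones = {
        "inception_resnet_v2": tf.keras.applications.InceptionResNetV2,
        "mobilenet_v2": tf.keras.applications.MobileNetV2,
    }
    base = backbones[backbone_name](include_top=False, weights="imagenet",
                                    input_shape=IMG_SIZE, pooling="avg")
    base.trainable = False                      # fine-tune only the head at first
    inputs = tf.keras.Input(shape=IMG_SIZE)
    x = base(inputs, training=False)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Same data, two backbones: compare validation accuracy / confusion matrices afterwards.
for name in ("inception_resnet_v2", "mobilenet_v2"):
    model = build_classifier(name)
    print(name, f"{model.count_params():,} parameters")
```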
618 The Effect on Lead Times When Normalizing a Supply Chain Process
Authors: Bassam Istanbouli
Abstract:
Organizations operate in a very competitive and dynamic environment that is constantly changing. In order to achieve a high level of service, the products and processes of these organizations need to be flexible and evolvable. If supply chains are not modular and well designed, changes can bring combinatorial effects to most areas of a company, from its management, finance, documentation, and logistics to its information structure. Applying the normalized systems concept to segments of the supply chain may help in reducing those ripple effects, but it may also increase lead times. Lead times are important and can become a decisive element in gaining customers. Industries are always under pressure to provide good-quality products, at competitive prices, when and how the customer wants them. Most of the time, customers want their orders now, if not yesterday. The above concept will be proven by examining lead times in a manufacturing example before and after applying the normalized systems concept to that segment of the chain. We then show that although we can minimize the combinatorial effects when changes occur, the lead times will be increased.
Keywords: supply chain, lead time, normalization, modular
617 A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models
Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti
Abstract:
This work is the first step in a rather wide research activity, carried out in collaboration with the Euro-Mediterranean Center on Climate Change, aimed at introducing scalable approaches in ocean circulation models. We discuss the design and implementation of a parallel algorithm for solving the variational data assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model previously proposed by the authors, which uses a domain decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped onto GPU architectures.
Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm
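To fix ideas, the sketch below writes the standard 3DVar cost function and splits a 1-D state into subdomains whose local costs are computed independently, which is the part of the DD-DA formulation that maps naturally onto separate GPUs; the operators, sizes, and observation layout are toy assumptions, not the model used in the paper.

```python
import numpy as np

def cost_3dvar(x, xb, y, H, B_inv, R_inv):
    """Standard 3DVar cost: background misfit plus observation misfit."""
    db, do = x - xb, H @ x - y
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

rng = np.random.default_rng(0)
n, m, n_sub = 64, 16, 4                          # state size, observations, subdomains
xb = rng.normal(size=n)                          # background field (model forecast)
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, n // m)] = 1.0   # each observation samples one grid point
y = H @ xb + rng.normal(scale=0.1, size=m)
B_inv, R_inv = np.eye(n), 100.0 * np.eye(m)

def decomposed_cost(x):
    """Sum of local 3DVar costs over subdomains; each term is independent work
    (conceptually, one GPU or one CUDA kernel launch per subdomain)."""
    size, total = n // n_sub, 0.0
    for s in range(n_sub):
        cols = np.arange(s * size, (s + 1) * size)
        rows = np.nonzero(H[:, cols].any(axis=1))[0]        # observations inside the subdomain
        total += cost_3dvar(x[cols], xb[cols], y[rows],
                            H[np.ix_(rows, cols)],
                            B_inv[np.ix_(cols, cols)], R_inv[np.ix_(rows, rows)])
    return total

x = xb + 0.05 * rng.normal(size=n)
# Identical when B is block diagonal and each observation falls inside a single subdomain.
print(cost_3dvar(x, xb, y, H, B_inv, R_inv), decomposed_cost(x))
```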
616 Performance Evaluation of Task Scheduling Algorithm on LCQ Network
Authors: Zaki Ahmad Khan, Jamshed Siddiqui, Abdus Samad
Abstract:
The scheduling and mapping of tasks on a set of processors is considered a critical problem in parallel and distributed computing systems. This paper deals with the problem of dynamic scheduling on a special type of multiprocessor architecture known as the Linear Crossed Cube (LCQ) network. This multiprocessor is a hybrid network that combines the features of both linear and cube-based architectures. Two standard dynamic scheduling schemes, namely Minimum Distance Scheduling (MDS) and Two Round Scheduling (TRS), are implemented on the LCQ network. Parallel tasks are mapped, and the load imbalance is evaluated on different sets of processors in the LCQ network. The simulation results are evaluated, and a thorough analysis of the results is carried out to obtain the best solution for the given network in terms of the remaining load imbalance and execution time. Other performance metrics, such as speedup and efficiency, are also evaluated for the given dynamic algorithms.
Keywords: dynamic algorithm, load imbalance, mapping, task scheduling
615 Assessing the Effect of Urban Growth on Land Surface Temperature: A Case Study of Conakry Guinea
Authors: Arafan Traore, Teiji Watanabe
Abstract:
Conakry, the capital city of the Republic of Guinea, has experienced rapid urban expansion and population increase in the last two decades, which has resulted in remarkable local weather and climate change, raised energy demand and pollution, and threatened social, economic, and environmental development. In this study, the spatiotemporal variation of the land surface temperature (LST) is retrieved to characterize the effect of urban growth on the thermal environment and to quantify its relationship with two biophysical indices, the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Built-up Index (NDBI). Landsat TM and OLI/TIRS data acquired in 1986, 2000, and 2016, respectively, were used for LST retrieval and land use/cover change analysis. A quantitative analysis based on the integration of remote sensing and a geographic information system (GIS) revealed an important increase in the average LST, from 25.21°C in 1986 to 27.06°C in 2000 and 29.34°C in 2016, an average gain in surface temperature of 4.13°C over the 30-year study period. Additionally, an analysis using the Pearson correlation (r) between LST and the biophysical indices revealed a negative relationship between LST and NDVI and a strong positive relationship between LST and NDBI. This implies that an increase in the NDVI value can reduce LST intensity, whereas an increase in the NDBI value may strengthen LST intensity in the study area. Although Landsat data were found efficient in assessing the thermal environment in Conakry, the method needs to be refined with in-situ measurements of LST in future studies. The results of this study may assist urban planners, scientists, and policy makers concerned about climate variability in making decisions that will enhance sustainable environmental practices in Conakry.
Keywords: Conakry, land surface temperature, urban heat island, geographic information system, remote sensing, land use/cover change
614 Multi-Classification Deep Learning Model for Diagnosing Different Chest Diseases
Authors: Bandhan Dey, Muhsina Bintoon Yiasha, Gulam Sulaman Choudhury
Abstract:
Chest diseases are among the most common health problems in everyday life, and many distinct chest diseases are known. Diagnosing them correctly plays a vital role in the treatment process. Many methods have been developed explicitly for different chest diseases, but the most common approach for diagnosing these diseases is the X-ray. In this paper, we propose a multi-classification deep learning model for diagnosing COVID-19, lung cancer, pneumonia, tuberculosis, and atelectasis from chest X-rays. In the present work, we used the transfer learning method for better accuracy and a faster training phase. The performance of three architectures is considered: InceptionV3, VGG-16, and VGG-19. We evaluated these deep learning architectures using public digital chest X-ray datasets with six classes (i.e., COVID-19, lung cancer, pneumonia, tuberculosis, atelectasis, and normal). The experiments are conducted on six-class classification, and we found that VGG-16 outperforms the other proposed models with an accuracy of 95%.
Keywords: deep learning, image classification, X-ray images, Tensorflow, Keras, chest diseases, convolutional neural networks, multi-classification
613 Studying Relationship between Local Geometry of Decision Boundary with Network Complexity for Robustness Analysis with Adversarial Perturbations
Authors: Tushar K. Routh
Abstract:
If inputs are engineered in certain ways, they can degrade deep neural networks' (DNN) performance by inducing misclassifications, a phenomenon well known as adversarial attacks, which exposes the networks' vulnerability. Recent studies have unfolded the relationship between the vulnerability of such networks and their complexity. In this paper, the distinctive influence of additional convolutional layers on the decision boundaries of several DNN architectures was investigated. To engineer inputs from widely known image datasets like MNIST, Fashion MNIST, and Cifar 10, we used the One Step Spectral Attack (OSSA) and Fast Gradient Method (FGM) techniques. The effects of adding layers on the robustness of the architectures were then analyzed. For reasoning, the separation width from linear class partitions and the local geometry (curvature) near the decision boundary were examined. The results reveal that model complexity plays a significant role in adjusting relative distances from margins, as well as the local features of decision boundaries, which impact robustness.
Keywords: DNN robustness, decision boundary, local curvature, network complexity
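The Fast Gradient Method mentioned above perturbs an input along the sign of the loss gradient; a minimal PyTorch sketch follows, with a toy model and epsilon chosen only for illustration.

```python
import torch
import torch.nn as nn

def fast_gradient_attack(model, x, label, eps=0.1):
    """One-step Fast Gradient Method: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()       # keep pixels in the valid range

# Toy CNN standing in for the architectures whose decision boundaries are studied.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 28 * 28, 10)
)
x = torch.rand(16, 1, 28, 28)                   # e.g. a batch of MNIST-sized images
label = torch.randint(0, 10, (16,))
x_adv = fast_gradient_attack(model, x, label)
clean = model(x).argmax(1)
print("label flips under FGM:", (model(x_adv).argmax(1) != clean).sum().item())
```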
612 A New Sign Subband Adaptive Filter Based on Dynamic Selection of Subbands
Authors: Mohammad Shams Esfand Abadi, Mehrdad Zalaghi, Reza ebrahimpour
Abstract:
In this paper, we propose a sign adaptive filter algorithm with the ability to dynamically select subband filters, which leads to low computational complexity compared with the conventional sign subband adaptive filter (SSAF) algorithm. The dynamic selection criterion is based on the largest reduction of the mean square deviation at each adaptation. We demonstrate that this simple proposed algorithm has the same performance as the conventional SSAF and converges somewhat faster. In the presence of impulsive interferences, the proposed algorithm is as robust as the conventional SSAF, and both outperform the conventional normalized subband adaptive filter (NSAF) algorithm. Therefore, it is preferred for environments with impulsive interferences. Simulation results are presented to verify that these considerations are indeed achieved.
Keywords: acoustic echo cancellation (AEC), normalized subband adaptive filter (NSAF), dynamic selection subband adaptive filter (DS-NSAF), sign subband adaptive filter (SSAF), impulsive noise, robust filtering
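The update at the core of the sign family can be sketched as below for the fullband case; the subband decomposition and the dynamic selection by largest mean-square-deviation reduction proposed in the paper are not reproduced, and the step size and test signal are assumptions.

```python
import numpy as np

def sign_error_adaptive_filter(x, d, taps=16, mu=0.05, eps=1e-6):
    """Fullband sign-error normalized adaptive filter: the robust update that the
    sign subband family applies per subband after the analysis filterbank."""
    w = np.zeros(taps)
    e = np.zeros_like(d)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # regressor, most recent sample first
        e[n] = d[n] - w @ u
        # The sign of the error bounds each update, giving robustness to impulsive noise.
        w += mu * np.sign(e[n]) * u / (u @ u + eps)
    return w, e

rng = np.random.default_rng(0)
x = rng.normal(size=20000)                               # excitation (e.g. far-end AEC signal)
h = np.exp(-np.arange(16) / 4.0) * rng.normal(size=16)   # unknown system (echo path)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_t(df=2, size=len(x))   # impulsive noise
w, e = sign_error_adaptive_filter(x, d)
print("misalignment:", 20 * np.log10(np.linalg.norm(w - h) / np.linalg.norm(h)), "dB")
```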
611 Smartphone Based Wound Assessment System for Diabetes Patients
Authors: Vaibhav V. Dixit, Shubham Ajay Karwa
Abstract:
Diabetic foot ulcers represent a critical medical problem. Currently, clinicians and nurses mainly base their wound assessment on visual examination of wound size and healing status, while the patients themselves rarely have a chance to play an active role. Hence, a quantitative and practical assessment method that enables patients and their caregivers to take a more active role in daily wound care could potentially accelerate wound healing, save travel costs, and reduce healthcare costs. Considering the prevalence of smartphones with high-resolution digital cameras, assessing wounds by analyzing images of chronic foot ulcers is an attractive option. In this paper, we propose a novel wound image analysis system implemented using feature extraction and color segmentation. The normalized minimum distance classifier is used for classifying the output.
Keywords: diabetic, Gabor wavelet, normalized minimum distance classifier, quantifiable parameters
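A minimal sketch of a normalized minimum distance classifier, i.e., nearest class mean after feature normalization, is shown below; the feature values and the two wound-tissue labels are hypothetical placeholders, not the system's actual Gabor-based features.

```python
import numpy as np

class NormalizedMinimumDistanceClassifier:
    """Assign each feature vector to the class whose mean is nearest
    after z-score normalization of the features."""
    def fit(self, X, y):
        self.mu = X.mean(axis=0)
        self.sigma = X.std(axis=0) + 1e-9
        Z = (X - self.mu) / self.sigma
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([Z[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        Z = (X - self.mu) / self.sigma
        d = np.linalg.norm(Z[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Illustrative wound features (e.g. Gabor-wavelet texture energies + color statistics).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (50, 6)), rng.normal(2.0, 1.0, (50, 6))])
y = np.array([0] * 50 + [1] * 50)    # hypothetical tissue labels, e.g. 0 = granulation, 1 = necrotic
clf = NormalizedMinimumDistanceClassifier().fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```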
610 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance or changes in the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness, and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms are modelled and their compression simulated in Wisetex with varying architectures of binder style, pick density, thickness, and tow size. These architectures have then been woven, and samples have been dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. Data from dry fabric compression and composite samples were then compared alongside the Wisetex models to determine the accuracy of the prediction and to identify architecture parameters that can affect preform compressibility and stability. Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to the preform architecture, where orthogonal binders experienced the highest level of deformation but the highest overall stability under compression, while layer-to-layer binders showed a reduction in fibre crimp of the binder. In general, the simulations compared reasonably with the experimental results; however, deviation is evident due to assumptions present within the modelled results.
Keywords: 3D woven composites, compression, preforms, textile composites
609 Stability and Performance Improvement of a Two-Degree-of-Freedom Robot under Interaction Using the Impedance Control
Authors: Seyed Reza Mirdehghan, Mohammad Reza Haeri Yazdi
Abstract:
In this paper, the stability and performance of a two-degree-of-freedom robot interacting with an unknown environment have been investigated. The time for the robot to return to its initial position after an interaction and the initial resistance of the robot against the impact must both be reduced; consequently, the torque applied by the motor is reduced. Impedance control is an appropriate method for robot control under these conditions. The stability of the robot at the moment of interaction was formulated as a robust stability problem. The dynamics of the unknown environment were modeled as a weighting function, and the stability of the robot during interaction with the environment was investigated using robust control concepts. To improve the performance of the system, a force controller was designed that reduces the normalized impedance after interaction. The resistance of the robot was treated as a normalized cost function, and its value was 0.593. The results show a reduction of the robot's resistance against impact and a reduction of the convergence time to below one second.
Keywords: impedance control, control system, robots, interaction
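Reducing the problem to a single Cartesian direction, the target impedance behaviour during contact can be illustrated as below; the inertia, damping, and stiffness values and the impact profile are assumptions chosen only to show how the impedance parameters shape deflection and recovery time.

```python
import numpy as np

def impedance_response(M=2.0, B=40.0, K=400.0, f_peak=30.0, dt=1e-3, t_end=2.0):
    """Simulate M*x'' + B*x' + K*x = F_ext(t) for a short impact at the end-effector."""
    steps = int(t_end / dt)
    x, v = 0.0, 0.0
    trajectory = np.zeros(steps)
    for i in range(steps):
        t = i * dt
        f_ext = f_peak if 0.10 <= t < 0.15 else 0.0     # 50 ms contact force
        a = (f_ext - B * v - K * x) / M                  # target impedance dynamics
        v += a * dt
        x += v * dt
        trajectory[i] = x
    return trajectory

x_soft = impedance_response(K=200.0)    # compliant: larger deflection, gentler contact
x_stiff = impedance_response(K=800.0)   # stiff: smaller deflection, faster return to start
print(f"peak deflection  soft: {x_soft.max():.4f} m   stiff: {x_stiff.max():.4f} m")
print(f"final position   soft: {x_soft[-1]:.2e} m  stiff: {x_stiff[-1]:.2e} m")
```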
608 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes exist for superconductivity and for long-range quantum entanglement. However, these models are still aimed mainly at specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane, i.e., the normalized momentum spaces. A point on the complex plane represents a normalized state of particle momentum observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i [-t, t], where σ and t are real numbers, and the [-t, t] interval shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which establish the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.
Keywords: blaschke, lorentz transformation, complex variables, continuous, discrete, canonical, classification
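For orientation, the classical Blaschke factor on the unit disk, which realizes a Lorentz-type (velocity-composition) transformation of normalized momenta, is written below; the extended Blaschke functions of the paper are a nonlinear generalization of this map whose exact form is not reproduced here.

```latex
B_a(z) \;=\; \frac{z - a}{1 - \bar{a}\, z}, \qquad |a| < 1, \quad |z| \le 1 .
```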
607 Survey of Communication Technologies for IoT Deployments in Developing Regions
Authors: Namugenyi Ephrance Eunice, Julianne Sansa Otim, Marco Zennaro, Stephen D. Wolthusen
Abstract:
The Internet of Things (IoT) is a network of connected data-processing devices, mechanical and digital machines, objects, animals, or people that can send data across a network without requiring human-to-human or human-to-computer interaction. Each component has sensors that can pick up specific phenomena, as well as processing software and other technologies that can connect to and communicate with other systems and/or devices over the Internet or other communication networks and exchange data with them. IoT is increasingly being used in fields other than consumer electronics, such as public safety, emergency response, industrial automation, autonomous vehicles, the Internet of Medical Things (IoMT), and general environmental monitoring. Consumer IoT applications, like smart home gadgets and wearables, are also becoming more prevalent. This paper presents the main IoT deployment areas for environmental monitoring in developing regions and the backhaul options suitable for them. A detailed review of each of the papers selected for the study is included in Section III of this document. The study includes an overview of existing IoT deployments and the underlying communication architectures, protocols, and technologies that support them. This overview shows that Low Power Wide Area Networks (LPWANs), as summarized in Table 1, are very well suited for environmental monitoring architectures designed for remote locations. LoRa technology, particularly the LoRaWAN protocol, has an advantage over other technologies due to its low power consumption, adaptability, and suitable communication range. The prevailing challenges of the different architectures are discussed and summarized in Table 3 of Section IV, where the main problem is the obstruction of communication paths by buildings, trees, hills, etc.
Keywords: communication technologies, environmental monitoring, Internet of Things, IoT deployment challenges
606 Experimental Analysis of Advanced Multi-Axial Preforms Conformability to Complex Contours
Authors: Andrew Hardman, Alistair T. McIlhagger, Edward Archer
Abstract:
A degree of research has been undertaken to determine the behaviour of 3D textile preforms under compression, with direct comparison to their 2D counterparts. Multiscale simulations have been developed to try to accurately analyse the post-consolidation behaviour of varying architectures. However, further understanding is required to experimentally identify the mechanisms and deformations that arise when conforming to a complex contour. Due to the complexity of 3D textile preforms, yarn behaviour over a complex contour is assessed through consolidation by means of vacuum-assisted resin transfer moulding (VARTM), and the resulting mechanisms are investigated by micrograph analysis. Varying architectures, with known areal densities, pick density, and thicknesses, are assessed for a cohesive study. The resulting performance of each is assessed qualitatively as well as quantitatively from a material perspective, in terms of the change in the representative unit cell (RVE) across the curved-beam contour, in crimp percentage, tow angle, resin-rich areas, and binder distortion. A novel textile is developed from the resulting analysis to overcome the observed deformations.
Keywords: conformability, compression, binder architecture, 3D weaving, textile preform
605 Integrating Distributed Architectures in Highly Modular Reinforcement Learning Libraries
Authors: Albert Bou, Sebastian Dittert, Gianni de Fabritiis
Abstract:
Advancing reinforcement learning (RL) requires tools that are flexible enough to easily prototype new methods while avoiding impractically slow experimental turnaround times. To match the first requirement, the most popular RL libraries advocate highly modular agent composability, which facilitates experimentation and development. To solve challenging environments within reasonable time frames, scaling RL to large sampling and computing resources has proved a successful strategy. However, this capability has so far been difficult to combine with modularity. In this work, we explore design choices that allow agent composability both at a local and at a distributed level of execution. We propose a versatile approach that allows the definition of RL agents at different scales through independent, reusable components. We demonstrate experimentally that our design choices allow us to reproduce classical benchmarks, explore multiple distributed architectures, and solve novel and complex environments while giving full control to the user over the agent definition and training scheme. We believe this work can provide useful insights for the next generation of RL libraries.
Keywords: deep reinforcement learning, Python, PyTorch, distributed training, modularity, library
604 How to Evaluate Resting and Walking Energy Expenditures of Individuals with Different Body Mass Index
Authors: Zeynep Altinkaya, Ugur Dal, Figen Dag, Dilan D. Koyuncu, Merve Turkegun
Abstract:
Obesity is defined as abnormal fat-tissue accumulation resulting from an imbalance between energy intake and expenditure. Since 50-70% of the daily energy expenditure of sedentary individuals is consumed as resting energy expenditure (REE), REE has an important place in the evaluation of new methods for obesity treatment. It is also known that walking is a prevalent activity in the prevention of obesity. The primary purpose of this study is to evaluate and compare the resting and walking energy expenditures of individuals with different body mass indices (BMI). In this research, four groups were formed as underweight (BMI < 18.5 kg/m2), normal (BMI = 18.5-24.9 kg/m2), overweight (BMI = 25-29.9 kg/m2), and obese (BMI ≥ 30 kg/m2) according to the BMI of the individuals. Sixty-four healthy young adults (8 men and 8 women per group, age 18-30 years) with no known gait disabilities were recruited for this study. The body compositions of all participants were measured via the bioelectric impedance analysis method. The energy expenditure of individuals was measured with the indirect calorimetry method, with inspired and expired gas samples collected breath-by-breath through a special facemask. The preferred walking speed (PWS) of each subject was determined using infrared sensors placed at the 2nd and 12th meters of a 14 m walkway. REE was measured for 15 min while subjects were lying down, and walking energy expenditure was measured while subjects walked at their PWS on a treadmill. Gross REE was significantly higher in obese subjects compared to underweight and normal subjects (p < 0.0001). When REE was normalized to body weight, it was higher in the underweight and normal groups than in the overweight and obese groups (p < 0.0001). However, when REE was normalized to fat-free mass, it did not differ significantly between groups. Gross walking energy expenditure at the PWS was higher in the obese and overweight groups than in the underweight and normal groups (p < 0.0001). The regression coefficient between gross walking energy expenditure and body weight was significant for the normal and obese groups (p < 0.05). Body weight accounted for 70.5% of gross walking energy expenditure in the normal group and 57.9% in the obese group. It is known that obese individuals have more metabolically inactive fat tissue compared to the other groups. While excess fat tissue increases total body weight, it does not contribute much to REE. Therefore, REE results normalized to body weight could be misleading. In order to eliminate the effect of fat mass on the REE of obese individuals, REE normalized to fat-free mass should be used to obtain more accurate results. On the other hand, an increase in fat mass raises the energy requirement of walking to maintain body balance. Thus, gross walking energy expenditure should be taken into consideration when evaluating the energy expenditure of walking.
Keywords: body composition, obesity, resting energy expenditure, walking energy expenditure
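The effect of the two normalizations can be seen with a small worked example; the numbers below are hypothetical, not the study's data, and only illustrate why REE per kilogram of body weight penalizes the obese group while REE per kilogram of fat-free mass largely removes the difference.

```python
# Hypothetical subjects (values are illustrative, not study data).
subjects = {
    "normal": {"ree_kcal_day": 1500, "weight_kg": 65, "fat_fraction": 0.22},
    "obese":  {"ree_kcal_day": 1850, "weight_kg": 105, "fat_fraction": 0.42},
}

for group, s in subjects.items():
    fat_free_mass = s["weight_kg"] * (1 - s["fat_fraction"])
    per_kg_body = s["ree_kcal_day"] / s["weight_kg"]      # penalizes high fat mass
    per_kg_ffm = s["ree_kcal_day"] / fat_free_mass        # compares metabolically active tissue
    print(f"{group:>6}: REE/kg body = {per_kg_body:5.1f}  REE/kg FFM = {per_kg_ffm:5.1f} kcal/kg/day")
```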
603 Empirical Analytical Modelling of Average Bond Stress and Anchorage of Tensile Bars in Reinforced Concrete
Authors: Maruful H. Mazumder, Raymond I. Gilbert
Abstract:
The design specifications for calculating the development and lapped splice lengths of reinforcement in concrete are derived from a conventional empirical modelling approach that correlates experimental test data using a single mathematical equation. This paper describes part of a recently completed experimental research program to assess the effects of different structural parameters on the development length requirements of modern high strength steel reinforcing bars, including the case of lapped splices in large-scale reinforced concrete members. The normalized average bond stresses for the different variations of anchorage length are assessed according to the general form of a typical empirical analytical model of bond and anchorage. Improved analytical modelling equations are developed in the paper that better correlate the normalized bond strength parameters with the structural parameters of an empirical model of bond and anchorage.
Keywords: bond stress, development length, lapped splice length, reinforced concrete
602 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures
Authors: Mariem Saied, Jens Gustedt, Gilles Muller
Abstract:
We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments
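The sketch below shows, in NumPy rather than Dido's generated C, the Jacobi five-point sweep and the per-iteration halo exchange that a subdomain performs on a distributed-memory machine; the grid split and sizes are illustrative, and the temporal-blocking and ORWL machinery are not reproduced.

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi sweep of the 2-D five-point stencil on the interior of u."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    return new

def run_on_two_subdomains(n=64, iterations=200):
    """Split the grid in two along rows; each half keeps one ghost row that is
    refreshed from its neighbour before every sweep (the communication that
    temporal blocking would batch to reduce transfers)."""
    top = np.zeros((n // 2 + 1, n))
    top[0, :] = 1.0                               # hot boundary; last row is a ghost row
    bottom = np.zeros((n // 2 + 1, n))            # first row is a ghost row
    for _ in range(iterations):
        top[-1, :] = bottom[1, :]     # halo exchange (a message on distributed memory)
        bottom[0, :] = top[-2, :]
        top, bottom = jacobi_step(top), jacobi_step(bottom)
    return np.vstack([top[:-1, :], bottom[1:, :]])

u = run_on_two_subdomains()
print(u.shape, float(u.mean()))
```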