Search results for: deep vibro techniques
7846 Distribution and Segregation of Aerosols in Ambient Air
Authors: S. Ramteke, K. S. Patel
Abstract:
Aerosols are a complex mixture of particulate matter (PM) including carbon, silica, trace elements, various salts, etc. Aerosols penetrate deep into the human lungs and cause a broad range of health effects, in particular respiratory and cardiovascular illnesses. They are also major contributors to climate change. They are emitted by high-temperature processes, i.e. vehicles and steel, sponge-iron, cement, and thermal power plants. Raipur (22˚33'N to 21˚14'N and 82˚6'E to 81˚38'E) is a growing industrial city in central India with a population of two million. In this work, the distribution of inorganic species (i.e. Cl⁻, NO₃⁻, SO₄²⁻, NH₄⁺, Na⁺, K⁺, Mg²⁺, Ca²⁺, Al, Cr, Mn, Fe, Ni, Cu, Zn, and Pb) associated with PM in the ambient air is described. PM₁₀ in the ambient air of Raipur city was collected over one year (December 2014 - December 2015). The PM₁₀ was segregated into nine size modes, i.e. PM₁₀.₀₋₉.₀, PM₉.₀₋₅.₈, PM₅.₈₋₄.₇, PM₄.₇₋₃.₃, PM₃.₃₋₂.₁, PM₂.₁₋₁.₁, PM₁.₁₋₀.₇, PM₀.₇₋₀.₄ and PM₀.₄, to identify their emission sources and health hazards. The ions and metals were analysed by ion chromatography and total reflection X-ray fluorescence (TXRF), respectively. The PM₁₀ concentration (n=48) ranged from 100 to 450 µg/m³ with a mean value of 73.57±20.82 µg/m³. The highest concentrations of PM₄.₇₋₃.₃, PM₂.₁₋₁.₁, and PM₁.₁₋₀.₇ were observed in the commercial, residential, and industrial areas, respectively. The effect of meteorology, i.e. temperature, humidity, wind speed, and wind direction, on the PM₁₀ and associated elemental concentrations in the air is discussed.
Keywords: ambient aerosol, ions, metals, segregation
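The nine-mode segregation described above amounts to binning particles by aerodynamic diameter. A minimal sketch (the binning helper and its boundary handling are our own illustration; the cut points are read off the mode names in the abstract):

```python
from bisect import bisect_right

# Stage cut points (aerodynamic diameter, µm) read off the mode names:
# PM<0.4, PM0.7-0.4, PM1.1-0.7, ..., PM10.0-9.0.
CUTS = [0.4, 0.7, 1.1, 2.1, 3.3, 4.7, 5.8, 9.0]
MODES = ["PM<0.4", "PM0.7-0.4", "PM1.1-0.7", "PM2.1-1.1", "PM3.3-2.1",
         "PM4.7-3.3", "PM5.8-4.7", "PM9.0-5.8", "PM10.0-9.0"]

def size_mode(diameter_um):
    """Map a particle's aerodynamic diameter to its impactor stage."""
    if diameter_um >= 10.0:
        raise ValueError("outside the PM10 range")
    return MODES[bisect_right(CUTS, diameter_um)]
```

Each stage's lower bound is inclusive, matching the convention implied by the mode names.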
Procedia PDF Downloads 200
7845 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market: all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning the long-term dependencies that traditional neural networks fail to capture, thanks to their gated memory cells.
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM model are discussed with respect to varying input sequence lengths (three, seven, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed its two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with a traditional time-series model (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored further within the asset management industry.
Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
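The memory-cell mechanism that the abstract credits for capturing long-term dependencies can be illustrated with a single scalar LSTM step (a toy sketch with made-up weights, not the paper's trained model):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step for scalar input and state. w maps each gate -- input
    (i), forget (f), output (o) -- and the candidate cell value (g) to its
    (input weight, recurrent weight, bias) triple."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])
    c = f * c_prev + i * g      # the memory cell carries long-term state
    h = o * math.tanh(c)        # the hidden state exposed to the next step
    return h, c

# Run a short sequence of (made-up) daily price changes through the cell.
weights = {k: (0.5, 0.5, 0.0) for k in ("i", "f", "o", "g")}
h, c = 0.0, 0.0
for price_change in [0.1, -0.2, 0.05]:
    h, c = lstm_step(price_change, h, c, weights)
```

The forget gate f decides how much of the previous cell state survives, which is what lets gradients flow across long horizons.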
Procedia PDF Downloads 136
7844 Parkinson’s Disease Hand-Eye Coordination and Dexterity Evaluation System
Authors: Wann-Yun Shieh, Chin-Man Wang, Ya-Cheng Shieh
Abstract:
This study aims to develop an objective scoring system to evaluate hand-eye coordination and hand dexterity in Parkinson’s disease. The system contains three boards, each fitted with sensors that record a user’s finger operations. The operations include a peg test, a block test, and a blind block test. A user has to employ visual, auditory, and tactile abilities to complete these operations, and the board records the results automatically. These results help physicians evaluate a user’s reaction time, coordination, and dexterity. The results are collected in a cloud database for further analysis and statistics, from which a researcher can obtain systematic, graphical reports for an individual or a group of users. In particular, a deep learning model is developed to learn the features of the data from different users, helping physicians assess Parkinson’s disease symptoms with a more intelligent algorithm.
Keywords: deep learning, hand-eye coordination, reaction, hand dexterity
Procedia PDF Downloads 66
7843 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are inherently prone to artefacts because their formation involves a large number of independent detectors that are assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. It is therefore desirable to remove these nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. A denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout is applied, with the dropout rate determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
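The decomposition step can be sketched in one dimension. The paper solves the Total Variation problem on images with a primal-dual method; here, purely for illustration, plain subgradient descent on a 1-D signal splits it into a piecewise-smooth intrinsic part and a residual:

```python
def tv_decompose(signal, lam=0.2, step=0.02, iters=1000):
    """Split a 1-D signal into a piecewise-smooth intrinsic part and a
    residual (noise/artefact) part by subgradient descent on the
    TV-regularised energy: sum (u-f)^2 + lam * sum |u[i+1] - u[i]|."""
    sign = lambda v: (v > 0) - (v < 0)
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        g = [2.0 * (u[i] - signal[i]) for i in range(n)]    # data term
        for i in range(n - 1):                              # TV subgradient
            s = lam * sign(u[i + 1] - u[i])
            g[i] -= s
            g[i + 1] += s
        u = [u[i] - step * g[i] for i in range(n)]
    residual = [signal[i] - u[i] for i in range(n)]
    return u, residual

# A noisy step edge: the intrinsic part should keep the edge while the
# residual absorbs the small oscillations.
noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]
intrinsic, artefact = tv_decompose(noisy)
```

The intrinsic part plus the residual reconstructs the input exactly, which is the invariant the training pipeline relies on.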
Procedia PDF Downloads 190
7842 Speed Control of DC Motor Using Optimization Techniques Based PID Controller
Authors: Santosh Kumar Suman, Vinod Kumar Giri
Abstract:
The goal of this paper is to design a speed controller for a DC motor by choosing the PID parameters using genetic algorithms (GAs). The DC motor is extensively used in numerous applications such as steel plants, electric trains, cranes, and a great deal more. A DC motor can be represented by a nonlinear model when nonlinearities such as magnetic saturation are considered. To provide effective control, nonlinearities and uncertainties in the model must be taken into account in the control design. The DC motor is considered a third-order system. This paper examines three types of tuning techniques for the PID parameters. A separately excited DC motor is modelled in MATLAB, and its speed is controlled by tuning the proportional, integral, and derivative gains (KP, KI, KD) of the PID controller. Conventionally tuned PID controllers fail to control the drive when the load parameters change. The principal aim of this paper is to analyse the performance of optimization techniques, viz. the genetic algorithm, for improving the PID controller parameters for speed control of a DC motor, and to list their advantages over traditional tuning strategies. The outcomes obtained from the GA were compared with those obtained from the traditional technique. It was found that the optimization technique outperforms customary tuning practices of ordinary PID controllers.
Keywords: DC motor, PID controller, optimization techniques, genetic algorithm (GA), objective function, IAE
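A GA tuning PID gains against an IAE objective can be sketched as follows. The first-order plant model, parameter ranges, and GA settings below are illustrative assumptions, not the paper's MATLAB setup:

```python
import random

random.seed(0)

def iae(gains, dt=0.01, steps=300):
    """Integral of Absolute Error for a unit-step speed command on a crude
    first-order motor model dw/dt = (-w + u) / tau under PID control."""
    kp, ki, kd = gains
    tau, w, integ, prev_err, cost = 0.5, 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - w
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        w += dt * (-w + u) / tau
        prev_err = err
        cost += abs(err) * dt
        if abs(w) > 1e6:            # unstable gains: return a heavy penalty
            return 1e9
    return cost

def ga_tune(pop_size=20, gens=15):
    """Truncation selection + uniform crossover + Gaussian mutation."""
    pop = [[random.uniform(0, 10) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=iae)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            if random.random() < 0.3:
                child[random.randrange(3)] += random.gauss(0, 1)
            children.append(child)
        pop = parents + children
    return min(pop, key=iae)

best_gains = ga_tune()
```

The IAE simulation doubles as the GA's fitness function, which is exactly the role the objective function plays in the paper.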
Procedia PDF Downloads 420
7841 An Adaptive Conversational AI Approach for Self-Learning
Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo
Abstract:
In recent years, the focus of Natural Language Processing (NLP) development has gradually shifted from semantics-based approaches to deep learning ones, which perform faster with fewer resources. Although it performs well in many applications, the deep learning approach, lacking semantic understanding, has difficulty noticing and expressing a novel business case outside a pre-defined scope. To meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming: it is very difficult to improve the capabilities of a conversational AI in a short time, and even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic, fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which the chatbot (conversational AI) delivers a given set of business cases, it is prompted to self-measure its performance and revisit every unknown dialog flow to improve the service by retraining on those new business cases. If the training process reaches a bottleneck or runs into difficulties, human personnel are informed and can give further instructions; they may retrain the chatbot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service.
With the newly learned programs, it completes its understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge, holding polite conversations with visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.
Keywords: conversational AI, chatbot, dialog management, semantic analysis
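The self-measure/retrain cycle can be sketched with a keyword-matching stand-in for the real intent engine (all flows and utterances below are hypothetical):

```python
class SelfLearningBot:
    """Minimal sketch: known intents answer directly; unknown utterances
    are logged for retraining, mimicking the self-measurement loop."""

    def __init__(self, flows):
        self.flows = dict(flows)    # intent keyword -> reply
        self.unknown = []           # utterances queued for retraining

    def reply(self, utterance):
        for keyword, answer in self.flows.items():
            if keyword in utterance.lower():
                return answer
        self.unknown.append(utterance)      # self-measurement: record a miss
        return "Let me get back to you."

    def retrain(self, new_flows):
        """Human-in-the-loop step: fold newly configured flows back in and
        clear any queued utterances the new flows now cover."""
        self.flows.update(new_flows)
        self.unknown = [u for u in self.unknown
                        if not any(k in u.lower() for k in self.flows)]

bot = SelfLearningBot({"hours": "We are open 9-5."})
bot.reply("What are your opening hours?")   # served from a known flow
bot.reply("Where is the restroom?")         # unknown -> queued for retraining
bot.retrain({"restroom": "Down the hall to the left."})
```

The `unknown` queue is the feedback channel: it is what a human (or the semantic-analysis step) inspects before configuring new flows.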
Procedia PDF Downloads 136
7840 SNR Classification Using Multiple CNNs
Authors: Thinh Ngo, Paul Rad, Brian Kelley
Abstract:
Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. However, unacceptably low accuracy of less than 50% undermines the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN: each CNN performs a binary classification for a single SNR threshold with two labels, less than versus greater than or equal. Together, the multiple CNNs effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves individual SNR prediction accuracy of 92%, composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
Keywords: classification, CNN, deep learning, prediction, SNR
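The divide-and-conquer fusion can be sketched with stub binary decisions in place of the trained CNNs (the 4 dB threshold grid is an assumption; the abstract does not state the spacing):

```python
# Divide-and-conquer SNR classification: one binary "SNR >= t" decision per
# threshold, fused by counting votes. Stub decisions stand in for the CNNs.
THRESHOLDS = list(range(-20, 33, 4))    # assumed 4 dB grid over [-20, 32] dB

def fuse(decisions):
    """decisions[i] is True when classifier i says SNR >= THRESHOLDS[i].
    With monotone decisions, the number of True votes indexes the SNR bin."""
    votes = sum(decisions)
    if votes == 0:
        return "< -20 dB"
    return ">= %d dB" % THRESHOLDS[votes - 1]

# Ideal binary classifiers for a signal whose true SNR is 10 dB.
true_snr = 10
decisions = [true_snr >= t for t in THRESHOLDS]
```

Counting votes is one simple fusion rule; a real system would also have to handle non-monotone (inconsistent) classifier outputs.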
Procedia PDF Downloads 134
7839 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach
Authors: Gong Zhilin, Jing Yang, Jian Yin
Abstract:
The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The proposed model encompasses five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance quality by alleviating missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature and are therefore handled with a K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The detection ensemble comprises deep learning models: Long Short-Term Memory (LSTM), a Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained on the selected optimal features, and their outcomes enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the SI-AOA.
Keywords: credit card, data mining, fraud detection, money transactions
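The statistical and higher-order features named above can be computed directly; a sketch using moment estimators (the sample transaction amounts are made up):

```python
import statistics

def feature_vector(xs):
    """Statistical features named in the abstract: mean, median, standard
    deviation, plus higher-order skewness and kurtosis (moment estimators)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = statistics.pstdev(xs)
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return {"mean": mean,
            "median": statistics.median(xs),
            "std": sd,
            "skewness": m3 / sd ** 3,
            "kurtosis": m4 / sd ** 4}   # 3.0 for a normal distribution

# Made-up transaction amounts with one suspicious outlier.
amounts = [12.0, 15.0, 14.0, 13.0, 400.0]
feats = feature_vector(amounts)
```

The outlier drags the mean far from the median and inflates skewness and kurtosis, which is why these moments are informative for fraud detection.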
Procedia PDF Downloads 131
7838 Second Order Optimality Conditions in Nonsmooth Analysis on Riemannian Manifolds
Authors: Seyedehsomayeh Hosseini
Abstract:
Much attention has been paid over the centuries to understanding and solving the problem of minimization of functions. Compared to linear programming and nonlinear unconstrained optimization problems, nonlinear constrained optimization problems are much more difficult. Since the procedure of finding an optimizer is a search based on local information about the constraints and the objective function, it is very important to develop techniques using the geometric properties of both. In fact, differential geometry provides a powerful tool to characterize and analyze these geometric properties; thus, there is a clear link between techniques for optimization on manifolds and standard constrained optimization approaches. Furthermore, there are manifolds that are not defined as constrained sets in R^n; an important example is the Grassmann manifold. Hence, to solve optimization problems on these spaces, intrinsic methods are used. In a nondifferentiable problem, the gradient information of the objective function generally cannot be used to determine the direction in which the function is decreasing, so techniques of nonsmooth analysis are needed to deal with such a problem. As a manifold in general does not have a linear structure, the usual techniques of nonsmooth analysis on linear spaces cannot be applied, and new techniques need to be developed. This paper presents necessary and sufficient conditions for a strict local minimum of extended real-valued, nonsmooth functions defined on Riemannian manifolds.
Keywords: Riemannian manifolds, nonsmooth optimization, lower semicontinuous functions, subdifferential
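For orientation, the smooth-case analogue of the optimality conditions studied here can be written as follows (a sketch; the paper's contribution is the nonsmooth generalization, where the gradient is replaced by a subdifferential):

```latex
% Smooth-case analogue on a Riemannian manifold M; the nonsmooth results
% replace grad f by the subdifferential \partial f.
\text{First-order necessary (nonsmooth): } 0 \in \partial f(x^*). \\
\text{Second-order necessary (smooth): } \operatorname{grad} f(x^*) = 0, \quad
\langle \operatorname{Hess} f(x^*)[v],\, v \rangle \ge 0
\quad \forall\, v \in T_{x^*}M. \\
\text{Second-order sufficient (smooth): } \operatorname{grad} f(x^*) = 0, \quad
\langle \operatorname{Hess} f(x^*)[v],\, v \rangle > 0
\quad \forall\, v \in T_{x^*}M \setminus \{0\},
\ \text{giving a strict local minimum.}
```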
Procedia PDF Downloads 361
7837 Mechanical Properties of D2 Tool Steel Cryogenically Treated Using Controllable Cooling
Authors: A. Rabin, G. Mazor, I. Ladizhenski, R. Shneck, Z.
Abstract:
The hardness and hardenability of AISI D2 cold-work tool steel were investigated under conventional quenching (CQ), deep cryogenic quenching (DCQ), and rapid deep cryogenic quenching, the latter enabled by a temporary porous coating based on magnesium sulfate. Each cooling process was examined from the perspective of overall process efficiency and heat flux in the austenite-martensite transformation range, followed by characterization of the temporary porous magnesium sulfate layer using confocal laser scanning microscopy (CLSM) and of surface and core hardness and hardenability using the Vickers hardness technique. The results show that the cooling rate (CR) in the austenite-martensite transformation range has a strong influence on the hardness of the studied steel.
Keywords: AISI D2, controllable cooling, magnesium sulfate coating, rapid cryogenic heat treatment, temporary porous layer
Procedia PDF Downloads 137
7836 Effect of Relaxation Techniques in Reducing Stress Level among Mothers of Children with Autism Spectrum Disorder
Authors: R. N. Jay A. Ablog, M. N. Dyanne R. Del Carmen, Roma Rose A. Dela Cruz, Joselle Dara M. Estrada, Luke Clifferson M. Gagarin, Florence T. Lang-ay, Ma. Dayanara O. Mariñas, Maria Christina S. Nepa, Jahraine Chyle B. Ocampo, Mark Reynie Renz V. Silva, Jenny Lyn L. Soriano, Loreal Cloe M. Suva, Jackelyn R. Torres
Abstract:
Background: To date, there is a dearth of literature on the effect of relaxation techniques in lowering the stress level of mothers of children with autism spectrum disorder (ASD). Aim: To investigate the effectiveness of a 4-week relaxation technique programme in reducing the stress level of mothers of children with ASD. Methods: A quasi-experimental design was used. It included 25 mothers (10 experimental, 15 control) chosen via purposive sampling. The mothers were recruited in the different SPED centers in Baguio City and La Trinidad and in the community. The statistics used were the t-test and the related t-test. Results: The overall weighted mean score after the 4-week training was 2.3, indicating that the relaxation techniques introduced were moderately effective in lowering stress level. Statistical analysis (t-test; CV=4.51>TV=2.26) showed a significant difference in the stress level reduction of mothers in the experimental group pre- and post-intervention. There was also a significant difference in stress level reduction between the control and the experimental group (related t-test; CV=2.08>TV=2.07). The relaxation techniques introduced were favorable, cost-effective, and easy-to-perform interventions to decrease stress level.
Keywords: relaxation techniques, mindful eating, progressive muscle relaxation, breathing exercise, autism spectrum disorder
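The related (paired) t-test used above can be sketched as follows; the pre/post scores are hypothetical, not the study's data:

```python
import math

def paired_t(before, after):
    """Paired (related) t statistic for pre/post stress scores; compare the
    computed value (CV) with the tabular critical value (TV), df = n - 1."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical stress scores (higher = more stressed) for the 10 mothers in
# the experimental group, before and after the 4-week programme.
pre = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7]
post = [6, 6, 7, 5, 6, 6, 7, 6, 5, 6]
t = paired_t(pre, post)
```

With df = 9, a computed value above the two-tailed 5% critical value of 2.262 indicates a significant pre/post difference.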
Procedia PDF Downloads 433
7835 A Bibliometric Analysis: An Integrative Systematic Review through the Paths of Vitiviniculture
Authors: Patricia Helena Dos Santos Martins, Mateus Atique, Lucas Oliveira Gomes Ferreira
Abstract:
There is a growing body of literature that recognizes the importance of bibliometric analysis for tracing the evolutionary nuances of a specific field while shedding light on its emerging areas. Surprisingly, its application in manufacturing research on vitiviniculture is relatively new and, in many instances, underdeveloped. The aim of this study is to present an overview of the bibliometric methodology, with a particular focus on the Meta-Analytical Approach Theory model (TEMAC), while offering step-by-step results on the available techniques and procedures for carrying out studies of the elements associated with vitiviniculture. TEMAC is a method that uses metadata to generate heat maps, keyword-relationship graphs, and other visualizations, with the aim of revealing relationships between authors and articles and, above all, of understanding how the topic has evolved over the study period, thereby revealing which subthemes were worked on and which techniques and applications were most prominent, helping researchers understand the topic under study and guiding them in generating new research. From the studies carried out using TEMAC, it is possible to identify which statistical process control techniques are most used within the wine industry and thus assist professionals in the area in applying the best techniques. It is expected that this paper will be a useful resource for gaining insights into the available techniques and procedures for carrying out studies about vitiviniculture, the cultivation of vineyards, the production of wine, and all the ethnography connected with it.
Keywords: TEMAC, vitiviniculture, statistical process control, quality
Procedia PDF Downloads 122
7834 The Accuracy of Parkinson's Disease Diagnosis Using [123I]-FP-CIT Brain SPECT Data with Machine Learning Techniques: A Survey
Authors: Lavanya Madhuri Bollipo, K. V. Kadambari
Abstract:
Objective: To discuss key issues in the diagnosis of Parkinson's disease (PD), the features influencing PD progression, the importance of brain SPECT data in PD diagnosis, and the essentiality of machine learning techniques for early diagnosis of PD. An accurate and early diagnosis of PD remains a challenge, as clinical symptoms arise only when more than 60% of dopaminergic neurons have been lost. So far, there are no laboratory tests for the diagnosis of PD, causing a high rate of misdiagnosis, especially in the early stages of the disease. Recent neuroimaging studies show that brain SPECT using 123I-Ioflupane (DaTSCAN) as the radiotracer is widely used to assist the diagnosis of PD even in its early stages. Machine learning techniques can be used in combination with image analysis procedures to develop computer-aided diagnosis (CAD) systems for PD. This paper addresses recent studies on the diagnosis of early-stage PD from brain SPECT data with machine learning techniques.
Keywords: Parkinson disease (PD), dopamine transporter, single-photon emission computed tomography (SPECT), support vector machine (SVM)
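A minimal linear SVM of the kind surveyed can be sketched with subgradient descent on the hinge loss; the 2-D "striatal binding ratio" features and labels below are invented for illustration:

```python
def train_linear_svm(xs, ys, lr=0.01, lam=0.001, epochs=1000):
    """Tiny linear SVM trained by stochastic subgradient descent on the
    L2-regularised hinge loss -- the kind of classifier the surveyed CAD
    systems apply to SPECT-derived features."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            w = [wi * (1 - lr * lam) for wi in w]   # weight decay (margin term)
            if margin < 1:                          # inside margin: hinge update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Hypothetical 2-D features (say, left/right striatal binding ratios):
# reduced uptake -> PD (+1), normal uptake -> control (-1).
xs = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8), (2.6, 2.4), (2.4, 2.7), (2.8, 2.5)]
ys = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(xs, ys)
pred = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
```

Real studies use far higher-dimensional voxel or region features and cross-validated kernels; the sketch only shows the margin mechanics.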
Procedia PDF Downloads 399
7833 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context in human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: The contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear, racist, etc. 
words), and thus context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations both of the data they are trained on (the same problems stated above) and of the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements over baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
Procedia PDF Downloads 170
7832 Lower Limb Oedema in Beckwith-Wiedemann Syndrome
Authors: Mihai-Ionut Firescu, Mark A. P. Carson
Abstract:
We present a case of inferior vena cava agenesis (IVCA) associated with bilateral deep venous thrombosis (DVT) in a patient with Beckwith-Wiedemann syndrome (BWS). In adult patients with BWS presenting with bilateral lower limb oedema, specific aetiological factors should be considered, including cardiomyopathy and intraabdominal tumours. Congenital malformations of the IVC, by causing relative venous stasis, can lead to lower limb oedema either directly or indirectly by favouring lower limb venous thromboembolism; however, they have yet to be reported as an associated feature of BWS. Given its life-threatening potential, prompt initiation of treatment for bilateral DVT is paramount. In BWS patients, however, this can prove more complicated: because of overgrowth, the above-average birth weight can persist throughout childhood. In this case, the patient's weight reached 170 kg, affecting the choice of anticoagulation, as direct oral anticoagulants have a limited evidence base in patients with a body mass above 120 kg. Furthermore, the presence of IVCA leads to a long-term increase in venous thrombosis risk. Therefore, patients with IVCA and bilateral DVT warrant specialist consideration and may benefit from multidisciplinary team management, with haematology and vascular surgery input. Conclusion: Here, we showcased a rare cause of bilateral lower limb oedema, namely bilateral deep venous thrombosis complicating IVCA in a patient with Beckwith-Wiedemann syndrome. The importance of this case lies in its novelty, as the association between IVC agenesis and BWS has not yet been described. Furthermore, the treatment of DVT in such situations requires special consideration of the patient's weight and of the presence of a significant predisposing vascular abnormality.
Keywords: Beckwith-Wiedemann syndrome, bilateral deep venous thrombosis, inferior vena cava agenesis, venous thromboembolism
Procedia PDF Downloads 235
7831 Adjustment and Compensation Techniques for the Rotary Axes of Five-axis CNC Machine Tools
Authors: Tung-Hui Hsu, Wen-Yuh Jywe
Abstract:
Five-axis computer numerical control (CNC) machine tools (three linear and two rotary axes) are ideally suited to the fabrication of complex workpieces, such as dies, turbo blades, and cams. The locations of the axis average line and centerline of the rotary axes strongly influence the performance of these machines; however, techniques to compensate for eccentric error in the rotary axes remain weak. This paper proposes optical (Non-Bar) techniques capable of calibrating five-axis CNC machine tools and compensating for eccentric error in the rotary axes. The approach employs the measurement path in ISO/CD 10791-6 to determine the eccentric error in the two rotary axes, for which compensatory measures can then be implemented. Experimental results demonstrate that the proposed techniques can improve the performance of various five-axis CNC machine tools by more than 90%. Finally, a cutting test on a B-type five-axis CNC machine tool confirmed the usefulness of the proposed compensation technique.
Keywords: calibration, compensation, rotary axis, five-axis computer numerical control (CNC) machine tools, eccentric error, optical calibration system, ISO/CD 10791-6
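Determining the eccentric error of a rotary axis reduces, in the simplest view, to fitting a circle to measured points and reading off the centre offset. A sketch using a least-squares (Kasa) fit, with hypothetical measurement data (the Non-Bar/ISO 10791-6 procedure itself is more involved):

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit. The offset of the fitted centre from
    the nominal axis position is the eccentric error to be compensated.
    Solves 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense,
    where c = r^2 - cx^2 - cy^2."""
    ata = [[0.0] * 3 for _ in range(3)]     # normal equations A^T A u = A^T b
    atb = [0.0] * 3
    for x, y in points:
        row = (2 * x, 2 * y, 1.0)
        rhs = x * x + y * y
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):                    # Gauss-Jordan, partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * v for a, v in zip(m[r], m[col])]
    cx, cy, c = (m[i][3] / m[i][i] for i in range(3))
    return cx, cy, math.sqrt(c + cx * cx + cy * cy)

# Points measured on a 50 mm circle whose centre is offset by (0.2, -0.1) mm
# from the nominal rotary-axis position (hypothetical data).
pts = [(0.2 + 50 * math.cos(a), -0.1 + 50 * math.sin(a))
       for a in (0.0, 1.0, 2.5, 4.0, 5.5)]
cx, cy, r = fit_circle(pts)
```

The recovered (cx, cy) is the eccentric offset a compensation routine would subtract from the commanded rotary-axis position.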
Procedia PDF Downloads 383
7830 ROOP: Translating Sequential Code Fragments to Distributed Code Fragments Using Deep Reinforcement Learning
Authors: Arun Sanjel, Greg Speegle
Abstract:
Every second, massive amounts of data are generated, and Data Intensive Scalable Computing (DISC) frameworks have evolved into effective tools for analyzing them. Since the underlying architecture of these distributed computing platforms is often new to users, building a DISC application can be time-consuming and prone to errors. Automated conversion of a sequential program to a DISC program would consequently improve productivity significantly. However, synthesizing a user’s intended program from an input specification is complex, with several important applications, such as distributed program synthesis and code refactoring. Existing works such as Tyro and Casper rely entirely on deductive synthesis techniques or similar program synthesis approaches. Our approach is to develop a data-driven synthesis technique to identify sequential components and translate them to equivalent distributed operations. We emphasize using reinforcement learning and unit testing as feedback mechanisms to achieve our objectives.
Keywords: program synthesis, distributed computing, reinforcement learning, unit testing, DISC
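As a toy illustration of the unit-testing feedback signal described above, the sketch below scores candidate "distributed" rewrites of a sequential loop by the fraction of unit tests they pass and keeps the best one. Python's map/reduce stand in for DISC operators (e.g. Spark transformations), and the exhaustive scoring loop replaces the actual reinforcement learning agent; the candidate names and test inputs are invented for illustration.

```python
from functools import reduce

def sequential_sum_of_squares(xs):
    """The sequential reference implementation to be translated."""
    total = 0
    for x in xs:
        total += x * x
    return total

# Candidate "distributed" rewrites, with map/reduce as stand-ins
# for DISC operators. Two are deliberately wrong.
candidates = {
    "map-square-then-reduce-add":
        lambda xs: reduce(lambda a, b: a + b, map(lambda x: x * x, xs), 0),
    "reduce-add-only (wrong)":
        lambda xs: reduce(lambda a, b: a + b, xs, 0),
    "map-double-then-reduce-add (wrong)":
        lambda xs: reduce(lambda a, b: a + b, map(lambda x: 2 * x, xs), 0),
}

test_inputs = [[], [3], [1, 2, 3], [-2, 5]]

def reward(candidate):
    """Fraction of unit tests where the candidate matches the
    sequential reference: the feedback signal an agent would maximize."""
    passed = sum(candidate(t) == sequential_sum_of_squares(t) for t in test_inputs)
    return passed / len(test_inputs)

best_name = max(candidates, key=lambda n: reward(candidates[n]))
```

A learning-based synthesizer would explore a much larger candidate space, using this reward to guide the search instead of enumerating everything.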
Procedia PDF Downloads 107
7829 A Comparative Study on Automatic Feature Classification Methods of Remote Sensing Images
Authors: Lee Jeong Min, Lee Mi Hee, Eo Yang Dam
Abstract:
Geospatial feature extraction is a very important issue in remote sensing research. Image classification has traditionally relied on statistical techniques, but in recent years data mining and machine learning techniques for automated image processing have been applied to remote sensing, with a focus on the possibility of generating improved results. In this study, artificial neural network and decision tree techniques are applied to classify high-resolution satellite images, the results are compared with those of maximum likelihood classification (MLC), a statistical technique, and the pros and cons of each technique are analyzed.
Keywords: remote sensing, artificial neural network, decision tree, maximum likelihood classification
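To make the comparison concrete, the sketch below pits a one-band Gaussian maximum likelihood classifier against a depth-1 decision tree (a stump) on hypothetical pixel values. The feature values, class labels, and split threshold are all invented and far simpler than real multispectral imagery; the point is only the shape of the two approaches.

```python
import math

# Hypothetical training pixels: (feature value, land-cover class).
train = [(0.10, "water"), (0.15, "water"), (0.20, "water"),
         (0.60, "veg"),   (0.70, "veg"),   (0.80, "veg")]
test_px = [(0.12, "water"), (0.75, "veg"), (0.25, "water"), (0.65, "veg")]

def fit_mlc(data):
    """Per-class Gaussian parameters for maximum likelihood classification."""
    params = {}
    for cls in {c for _, c in data}:
        vals = [v for v, c in data if c == cls]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        params[cls] = (mu, var)
    return params

def mlc_predict(params, v):
    """Assign the class with the highest Gaussian log-likelihood."""
    def loglik(mu, var):
        return -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
    return max(params, key=lambda c: loglik(*params[c]))

def stump_predict(threshold, v):
    """Depth-1 decision tree: a single threshold split."""
    return "water" if v < threshold else "veg"

params = fit_mlc(train)
mlc_acc = sum(mlc_predict(params, v) == c for v, c in test_px) / len(test_px)
stump_acc = sum(stump_predict(0.4, v) == c for v, c in test_px) / len(test_px)
```

On well-separated toy data both classifiers are perfect; the differences the paper studies emerge on real, overlapping class distributions.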
Procedia PDF Downloads 347
7828 Passive Solar Water Concepts for Human Comfort
Authors: Eyibo Ebengeobong Eddie
Abstract:
Taking advantage of the sun's position when designing buildings to ensure human comfort has always been an important aspect of architectural design. Using inexpensive methods and systems for solar energy gain, heating, and cooling has always been a great advantage to the users and occupants of a building. Over the years, techniques and methods have been developed, and more are being discovered, to help reduce the energy demands of buildings. Architects have made effective use of a building's orientation, materials, and elements to achieve lower energy demand. This paper discusses the various techniques used in passive solar heating and cooling of buildings through water-based concepts to achieve thermal comfort.
Keywords: comfort, passive, solar, water
Procedia PDF Downloads 460
7827 A Comprehensive Study and Evaluation on Image Fashion Features Extraction
Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen
Abstract:
Clothing fashion represents a human’s aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in society, humanity, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related question: what image feature best describes clothing fashion? To address this issue, we have designed and evaluated various image features, ranging from traditional low-level hand-crafted features, to mid-level style awareness features, to various currently popular deep neural network-based features, which have shown state-of-the-art performance in various vision tasks. In summary, we tested the following 9 feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple layer combination (CNNs&MLC), and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning based feature representations far outperform traditional hand-crafted feature representations. Additionally, among all deep learning based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
Keywords: convolutional neural network, feature representation, image processing, machine modelling
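Whatever feature is chosen, retrieval quality is ultimately judged by ranking gallery images against a query in feature space. The sketch below shows cosine-similarity retrieval over hypothetical feature vectors; the three-dimensional vectors stand in for CNN or hand-crafted descriptors and carry no real image content.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical feature vectors for a tiny gallery of clothing images.
gallery = {
    "dress_A":  [0.9, 0.1, 0.0],
    "dress_B":  [0.8, 0.2, 0.1],
    "jacket_A": [0.1, 0.9, 0.3],
}

def retrieve(query_vec, gallery, k=2):
    """Rank gallery images by cosine similarity to the query feature."""
    ranked = sorted(gallery, key=lambda name: cosine(query_vec, gallery[name]),
                    reverse=True)
    return ranked[:k]

top = retrieve([0.85, 0.15, 0.05], gallery)
```

A discriminative feature is one under which such rankings place same-fashion items first; this is exactly what the retrieval experiments in the paper measure.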
Procedia PDF Downloads 139
7826 Experimental Study of Hyperparameter Tuning a Deep Learning Convolutional Recurrent Network for Text Classification
Authors: Bharatendra Rai
Abstract:
The sequence of words in text data has long-term dependencies and is known to suffer from vanishing gradient problems when developing deep learning models. Although recurrent networks such as long short-term memory networks help to overcome this problem, achieving high text classification performance is a challenging problem. Convolutional recurrent networks that combine the advantages of long short-term memory networks and convolutional neural networks can be useful for text classification performance improvements. However, arriving at suitable hyperparameter values for convolutional recurrent networks is still a challenging task where fitting a model requires significant computing resources. This paper illustrates the advantages of using convolutional recurrent networks for text classification with the help of statistically planned computer experiments for hyperparameter tuning.
Keywords: long short-term memory networks, convolutional recurrent networks, text classification, hyperparameter tuning, Tukey honest significant differences
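The statistically planned experiments mentioned above can be sketched as a small full factorial: replicate each hyperparameter configuration, then compare cell means and main effects. The accuracy numbers and the two factors (LSTM units, convolution kernel size) below are hypothetical, and the paper's actual analysis uses Tukey honest significant differences, which this sketch omits in favor of plain factorial effects.

```python
from statistics import mean

# Hypothetical replicated accuracy observations from a 2x2 full
# factorial over (LSTM units, kernel size).
runs = {
    (64, 3):  [0.86, 0.87, 0.85],
    (64, 5):  [0.88, 0.89, 0.88],
    (128, 3): [0.90, 0.91, 0.90],
    (128, 5): [0.93, 0.92, 0.93],
}

cell_means = {cfg: mean(obs) for cfg, obs in runs.items()}

def main_effect(factor_index, level_a, level_b):
    """Average response change when one factor moves from level_a to
    level_b, averaged over the other factor (a factorial main effect)."""
    at = lambda lvl: mean(m for cfg, m in cell_means.items()
                          if cfg[factor_index] == lvl)
    return at(level_b) - at(level_a)

units_effect = main_effect(0, 64, 128)   # effect of 64 -> 128 LSTM units
kernel_effect = main_effect(1, 3, 5)     # effect of kernel size 3 -> 5
best_cfg = max(cell_means, key=cell_means.get)
```

Replication is what makes effect estimates trustworthy despite run-to-run training noise, which is the core argument for designed experiments over ad hoc tuning.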
Procedia PDF Downloads 129
7825 A Theoretical Model for Pattern Extraction in Large Datasets
Authors: Muhammad Usman
Abstract:
Pattern extraction has been used in the past to extract hidden and interesting patterns from large datasets. Recently, advances have been made in these techniques by providing multi-level mining, effective dimension reduction, and advanced evaluation and visualization support. This paper reviews the current techniques in the literature on the basis of these parameters. The literature review suggests that most techniques which provide multi-level mining and dimension reduction do not handle mixed-type data during the process. Patterns are not extracted using advanced algorithms for large datasets. Moreover, the evaluation of patterns is not done using advanced measures suited to high-dimensional data. Techniques which provide visualization support are unable to handle a large number of rules in a small space. We present a theoretical model to handle these issues. The implementation of the model is beyond the scope of this paper.
Keywords: association rule mining, data mining, data warehouses, visualization of association rules
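A minimal instance of the kind of pattern extraction being reviewed is association rule mining. The sketch below enumerates 2-item rules over a hypothetical transaction set with support and confidence thresholds (one Apriori-style step); real systems mine much larger itemsets over far larger datasets, which is exactly where the scalability issues discussed above arise.

```python
from itertools import combinations

# Hypothetical market-basket transactions.
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "butter"},
    {"bread", "butter"},
    {"bread", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def rules(min_support=0.5, min_confidence=0.7):
    """Enumerate 2-item association rules A -> B clearing both the
    support and confidence thresholds (a minimal Apriori step)."""
    items = set().union(*transactions)
    out = []
    for a, b in combinations(sorted(items), 2):
        for ante, cons in (({a}, {b}), ({b}, {a})):
            s = support(ante | cons)
            if s >= min_support and s / support(ante) >= min_confidence:
                out.append((tuple(ante)[0], tuple(cons)[0], s, s / support(ante)))
    return out

found = rules()
```

Even this toy shows why rule counts explode: every frequent itemset spawns multiple directed rules, motivating the advanced evaluation and visualization support the review calls for.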
Procedia PDF Downloads 223
7824 Aerial Photogrammetry-Based Techniques to Rebuild the 30-Years Landform Changes of a Landslide-Dominated Watershed in Taiwan
Authors: Yichin Chen
Abstract:
Taiwan is an island characterized by active tectonics and high erosion rates. Monitoring the dynamic landscape of Taiwan is an important issue for disaster mitigation, geomorphological research, and watershed management. Long-term, high-spatiotemporal-resolution landform data is essential for quantifying and simulating geomorphological processes and developing warning systems. Recently, advances in unmanned aerial vehicle (UAV) and computational photogrammetry technology have provided an effective way to rebuild and monitor topographic change at high spatiotemporal resolution. This study rebuilds the 30-year landform change of the Aiyuzi watershed over 1986-2017 using aerial photogrammetry-based techniques. The Aiyuzi watershed, located in central Taiwan with an area of 3.99 km², is known for its frequent landslide and debris flow disasters. This study took aerial photos using a UAV and collected multi-temporal historical stereo photographs taken by the Aerial Survey Office of Taiwan's Forestry Bureau. Pix4DMapper, a photogrammetry software package, was used to rebuild the orthoimages and digital surface models (DSMs). Furthermore, to control model accuracy, a set of ground control points was surveyed using eGPS. The results show that the generated DSMs have a ground sampling distance (GSD) of ~10 cm and ~0.3 cm from the UAV and historical photographs, respectively, and a vertical error of ~1 m. Comparison of the DSMs reveals that many deep-seated landslides (with depths over 20 m) occurred on the upstream slopes of the Aiyuzi watershed. Even though a large amount of sediment is delivered from the landslides, the steep main channel has sufficient capacity to transport it and to erode the river bed to ~20 m in depth. Most sediment is transported to the outlet of the watershed and deposits on the downstream channel.
This case study shows that UAV and photogrammetry technology are effective tools for monitoring topographic change.
Keywords: aerial photogrammetry, landslide, landform change, Taiwan
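The landform-change quantification described above reduces, at its core, to differencing the two DSMs and masking changes smaller than the vertical error. The sketch below applies this "DEM of difference" idea to two tiny hypothetical elevation grids; the values are invented and only echo the magnitudes reported in the abstract (a ~1 m vertical error, ~20 m erosion and deposition).

```python
# Two hypothetical 3x3 DSM grids (elevations in meters) from different
# survey epochs, and the reported ~1 m vertical error as a noise threshold.
dsm_1986 = [[120.0, 121.0, 119.5],
            [118.0, 117.5, 116.0],
            [115.0, 114.0, 113.0]]
dsm_2017 = [[119.8, 121.3,  98.0],   # ~21.5 m drop: a landslide scar
            [118.2, 117.4, 116.3],
            [115.1, 135.0, 113.2]]   # ~21 m rise: downstream deposition
VERTICAL_ERROR = 1.0  # m; differences within this band are treated as noise

def dem_of_difference(old, new, threshold):
    """Cell-by-cell elevation change, masking differences smaller
    than the survey's vertical error."""
    dod = []
    for row_old, row_new in zip(old, new):
        dod.append([(n - o) if abs(n - o) > threshold else 0.0
                    for o, n in zip(row_old, row_new)])
    return dod

dod = dem_of_difference(dsm_1986, dsm_2017, VERTICAL_ERROR)
erosion = sum(v for row in dod for v in row if v < 0)
deposition = sum(v for row in dod for v in row if v > 0)
```

Multiplying the summed changes by the cell area would give eroded and deposited volumes, which is how sediment budgets are derived from repeat DSMs.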
Procedia PDF Downloads 157
7823 Lean Construction Techniques in Construction Projects of Pakistan
Authors: Aftab Hameed Memon, Shadab Noor, Muhammad Akram Akhund
Abstract:
Lean construction is a philosophy adopted in the construction industry to increase the value of a project by reducing waste and improving construction productivity. Lean emphasizes maximizing the value of a project with less expenditure. Globally, the lean philosophy has gained wide popularity in the construction sector, and lean construction has provided practitioners with several tools and techniques to implement at various stages of a construction project. Following the global trend, this study investigated lean practice in Pakistan. The level of implementation of different lean tools and techniques, together with the potential benefits experienced through their implementation in construction projects of Pakistan, is analyzed. Opinions were sought, through a structured questionnaire, from practitioners involved in handling construction projects and representing four stakeholder groups: clients, consultants, contractors, and material suppliers. A total of 34 completed questionnaires were collected and statistically analyzed. The findings highlight that the pull approach, work standardization, just-in-time, increased visualization tools, the integrated project delivery method, and fail-safe for quality are common lean techniques implemented in the local construction industry, while waste reduction, client satisfaction, improved communication, visual control, and proper task management are the major benefits of applying lean construction.
Keywords: lean construction, lean tools and techniques, lean benefits, waste reduction, Pakistan
Procedia PDF Downloads 287
7822 Colorectal Resection in Endometriosis: A Study on Conservative Vascular Approach
Authors: A. Zecchin, E. Vallicella, I. Alberi, A. Dalle Carbonare, A. Festi, F. Galeone, S. Garzon, R. Raffaelli, P. Pomini, M. Franchi
Abstract:
Introduction: Severe endometriosis is a multiorgan disease that involves the bowel in 31% of cases. Disabling symptoms and deep infiltration can lead to bowel obstruction, and surgical bowel treatment may be needed. In these cases, colorectal segment resection is usually performed with ligature of the inferior mesenteric artery, as radically as in oncological surgery. This study examined surgery based on preservation of the intestinal vascular axis. Postoperative complication risks (mainly the rate of dehiscence of intestinal anastomoses) were assessed, and results were compared with those reported in the literature for classical colorectal resection. Materials and methods: This was a retrospective study of 62 patients with deep infiltrating endometriosis of the bowel who underwent segmental resection with intestinal vascular axis preservation between 2013 and 2016. Complications related to the intervention were assessed both during hospitalization and 30-60 days after resection. Particular attention was paid to the presence of anastomotic dehiscence. 52 patients were finally interviewed by telephone to investigate the presence or absence of intestinal constipation. Results and Conclusion: The segmental intestinal resection performed in this study ensured a more conservative vascular approach, with a lower rate of anastomotic dehiscence (1.6%) compared to classical literature data (10.0% to 11.4%). No complications were observed regarding spontaneous recovery of intestinal motility and bladder emptying. Constipation in some patients, even years after the intervention, is not assessable in the absence of a preoperative constipation assessment.
Keywords: anastomotic dehiscence, deep infiltrating endometriosis, colorectal resection, vascular axis preservation
Procedia PDF Downloads 204
7821 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul and high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, running different impairment compensation algorithms separately increases transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on DNN is a promising scheme. In this paper, we propose and apply DNN to compensate multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models optimize the constellation mapping signals at the transmitter and compensate multiple impairments of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for a 16-QAM coherent optical OFDM signal and demonstrate and analyze transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate for multiple impairments in fiber transmission effectively.
Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
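One of the quantities the DNN is trained to reduce, the PAPR, is easy to state precisely: the ratio of peak to average power of the time-domain OFDM symbol produced by an inverse DFT of the QAM data. The sketch below builds one 16-subcarrier OFDM symbol from 16-QAM data and computes its PAPR; the bit-to-symbol mapping table, subcarrier count, and data pattern are hypothetical, and no DNN is involved.

```python
import cmath
import math

def qam16(bits4):
    """Map 4 bits onto one 16-QAM symbol (a hypothetical Gray-style table)."""
    lut = {0b00: -3, 0b01: -1, 0b11: 1, 0b10: 3}
    return complex(lut[bits4 >> 2], lut[bits4 & 0b11])

def idft(symbols):
    """Naive inverse DFT: the core of the OFDM modulator."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * math.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def papr_db(signal):
    """Peak-to-average power ratio of a time-domain OFDM symbol, in dB."""
    powers = [abs(x) ** 2 for x in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

# One OFDM symbol: 16 subcarriers, each carrying a 16-QAM symbol.
data = [qam16(k % 16) for k in range(16)]
papr = papr_db(idft(data))
```

Because subcarriers can add coherently, PAPR grows with subcarrier count (bounded by 10·log₁₀N dB for unit-modulus constellations), which is why the paper treats pre-shaping the transmitted signal to lower PAPR as part of the DNN's job.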
Procedia PDF Downloads 143
7820 Artificial Intelligence for Cloud Computing
Authors: Sandesh Achar
Abstract:
Artificial intelligence is being increasingly incorporated into many applications across various sectors such as health, education, security, and agriculture. Recently, rapid development in cloud computing technology has resulted in AI being implemented in cloud computing to enhance and optimize the technology services rendered. The deployment of AI in cloud-based applications has brought about autonomous computing, whereby systems achieve stated results without human intervention. Despite the amount of research into autonomous computing, incorporating AI/ML into cloud computing to enhance its performance and resource allocation remains a fundamental challenge. This paper highlights different manifestations, roles, trends, and challenges related to AI-based cloud computing models, and reviews and highlights notable investigations and progress in the domain. Future directions are suggested for leveraging AI/ML in next-generation computing for emerging computing paradigms such as cloud environments. Adopting AI-based algorithms and techniques to increase operational efficiency, cost savings, and automation, reduce energy consumption, and solve complex cloud computing issues are the major findings outlined in this paper.
Keywords: artificial intelligence, cloud computing, deep learning, machine learning, internet of things
Procedia PDF Downloads 109
7819 Construction of Large Scale UAVs Using Homebuilt Composite Techniques
Authors: Brian J. Kozak, Joshua D. Shipman, Peng Hao Wang, Blake Shipp
Abstract:
The unmanned aerial system (UAS) industry is growing at a rapid pace. This growth has increased the demand for low-cost, custom-made, high-strength unmanned aerial vehicles (UAV). Most of this growth is in the 25 kg to 200 kg vehicle class. Vehicles of this size are beyond the size and scope of the simple wood-and-fabric designs commonly found in hobbyist aircraft; these high-end vehicles require stronger materials to complete their mission. Traditional aircraft construction materials such as aluminum are difficult to use without machining or advanced computer-controlled tooling. However, by using general aviation composite aircraft homebuilding techniques and materials, a large-scale UAV can be constructed cheaply and easily. Furthermore, these techniques could be used to easily manufacture custom-made composite shapes and airfoils that would be cost-prohibitive when using metals. These homebuilt aircraft techniques are being demonstrated by the researchers in the construction of a 75 kg aircraft.
Keywords: composite aircraft, homebuilding, unmanned aerial system industry, UAS, unmanned aerial vehicles, UAV
Procedia PDF Downloads 138
7818 Extraction of Nutraceutical Bioactive Compounds from the Native Algae Using Solvents with a Deep Natural Eutectic Point and Ultrasonic-assisted Extraction
Authors: Seyedeh Bahar Hashemi, Alireza Rahimi, Mehdi Arjmand
Abstract:
Food is the source of energy and growth through the breakdown of its vital components and plays a vital role in human health and nutrition. Many natural compounds found in plant and animal materials play a special role in biological systems, and many such compounds originate, directly or indirectly, from algae. Algae are an enormous source of polysaccharides and have gained much interest for human health. In this study, algal biomass extraction is conducted using natural deep eutectic solvents (NADES) and ultrasound-assisted extraction (UAE). The aim of this research is to extract bioactive compounds, including total carotenoids, antioxidant activity, and polyphenolic contents. For this purpose, the influence of three important extraction parameters, namely biomass-to-solvent ratio, temperature, and time, is studied with respect to their impact on the recovery of carotenoids and phenolics and on the extracts’ antioxidant activity. We employ Response Surface Methodology for process optimization; the influence of the independent parameters on each dependent variable is determined through Analysis of Variance. Our results show that ultrasound-assisted extraction for 50 min is the best extraction condition, and proline:lactic acid (1:1) and choline chloride:urea (1:2) extracts show the highest total phenolic contents (50.00 ± 0.70 mgGAE/gdw) and antioxidant activity [60.00 ± 1.70 mgTE/gdw and 70.00 ± 0.90 mgTE/gdw in the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assays, respectively]. Our results confirm that the combination of UAE and NADES provides an excellent alternative to organic solvents for sustainable and green extraction and has huge potential for use in industrial applications involving the extraction of bioactive compounds from algae.
This study is among the first attempts to optimize the combined use of ultrasound-assisted extraction and natural deep eutectic solvents and to investigate their application in extracting bioactive compounds from algae. We also discuss the future perspective of ultrasound technology, which helps to understand the complex mechanism of ultrasound-assisted extraction and further guide its application to algae.
Keywords: natural deep eutectic solvents, ultrasound-assisted extraction, algae, antioxidant activity, phenolic compounds, carotenoids
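The Response Surface Methodology step mentioned above can be illustrated in its simplest one-factor form: fit a quadratic through yields measured at three coded design levels and locate the stationary point. The yield values below are hypothetical and merely echo the order of magnitude of the reported phenolic contents; the study's actual design covers three factors, not one.

```python
# Hypothetical phenolic yields (mgGAE/gdw) measured at three coded
# levels of one extraction factor (-1, 0, +1), e.g. extraction time.
y_low, y_mid, y_high = 38.0, 50.0, 46.0

# Quadratic fit y = b0 + b1*x + b2*x^2 through the three points
# (these formulas are exact for the coded levels -1, 0, +1).
b0 = y_mid
b1 = (y_high - y_low) / 2
b2 = (y_high + y_low) / 2 - y_mid

x_opt = -b1 / (2 * b2)                      # stationary point of the fit
y_opt = b0 + b1 * x_opt + b2 * x_opt ** 2   # predicted optimum yield
```

With b2 negative the surface is concave, so the stationary point is a maximum inside the design range; the full RSM workflow does the same with cross-terms across all factors and checks each coefficient's significance via ANOVA.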
Procedia PDF Downloads 179
7817 The Application of FSI Techniques in Modeling of Realist Pulmonary Systems
Authors: Abdurrahim Bolukbasi, Hassan Athari, Dogan Ciloglu
Abstract:
Modeling the lung respiratory system, with its complex anatomy and biophysics, presents several challenges, including tissue-driven flow patterns and wall motion. Because the lungs stretch and recoil with each breath, the pulmonary system does not have static walls and structures. The direct relationship between airflow and tissue motion in the lung structures naturally calls for an FSI simulation technique. Therefore, the development of a coupled FSI computational model is an important step toward realistic simulation of pulmonary breathing mechanics. A simple but physiologically relevant three-dimensional deep lung geometry is designed, and a fluid-structure interaction (FSI) coupling technique is utilized to simulate the deformation of the lung parenchyma tissue that produces the airflow fields. The respiratory tissue system, as a complex phenomenon, is investigated with respect to respiratory patterns, fluid dynamics, tissue visco-elasticity, and tidal breathing period.
Procedia PDF Downloads 323