Search results for: deep vibro techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8371

7741 Review of Dielectric Permittivity Measurement Techniques

Authors: Ahmad H. Abdelgwad, Galal E. Nadim, Tarek M. Said, Amr M. Gody

Abstract:

The prime objective of this manuscript is to provide an intensive review of the techniques used for permittivity measurement. The appropriate measurement technique for a given application depends on the electrical and physical nature of the dielectric material, the degree of accuracy required, and the frequency of interest. Although many different kinds of instruments can be used, only measuring devices that provide reliable determinations of the required electrical properties of the unknown material in the frequency range of interest should be considered. The challenge in making precise dielectric property or permittivity measurements lies in designing the material specimen holder for those measurements (RF and microwave frequency ranges) and in adequately modeling the circuit for reliable computation of the permittivity from the electrical measurements. If the RF circuit parameters, such as the impedance or admittance, are measured accurately at a certain frequency, the material's permittivity at this frequency can be estimated from the equations that relate the dielectric properties of the material to the circuit parameters.
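As a concrete illustration of that last step, the sketch below recovers a complex relative permittivity from a measured admittance, assuming an idealized parallel-plate specimen holder; the geometry, frequency and values are illustrative, not taken from the review:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_permittivity(Y, freq, area, gap):
    """Estimate the complex relative permittivity from the measured
    admittance Y (siemens) of a parallel-plate specimen holder:
    C* = Y / (j*omega), eps_r* = C* * gap / (EPS0 * area)."""
    omega = 2 * math.pi * freq
    c_star = Y / (1j * omega)            # complex capacitance
    return c_star * gap / (EPS0 * area)

# Example: 1 MHz, 1 cm^2 plates, 1 mm gap, lossy dielectric.
freq, area, gap = 1e6, 1e-4, 1e-3
eps_true = 2.7 - 0.05j                   # assumed material, for the demo
Y = 1j * 2 * math.pi * freq * eps_true * EPS0 * area / gap
eps_est = complex_permittivity(Y, freq, area, gap)
print(round(eps_est.real, 3), round(eps_est.imag, 3))
```

In practice Y would come from an impedance analyser and the simple parallel-plate formula would be replaced by the circuit model appropriate to the chosen fixture.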

Keywords: dielectric permittivity, free space measurement, waveguide techniques, coaxial probe, cavity resonator

Procedia PDF Downloads 364
7740 Strategies for Improving Teaching and Learning in Higher Institutions: Case Study of Enugu State University of Science and Technology, Nigeria

Authors: Gertrude Nkechi Okenwa

Abstract:

Higher institutions, especially universities, are saddled with the responsibilities of teaching, learning, research, publication and social service for the production of graduates who are worthy in learning and character, and for the creation of up-to-date knowledge and innovations for the total socio-economic and political development of a nation. The purpose of this study was therefore to identify the teaching and learning techniques used in the Enugu State University of Science and Technology and to ascertain students' perceptions of these techniques. The survey research method was used to guide the study. The population comprised the second- and final-year students in the Faculty of Education, one hundred and twenty-six in all. Stratified random sampling was adopted, and a sample of sixty (60) students was drawn. The instrument used for data collection was a questionnaire. Mean and standard deviation were used to answer the research questions. The findings revealed that both direct instruction and constructivist techniques are used in the university. On the whole, the students perceived constructivist techniques to be more useful and effective than direct instruction. Based on the findings, recommendations were made, including diversification of teaching techniques.
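The mean-and-standard-deviation analysis described above can be sketched as follows; the items, the responses and the 2.50 cut-off on a 4-point scale are hypothetical stand-ins for the actual questionnaire:

```python
from statistics import mean, stdev

# Hypothetical 4-point Likert responses (4 = strongly agree) for two
# questionnaire items; item wording and scores are illustrative only.
responses = {
    "direct instruction is effective": [2, 3, 2, 1, 3, 2, 2, 3],
    "constructivist technique is effective": [4, 3, 4, 3, 4, 4, 3, 3],
}

CUTOFF = 2.50  # conventional decision rule on a 4-point scale

for item, scores in responses.items():
    m, sd = mean(scores), stdev(scores)
    verdict = "accepted" if m >= CUTOFF else "rejected"
    print(f"{item}: mean={m:.2f} sd={sd:.2f} -> {verdict}")
```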

Keywords: strategies, teaching and learning, constructivist technique, direct instructional technique

Procedia PDF Downloads 536
7739 Next-Viz: A Literature Review and Web-Based Visualization Tool Proposal

Authors: Railly Hugo, Igor Aguilar-Alonso

Abstract:

Software visualization is a powerful tool for understanding complex software systems. However, current visualization tools often lack features or are difficult to use, limiting their effectiveness. In this paper, we present next-viz, a proposed web-based visualization tool that addresses these challenges. We provide a literature review of existing software visualization techniques and tools and describe the architecture of next-viz in detail. Our proposed tool incorporates state-of-the-art visualization techniques and is designed to be user-friendly and intuitive. We believe next-viz has the potential to advance the field of software visualization significantly.

Keywords: software visualization, literature review, tool proposal, next-viz, web-based, architecture, visualization techniques, user-friendly, intuitive

Procedia PDF Downloads 77
7738 Assisting Dating of Greek Papyri Images with Deep Learning

Authors: Asimina Paparrigopoulou, John Pavlopoulos, Maria Konstantinidou

Abstract:

Dating papyri accurately is crucial not only for editing their texts but also for our understanding of palaeography and the history of writing, ancient scholarship, material culture, networks in antiquity, etc. Most ancient manuscripts offer little evidence regarding the time of their production, forcing papyrologists to date them on palaeographical grounds, a method often criticized for its subjectivity. By experimenting with data obtained from the Collaborative Database of Dateable Greek Bookhands and the PapPal online collections of objectively dated Greek papyri, this study shows that deep learning dating models, pre-trained on generic images, can achieve accurate chronological estimates for a test subset (67.97% accuracy for book hands and 55.25% for documents). To compare the estimates of these models with those of humans, experts were asked to complete a questionnaire in which samples of literary and documentary hands had to be sorted chronologically by century. The same samples were dated by the models in question. The results are presented and analysed.
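A minimal sketch of the kind of evaluation described, comparing century estimates against objectively dated samples by exact-match accuracy and by how many centuries the estimates are off; the data are invented for illustration:

```python
# Toy evaluation of chronological estimates; values are illustrative.
truth = [3, 4, 2, 5, 3, 1, 4, 2]   # true century (CE) of each sample
model = [3, 4, 3, 5, 2, 1, 4, 4]   # model's century estimates

correct = sum(t == p for t, p in zip(truth, model))
accuracy = correct / len(truth)
mae_centuries = sum(abs(t - p) for t, p in zip(truth, model)) / len(truth)
print(f"accuracy={accuracy:.2%}, off by {mae_centuries:.2f} centuries on average")
```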

Keywords: image classification, papyri images, dating

Procedia PDF Downloads 75
7737 Enhancing Code Security with AI-Powered Vulnerability Detection

Authors: Zzibu Mark Brian

Abstract:

As software systems become increasingly complex, ensuring code security is a growing concern. Traditional vulnerability detection methods often rely on manual code reviews or static analysis tools, which can be time-consuming and prone to errors. This paper presents a distinct approach to enhancing code security by leveraging artificial intelligence (AI) and machine learning (ML) techniques. Our proposed system utilizes a combination of natural language processing (NLP) and deep learning algorithms to identify and classify vulnerabilities in real-world codebases. By analyzing vast amounts of open-source code data, our AI-powered tool learns to recognize patterns and anomalies indicative of security weaknesses. We evaluated our system on a dataset of over 10,000 open-source projects, achieving an accuracy rate of 92% in detecting known vulnerabilities. Furthermore, our tool identified previously unknown vulnerabilities in popular libraries and frameworks, demonstrating its potential for improving software security.

Keywords: AI, machine learning, code security

Procedia PDF Downloads 23
7736 Classification of IoT Traffic Security Attacks Using Deep Learning

Authors: Anum Ali, Kashaf ad Dooja, Asif Saleem

Abstract:

The future trend in smart cities is towards the Internet of Things (IoT), which creates dynamic connections in a ubiquitous manner. Smart cities offer ease and flexibility in daily life. With small devices connected to cloud servers, IoT network traffic is growing exponentially, and its security is a serious concern, since the rising rate of cyber attack makes this traffic vulnerable. This paper reviews the latest machine learning approaches in related work and, to tackle the increasing rate of cyber attacks, applies a machine learning algorithm to IoT-based network traffic data. The proposed algorithm trains itself on the data and, using supervised learning, identifies different patterns of device interaction as a classifier tied to a specific IoT device class. The simulation results clearly identify the attacks and produce fewer false detections.
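As a hedged sketch of the supervised-classification idea (not the paper's actual algorithm or feature set), a minimal nearest-neighbour baseline on hypothetical per-flow features:

```python
import math

def nearest_neighbour(train, query):
    """Classify a traffic-flow feature vector by its nearest labelled
    neighbour (Euclidean distance) -- a minimal supervised baseline."""
    label, _ = min(((lbl, math.dist(vec, query)) for vec, lbl in train),
                   key=lambda t: t[1])
    return label

# Hypothetical flows: (mean packet size, packets/s, distinct ports).
train = [
    ((120.0, 4.0, 2.0), "benign"),
    ((131.0, 5.0, 3.0), "benign"),
    ((40.0, 900.0, 1.0), "flood-attack"),
    ((40.0, 700.0, 1.0), "flood-attack"),
]
print(nearest_neighbour(train, (45.0, 850.0, 1.0)))  # near the flood flows
```

A real deployment would normalise the features and use a trained model rather than raw 1-NN, but the labelled-examples-to-class mapping is the same shape.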

Keywords: IoT, traffic security, deep learning, classification

Procedia PDF Downloads 146
7735 Income and Factor Analysis of Small Scale Broiler Production in Imo State, Nigeria

Authors: Ubon Asuquo Essien, Okwudili Bismark Ibeagwa, Daberechi Peace Ubabuko

Abstract:

The broiler poultry subsector is dominated by small-scale production with low aggregate output. The high cost of inputs currently experienced in Nigeria tends to aggravate the situation; hence many broiler farmers struggle to break even. This study was designed to examine income and input factors in small-scale deep-litter broiler production in Imo State, Nigeria. Specifically, the study examined the socio-economic characteristics of small-scale deep-litter broiler farmers; estimated the costs and returns of broiler production in the area; analyzed input factors in broiler production; and examined the marketability age and profitability of the enterprise. A multi-stage sampling technique was adopted in selecting 60 small-scale broiler farmers who use the deep-litter system from 6 communities, through the use of a structured questionnaire. The socio-economic characteristics of the broiler farmers and the profitability and marketability age of the birds were described using descriptive statistical tools such as frequencies, means and percentages. Gross margin analysis was used to analyze the costs and returns of broiler production, while a Cobb-Douglas production function was employed to analyze input factors. The results revealed that the costs of feed (P<0.1), deep-litter material (P<0.05) and medication (P<0.05) had a significant positive relationship with the gross return of broiler farmers in the study area, while the costs of labour, fuel and day-old chicks were not significant. Furthermore, the gross profit margin was 80.7% for farmers who market their broilers at the 8th week of rearing, and 78.7% and 60.8% for farmers who market at the 10th and 12th weeks, respectively. The business is therefore profitable, but to varying degrees.
Government and development partners should make deliberate efforts to curb the current rise in the prices of poultry feeds, drugs and the timber materials used as bedding, so as to widen the profit margin and encourage more farmers to go into the business. The farmers equally need more technical assistance from extension agents with regard to timely and profitable marketing.
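The two analytical tools named above, gross margin analysis and a Cobb-Douglas production function fitted by ordinary least squares on logs, can be sketched as follows; all figures are synthetic, with only the 80.7% margin echoing the text:

```python
import numpy as np

# Gross margin: (total revenue - total variable cost) / total revenue.
def gross_margin_pct(revenue, variable_cost):
    return 100.0 * (revenue - variable_cost) / revenue

print(round(gross_margin_pct(1000.0, 193.0), 1))  # 80.7, the 8th-week case

# Cobb-Douglas: y = A * x1^b1 * x2^b2, so
# ln(y) = ln(A) + b1*ln(x1) + b2*ln(x2); fit by OLS on logs.
rng = np.random.default_rng(0)
feed = rng.uniform(50, 200, 30)          # hypothetical feed cost per farm
litter = rng.uniform(5, 20, 30)          # hypothetical litter-material cost
output = 2.0 * feed**0.6 * litter**0.3   # synthetic gross returns, no noise
X = np.column_stack([np.ones(30), np.log(feed), np.log(litter)])
coef, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
print(coef.round(3))                     # recovers [ln(2.0), 0.6, 0.3]
```

The fitted exponents b1, b2 are the output elasticities of each input, which is what the factor analysis in the study reports significance tests for.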

Keywords: broilers, factor analysis, income, small scale

Procedia PDF Downloads 75
7734 FMR1 Gene Carrier Screening for Premature Ovarian Insufficiency in Females: An Indian Scenario

Authors: Sarita Agarwal, Deepika Delsa Dean

Abstract:

Like the task of transferring photo images to artistic images, image-to-image translation aims to translate the data to the imitated data which belongs to the target domain. Neural Style Transfer and CycleGAN are two well-known deep learning architectures used for photo image-to-art image transfer. However, studies involving these two models concentrate on one-to-one domain translation, not one-to-multi domains translation. Our study tries to investigate deep learning architectures, which can be controlled to yield multiple artistic style translation only by adding a conditional vector. We have expanded CycleGAN and constructed Conditional CycleGAN for 5 kinds of categories translation. Our study found that the architecture inserting conditional vector into the middle layer of the Generator could output multiple artistic images.

Keywords: genetic counseling, FMR1 gene, fragile x-associated primary ovarian insufficiency, premutation

Procedia PDF Downloads 125
7733 Long-Term Conservation Tillage Impact on Soil Properties and Crop Productivity

Authors: Danute Karcauskiene, Dalia Ambrazaitiene, Regina Skuodiene, Monika Vilkiene, Regina Repsiene, Ieva Jokubauskaite

Abstract:

The main ambition of modern agriculture is to obtain an economically effective yield and to secure the soil's ecological sustainability. According to their effect on the main soil quality indexes, tillage systems may be separated into two types: conventional and conservation tillage. The goal of this study was to determine the impact of conservation and conventional primary soil tillage methods and soil fertility improvement measures on soil properties and crop productivity. Methods: The soil of the experimental site is a Dystric Glossic Retisol (WRB 2014) with a sandy loam texture. The trial was established in 2003 in the experimental crop-rotation field of the Vėžaičiai Branch of the Lithuanian Research Centre for Agriculture and Forestry. Trial factors and treatments: factor A, primary soil tillage (in autumn): deep ploughing (20-25 cm), shallow ploughing (10-12 cm), shallow ploughless tillage (8-10 cm); factor B, soil fertility improvement measures: plant residues, plant residues + straw, green manure 1st cut + straw, farmyard manure 40 t ha-1 + straw. The four-course crop rotation consisted of red clover, winter wheat, spring rape and spring barley with undersown grass. Results: Tillage had no statistically significant effect on topsoil (0-10 cm) pHKCl, which was 5.5-5.7. Throughout the experimental period, the highest soil pHKCl (5.65) was under shallow ploughless tillage. The organic fertilizers, particularly the grass biomass and farmyard manure, tended to increase the soil pHKCl. The content of plant-available phosphorus and potassium increased significantly under shallow ploughing compared with the other tillage systems. Farmyard manure increased those elements throughout the arable layer. The dissolved organic carbon concentration was significantly higher in the 0-10 cm soil layer under shallow ploughless tillage than under deep ploughing.
After the incorporation of clover biomass and farmyard manure, the concentration of dissolved organic carbon increased in the top soil layer. Throughout the experimental period, the largest amount of water-stable aggregates was found in the soil under shallow ploughless tillage, 12% higher than under deep ploughing. Soil moisture was likewise higher under shallow ploughing and shallow ploughless tillage (9-27%) than under deep ploughing. The lowest CO2 emission was found in the deep-ploughed soil and the highest under shallow ploughless tillage. The addition of organic fertilisers tended to increase CO2 emission, but there was no statistically significant difference between the different types of organic fertiliser. The crop yield was larger in the deep-ploughed soil than under shallow and shallow ploughless tillage.

Keywords: reduced tillage, soil structure, soil pH, biological activity, crop productivity

Procedia PDF Downloads 262
7732 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification

Authors: Samiah Alammari, Nassim Ammour

Abstract:

When a deep learning model is trained on a massive number of tasks presented successively, maintaining good performance requires preserving the data of previous tasks and retraining the model for each upcoming classification; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during the new task learning, while the second module learns to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conducted experiments on the Indian Pines HSI dataset. The results confirm the capability of the proposed method.

Keywords: continual learning, data reconstruction, remote sensing, hyperspectral image segmentation

Procedia PDF Downloads 253
7731 Plant Leaf Recognition Using Deep Learning

Authors: Aadhya Kaul, Gautam Manocha, Preeti Nagrath

Abstract:

Our environment comprises a wide variety of plants that are similar to one another; this similarity often makes the identification process tedious, increasing the workload of botanists all over the world. Botanists cannot be available at all times for such laborious plant identification, so there is a need for a quick classification model. Along with identifying a plant, it is also necessary to classify it as healthy or diseased, since a good lifestyle requires good food, and this food comes from healthy plants. A large number of techniques have been applied to classify plants as healthy or diseased. This paper proposes one such method, anomaly detection using autoencoders, on a collection of leaf images. An autoencoder model is built using Keras, the original leaf images are reconstructed, and a threshold loss is found in order to classify the plant leaves as healthy or diseased. A dataset of plant leaves is used to judge the reconstruction performance of the convolutional autoencoder, and an average accuracy of 71.55% is obtained.
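The thresholding step described above, classifying by reconstruction loss, can be sketched without the Keras model itself; the error values and the mean-plus-two-standard-deviations rule are illustrative assumptions, not the paper's exact threshold:

```python
from statistics import mean, stdev

def fit_threshold(healthy_errors, k=2.0):
    """Threshold = mean + k*std of the reconstruction errors on healthy
    leaves; anything reconstructed worse than this is flagged."""
    return mean(healthy_errors) + k * stdev(healthy_errors)

def classify(error, threshold):
    return "diseased" if error > threshold else "healthy"

# Hypothetical per-image reconstruction MSEs from an autoencoder that
# was trained only on healthy leaves.
healthy_errors = [0.010, 0.012, 0.011, 0.013, 0.009, 0.012]
thr = fit_threshold(healthy_errors)
print(round(thr, 4))
print(classify(0.011, thr), classify(0.051, thr))
```

The key property is that the autoencoder, trained only on healthy leaves, reconstructs diseased leaves poorly, so their error lands above the threshold.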

Keywords: convolutional autoencoder, anomaly detection, web application, FLASK

Procedia PDF Downloads 157
7730 Speed Control of DC Motor Using Optimization Techniques Based PID Controller

Authors: Santosh Kumar Suman, Vinod Kumar Giri

Abstract:

The goal of this paper is to design a speed controller for a DC motor by choosing the PID parameters using genetic algorithms (GAs). The DC motor is extensively utilized in numerous applications such as steel plants, electric trains, cranes and a great deal more. A DC motor can be represented by a nonlinear model when nonlinearities such as magnetic saturation are considered. To provide effective control, nonlinearities and uncertainties in the model must be taken into account in the control design. Here the DC motor is considered as a third-order system, and three types of tuning techniques for the PID parameters are examined. A separately excited DC motor has been modelled in MATLAB, and its speed is examined using the proportional, integral and derivative gains (KP, KI, KD) of the PID controller, since classically tuned PID controllers fail to control the drive when the load parameters change. The principal aim of this paper is to analyse the performance of optimization techniques, namely the genetic algorithm (GA), for improving the PID controller parameters for speed control of a DC motor, and to list their advantages over traditional tuning strategies. The results obtained from the GA were compared with those obtained from the traditional method, and it was found that the optimization techniques outperform customary tuning practices of ordinary PID controllers.
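A toy version of GA-based PID tuning can be sketched as below. The plant here is a simplified first-order speed model rather than the paper's third-order MATLAB model, and the gain ranges and GA settings are assumptions; the derivative gain is deliberately kept small so the explicit Euler simulation stays stable:

```python
import random

def iae(gains, t_end=2.0, dt=0.001, setpoint=1.0):
    """Integral of absolute error (IAE) for the step response of a
    first-order motor-speed model (unit gain, 0.5 s time constant)
    under PID control -- a stand-in for the full third-order plant."""
    kp, ki, kd = gains
    omega, integ, prev_e, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - omega
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        omega += dt * (u - omega) / 0.5    # d(omega)/dt = (u - omega)/tau
        if abs(omega) > 1e6:               # unstable gains: flat penalty
            return 1e6
        cost += abs(e) * dt
    return cost

BOUNDS = [(0.0, 10.0), (0.0, 10.0), (0.0, 0.4)]  # Kp, Ki, Kd ranges

def ga_tune(pop_size=20, gens=15, seed=1):
    """Tiny elitist GA over (Kp, Ki, Kd): averaging crossover of two
    parents from the top half, Gaussian mutation, best kept as-is."""
    rng = random.Random(seed)
    clamp = lambda g, b: min(b[1], max(b[0], g))
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=iae)
        best, parents = scored[0], scored[:pop_size // 2]
        pop = [best] + [
            [clamp((a + b) / 2 + rng.gauss(0.0, 0.3), bnd)
             for a, b, bnd in zip(rng.choice(parents),
                                  rng.choice(parents), BOUNDS)]
            for _ in range(pop_size - 1)
        ]
    best = min(pop, key=iae)
    return best, iae(best)

gains, cost = ga_tune()
print([round(g, 2) for g in gains], round(cost, 4))
```

Because the best individual is carried over unchanged each generation, the best IAE can only improve, which is the property the comparison against hand-tuned gains relies on.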

Keywords: DC motor, PID controller, optimization techniques, genetic algorithm (GA), objective function, IAE

Procedia PDF Downloads 415
7729 Recovery of Fried Soybean Oil Using Bentonite as an Adsorbent: Optimization, Isotherm and Kinetics Studies

Authors: Prakash Kumar Nayak, Avinash Kumar, Uma Dash, Kalpana Rayaguru

Abstract:

Soybean oil is one of the most widely consumed cooking oils worldwide. Deep-fat frying of foods at higher temperatures adds a unique flavour, golden brown colour and crispy texture to foods, but it also brings various changes to the oil, such as hydrolysis, oxidation, hydrogenation and thermal alteration. The peroxide value (PV) is one of the most important indicators of the quality of deep-fat fried oil. Using bentonite as an adsorbent, the PV can be reduced, thereby improving the quality of the soybean oil. In this study, operating parameters such as heating time of the oil (10, 15, 20, 25 and 30 h), contact time (5, 10, 15, 20 and 25 h) and concentration of adsorbent (0.25, 0.5, 0.75, 1.0 and 1.25 g/100 ml of oil) were optimized by response surface methodology (RSM), with the percentage reduction of PV as the response. Adsorption data were analysed by fitting the Langmuir and Freundlich isotherm models. The results show that the Langmuir model gives the best fit. The adsorption process was also found to follow a pseudo-second-order kinetic model.
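The Langmuir fit mentioned above is commonly done through the linearised form Ce/qe = Ce/qm + 1/(K*qm), so plotting Ce/qe against Ce gives slope 1/qm and intercept 1/(K*qm). A sketch with synthetic data (the study's own isotherm constants are not reported here):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

qm_true, K_true = 25.0, 0.4          # hypothetical capacity and constant
ce = [1.0, 2.0, 5.0, 10.0, 20.0]     # equilibrium concentrations
qe = [qm_true * K_true * c / (1 + K_true * c) for c in ce]

slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
qm_est, K_est = 1.0 / slope, slope / intercept
print(round(qm_est, 2), round(K_est, 2))   # recovers qm and K
```

With noise-free synthetic data the fit recovers qm and K exactly; with real adsorption data the R-squared of this line against the Freundlich linearisation is what decides which model "fits best".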

Keywords: bentonite, Langmuir isotherm, peroxide value, RSM, soybean oil

Procedia PDF Downloads 370
7728 Studying Relationship between Local Geometry of Decision Boundary with Network Complexity for Robustness Analysis with Adversarial Perturbations

Authors: Tushar K. Routh

Abstract:

If inputs are engineered in certain manners, they can degrade deep neural networks' (DNN) performance by facilitating misclassifications, a phenomenon well known as adversarial attacks, which exposes networks' vulnerability. Recent studies have unfolded the relationship between the vulnerability of such networks and their complexity. In this paper, the distinctive influence of additional convolutional layers on the decision boundaries of several DNN architectures was investigated. To engineer inputs from widely known image datasets such as MNIST, Fashion MNIST, and CIFAR-10, we exercised the One Step Spectral Attack (OSSA) and Fast Gradient Method (FGM) techniques. The effects of adding layers on the robustness of the architectures were analyzed. To explain the results, the separation width from linear class partitions and the local geometry (curvature) near the decision boundary were examined. The results reveal that model complexity plays a significant role in adjusting relative distances from margins, as well as the local features of decision boundaries, which impact robustness.
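The Fast Gradient Method used above can be sketched on a toy logistic classifier: perturb the input by eps times the sign of the input-gradient of the loss, pushing it toward the decision boundary. The weights, input and eps below are illustrative:

```python
import numpy as np

def fgm_attack(x, y, w, b, eps=0.25):
    """Fast Gradient Method on a logistic classifier: move x by
    eps * sign(grad_x loss) to increase the cross-entropy loss."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid probability of class 1
    grad_x = (p - y) * w               # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0      # toy "trained" weights
x, y = np.array([1.0, 0.5]), 1.0       # correctly classified input

def prob(v):
    return 1.0 / (1.0 + np.exp(-(v @ w + b)))

x_adv = fgm_attack(x, y, w, b)
print(round(float(prob(x))), round(float(prob(x_adv)), 3))
```

For this linear model the distance to the boundary is exactly the margin divided by the weight norm, which is why the paper's separation-width analysis is informative about how much eps an attack needs.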

Keywords: DNN robustness, decision boundary, local curvature, network complexity

Procedia PDF Downloads 69
7727 Distribution and Segregation of Aerosols in Ambient Air

Authors: S. Ramteke, K. S. Patel

Abstract:

Aerosols are complex mixtures of particulate matter (PM) comprising carbons, silica, elements, various salts, etc. Aerosols get deep into the human lungs and cause a broad range of health effects, in particular respiratory and cardiovascular illnesses, and they are among the major culprits of climate change. They are emitted by high-temperature processes, e.g., vehicles and steel, sponge, cement and thermal power plants. Raipur (21˚14' to 22˚33' N and 81˚38' to 82˚6' E) is a growing industrial city in central India with a population of two million. In this work, the distribution of inorganics (i.e., Cl⁻, NO₃⁻, SO₄²⁻, NH₄⁺, Na⁺, K⁺, Mg²⁺, Ca²⁺, Al, Cr, Mn, Fe, Ni, Cu, Zn and Pb) associated with the PM in ambient air is described. The PM₁₀ in the ambient air of Raipur city was collected over one year (December 2014 - December 2015). The PM₁₀ was segregated into nine modes, i.e. PM₁₀.₀₋₉.₀, PM₉.₀₋₅.₈, PM₅.₈₋₄.₇, PM₄.₇₋₃.₃, PM₃.₃₋₂.₁, PM₂.₁₋₁.₁, PM₁.₁₋₀.₇, PM₀.₇₋₀.₄ and PM₀.₄, to identify their emission sources and health hazards. The analysis of ions and metals was carried out by ion chromatography and TXRF, respectively. The PM₁₀ concentration (n=48) ranged from 100 to 450 µg/m³ with a mean value of 73.57±20.82 µg/m³. The highest concentrations of PM₄.₇₋₃.₃, PM₂.₁₋₁.₁ and PM₁.₁₋₀.₇ were observed in the commercial, residential and industrial areas, respectively. The effect of meteorology, i.e., temperature, humidity, wind speed and wind direction, on the PM₁₀ and associated elemental concentrations in the air is discussed.

Keywords: ambient aerosol, ions, metals, segregation

Procedia PDF Downloads 196
7726 Vector-Based Analysis in Cognitive Linguistics

Authors: Chuluundorj Begz

Abstract:

This paper presents a dynamic, psycho-cognitive approach to the study of human verbal thinking on the basis of typologically different languages (Mongolian, English and Russian). Topological equivalence in verbal communication serves as a basis for the universality of mental structures and therefore of deep structures. The mechanism of verbal thinking consists, at the deep level, of basic concepts, rules for integration and classification, and neural networks of vocabulary. In the neurocognitive study of language, neural architecture and the neuropsychological mechanism of verbal cognition are the basis of vector-based modeling. Verbal perception and interpretation of the infinite set of meanings and propositions in the mental continuum can be modeled by applying tensor methods. Euclidean and non-Euclidean spaces are applied for the description of the human semantic vocabulary and higher-order structures.
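One elementary piece of such vector-based modeling is semantic similarity in a Euclidean embedding space, usually the cosine of the angle between concept vectors; the 3-dimensional vectors below are purely illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two semantic vectors in a
    Euclidean embedding space."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical 3-d concept vectors from a mental-lexicon model.
horse = (0.9, 0.8, 0.1)
steed = (0.85, 0.75, 0.2)
ledger = (0.1, 0.05, 0.9)

print(round(cosine(horse, steed), 3), round(cosine(horse, ledger), 3))
```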

Keywords: Euclidean spaces, isomorphism and homomorphism, mental lexicon, mental mapping, semantic memory, verbal cognition, vector space

Procedia PDF Downloads 517
7725 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and of the time, and therefore cost, of applying these methods in the drug development decision-making cycle. We observe that the optimal approach varies depending on the dataset and that engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
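Since the text singles out Gaussian Processes and uncertainty quantification, here is a minimal exact GP regression sketch with an RBF kernel, showing the posterior variance growing away from the training data; the data, length scale and noise level are illustrative, not from the study:

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, noise=1e-6):
    """Exact GP regression with an RBF kernel: posterior mean and
    per-point variance (the uncertainty estimate) at test inputs Xs."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

# Hypothetical assay: activity measured at four compound "descriptors".
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([1.5, 6.0]))
print(mean.round(3), var.round(3))
```

The variance is small at 1.5 (interpolation) and close to the prior at 6.0 (extrapolation); it is exactly this calibrated "I don't know" signal that matters in a go/no-go drug development decision.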

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 74
7724 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market; all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little data publicly available and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and full of noise, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTMs have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning the long-term dependencies that traditional neural networks fail to capture, thanks to their memory function.
In this study, a simple LSTM, a stacked LSTM and a masked LSTM based model are discussed with respect to varying input sequences (three days, seven days and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, an Empirical Mode Decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results were compared with traditional time-series models (ARIMA), shallow neural networks and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored more within the asset management industry.
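The varying input sequences (three-, seven- and 14-day windows) imply the standard sliding-window construction that turns a price series into supervised (window, next value) pairs for an LSTM, sketched below with invented prices:

```python
def make_windows(series, lookback):
    """Build (input window, next value) pairs for sequence models,
    matching the 3-, 7- and 14-day lookbacks compared in the text."""
    return [(series[i - lookback:i], series[i])
            for i in range(lookback, len(series))]

prices = [100.1, 100.4, 100.2, 100.9, 101.3, 101.0, 101.6]  # toy series
pairs = make_windows(prices, lookback=3)
print(len(pairs), pairs[0])
```

In the EMD variant the same windowing would be applied to each intrinsic mode function separately before feeding the model.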

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 133
7723 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT) where the artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to its image formation process where a large number of independent detectors are involved, and they are assumed to yield consistent measurements. There are a number of different artefact types including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desired to remove nuisance factors from the degraded image leaving the fundamental intrinsic information that can provide better interpretation of the anatomical and pathological characteristics. However, it is considered as a difficult task due to the high dimensionality and variability of data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework where the denoising auto-encoders are stacked building multiple layers. The denoising auto-encoder is a variant of a classical auto-encoder that takes an input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction the size of which is the same as the size of the input data. The reconstruction error can be measured by the traditional squared error assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. 
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
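As a rough illustration of the denoising auto-encoder building block described above, the following Python sketch implements a single tied-weight denoising layer trained by stochastic gradient descent on the squared reconstruction error. It is a minimal stand-in, not the authors' network: the residual-driven dropout scheme and the Total Variation decomposition are omitted, and all sizes, noise levels, and learning rates are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """One tied-weight layer of a stacked denoising auto-encoder (sketch)."""

    def __init__(self, n_in, n_hidden, noise=0.3, lr=0.5):
        self.W = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.noise, self.lr = noise, lr

    def step(self, x):
        # Corrupt the input, encode to a hidden representation, decode back.
        x_tilde = x * (rng.random(x.shape) > self.noise)
        h = sigmoid(x_tilde @ self.W + self.b)
        x_hat = sigmoid(h @ self.W.T + self.c)
        # Squared reconstruction error against the *clean* input.
        err = x_hat - x
        # Back-propagate through the tied-weight decoder and encoder.
        d_out = err * x_hat * (1 - x_hat)
        d_hid = (d_out @ self.W) * h * (1 - h)
        self.W -= self.lr * (np.outer(x_tilde, d_hid) + np.outer(d_out, h))
        self.b -= self.lr * d_hid
        self.c -= self.lr * d_out
        return float((err ** 2).sum())
```

In a stacked configuration, the hidden representation of one trained layer becomes the input of the next, which is the layering the abstract refers to.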

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 187
7722 Vibration Transmission across Junctions of Walls and Floors in an Apartment Building: An Experimental Investigation

Authors: Hugo Sampaio Libero, Max de Castro Magalhaes

Abstract:

The perception of sound radiated from a building floor is greatly influenced by the rooms in which it is immersed and by the positions of both listener and source. The main question that remains unanswered is related to the influence of the source position on the sound power radiated by a complex wall-floor system in buildings. This research is concerned with the investigation of vibration transmission across walls and floors in buildings. It is primarily based on the determination of the vibration reduction index via experimental tests. Knowledge of this parameter may help in predicting noise and vibration propagation in building components. First, the physical mechanisms involved in vibration transmission across structural junctions are described. An experimental setup is performed to aid this investigation. The experimental tests have shown that the vibration generated in the walls and floors is directly related to their size and boundary conditions. It is also shown that the vibration source position can affect the overall vibration spectrum significantly. Second, the characteristics of the noise spectra inside the rooms due to an impact source (tapping machine) are also presented. Conclusions are drawn for the general trend of the vibration and noise spectra of the structural components and rooms, respectively. In summary, the aim of this paper is to investigate the vibro-acoustical behavior of building floors and walls under floor impact excitation. The impact excitation was applied at distinct positions on the slab. The analysis has highlighted the main physical characteristics of the vibration transmission mechanism.
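For context, the vibration reduction index is commonly evaluated from junction measurements in the general form used in ISO 10848 (a sketch from memory, assuming the direction-averaged velocity level differences and the equivalent absorption lengths of the two elements are already available):

```python
import math

def vibration_reduction_index(D_v_ij, D_v_ji, l_ij, a_i, a_j):
    """Direction-averaged vibration reduction index K_ij in dB (sketch).

    D_v_ij, D_v_ji : velocity level differences in both directions (dB)
    l_ij           : common junction length between elements i and j (m)
    a_i, a_j       : equivalent absorption lengths of elements i and j (m)
    """
    D_avg = 0.5 * (D_v_ij + D_v_ji)
    return D_avg + 10.0 * math.log10(l_ij / math.sqrt(a_i * a_j))
```

For example, with equal absorption lengths matching the junction length, K_ij reduces to the averaged level difference alone.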

Keywords: vibration transmission, vibration reduction index, impact excitation, experimental tests

Procedia PDF Downloads 89
7721 Parkinson’s Disease Hand-Eye Coordination and Dexterity Evaluation System

Authors: Wann-Yun Shieh, Chin-Man Wang, Ya-Cheng Shieh

Abstract:

This study aims to develop an objective scoring system to evaluate hand-eye coordination and hand dexterity for Parkinson’s disease. This system contains three boards, each implemented with sensors to sense a user’s finger operations. The operations include the peg test, the block test, and the blind block test. A user has to use vision, hearing, and tactile abilities to finish these operations, and the board records the results automatically. These results can help physicians to evaluate a user’s reaction, coordination, and dexterity functions. The results are collected in a cloud database for further analysis and statistics. A researcher can use this system to obtain systematic, graphic reports for an individual or a group of users. In particular, a deep learning model is developed to learn the features of the data from different users. This model will help physicians to assess Parkinson’s disease symptoms with a more intelligent algorithm.

Keywords: deep learning, hand-eye coordination, reaction, hand dexterity

Procedia PDF Downloads 61
7720 Second Order Optimality Conditions in Nonsmooth Analysis on Riemannian Manifolds

Authors: Seyedehsomayeh Hosseini

Abstract:

Much attention has been paid over centuries to understanding and solving the problem of minimization of functions. Compared to linear programming and nonlinear unconstrained optimization problems, nonlinear constrained optimization problems are much more difficult. Since the procedure of finding an optimizer is a search based on the local information of the constraints and the objective function, it is very important to develop techniques using geometric properties of the constraints and the objective function. In fact, differential geometry provides a powerful tool to characterize and analyze these geometric properties. Thus, there is clearly a link between the techniques of optimization on manifolds and standard constrained optimization approaches. Furthermore, there are manifolds that are not defined as constrained sets in R^n; an important example is the class of Grassmann manifolds. Hence, to solve optimization problems on these spaces, intrinsic methods are used. In a nondifferentiable problem, the gradient information of the objective function generally cannot be used to determine the direction in which the function is decreasing. Therefore, techniques of nonsmooth analysis are needed to deal with such a problem. As a manifold, in general, does not have a linear structure, the usual techniques, which are often used in nonsmooth analysis on linear spaces, cannot be applied, and new techniques need to be developed. This paper presents necessary and sufficient conditions for a strict local minimum of extended real-valued, nonsmooth functions defined on Riemannian manifolds.

Keywords: Riemannian manifolds, nonsmooth optimization, lower semicontinuous functions, subdifferential

Procedia PDF Downloads 356
7719 Effect of Relaxation Techniques in Reducing Stress Level among Mothers of Children with Autism Spectrum Disorder

Authors: R. N. Jay A. Ablog, M. N. Dyanne R. Del Carmen, Roma Rose A. Dela Cruz, Joselle Dara M. Estrada, Luke Clifferson M. Gagarin, Florence T. Lang-ay, Ma. Dayanara O. Mariñas, Maria Christina S. Nepa, Jahraine Chyle B. Ocampo, Mark Reynie Renz V. Silva, Jenny Lyn L. Soriano, Loreal Cloe M. Suva, Jackelyn R. Torres

Abstract:

Background: To date, there is a dearth of literature on the effect of relaxation techniques in lowering the stress level of mothers of children with autism spectrum disorder (ASD). Aim: To investigate the effectiveness of 4-week relaxation techniques in reducing the stress level of mothers of children with ASD. Methods: Quasi-experimental design. The study included 25 mothers (10 experimental, 15 control) who were chosen via purposive sampling. The mothers were recruited in the different SPED centers in Baguio City and La Trinidad and in the community. The statistics used were the t-test and the related t-test. Results: The overall weighted mean score after the 4-week training is 2.3, indicating that the relaxation techniques introduced were moderately effective in lowering stress level. Statistical analysis (t-test; CV = 4.51 > TV = 2.26) showed a significant difference in the stress level reduction of mothers in the experimental group pre- and post-intervention. There is also a significant difference in stress level reduction between the control and experimental groups (related t-test; CV = 2.08 > TV = 2.07). The relaxation techniques introduced were favorable, cost-effective, and easy-to-perform interventions to decrease stress level.
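The related (paired) t-test used in such pre/post designs compares each mother's scores before and after the intervention; a minimal sketch (the scores below are hypothetical, not the study's data):

```python
import math

def paired_t(before, after):
    """Paired (related-samples) t statistic and degrees of freedom."""
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical pre/post stress scores for five participants.
t, df = paired_t([30, 28, 35, 32, 31], [25, 27, 30, 29, 28])
```

The computed t is then compared against the tabled critical value (the "TV" in the abstract) at the chosen significance level and df.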

Keywords: relaxation techniques, mindful eating, progressive muscle relaxation, breathing exercise, autism spectrum disorder

Procedia PDF Downloads 428
7718 A Bibliometric Analysis: An Integrative Systematic Review through the Paths of Vitiviniculture

Authors: Patricia Helena Dos Santos Martins, Mateus Atique, Lucas Oliveira Gomes Ferreira

Abstract:

There is a growing body of literature that recognizes the importance of bibliometric analysis in tracing the evolutionary nuances of a specific field while shedding light on the emerging areas in that field. Surprisingly, its application in manufacturing research on vitiviniculture is relatively new and, in many instances, underdeveloped. The aim of this study is to present an overview of the bibliometric methodology, with a particular focus on the Meta-Analytical Approach Theory model (TEMAC), while offering step-by-step results on the available techniques and procedures for carrying out studies about the elements associated with vitiviniculture. TEMAC is a method that uses metadata to generate heat maps, graphs of keyword relationships, and more, with the aim of revealing relationships between authors and articles and, mainly, understanding how the topic has evolved over the study period, thus revealing which subthemes were worked on and the main techniques and applications, helping to understand the topic under study and guide researchers in generating new research. From studies carried out using TEMAC, it is possible to identify which techniques within statistical process control are most used in the wine industry and thus assist professionals in the area in applying the best techniques. It is expected that this paper will be a useful resource for gaining insights into the available techniques and procedures for carrying out studies about vitiviniculture, the cultivation of vineyards, the production of wine, and all the ethnography connected with it.

Keywords: TEMAC, vitiviniculture, statistical process control, quality

Procedia PDF Downloads 109
7717 The Accuracy of Parkinson's Disease Diagnosis Using [123I]-FP-CIT Brain SPECT Data with Machine Learning Techniques: A Survey

Authors: Lavanya Madhuri Bollipo, K. V. Kadambari

Abstract:

Objective: To discuss key issues in the diagnosis of Parkinson’s disease (PD), the features influencing PD progression, the importance of brain SPECT data in PD diagnosis, and the essentiality of machine learning techniques in its early diagnosis. An accurate and early diagnosis of PD is nowadays a challenge, as clinical symptoms in PD arise only when there is more than 60% loss of dopaminergic neurons. So far, there are no laboratory tests for the diagnosis of PD, causing a high rate of misdiagnosis, especially when the disease is in its early stages. Recent neuroimaging studies with brain SPECT using 123I-Ioflupane (DaTSCAN) as a radiotracer have shown it to be widely used to assist the diagnosis of PD, even in its early stages. Machine learning techniques can be used in combination with image analysis procedures to develop computer-aided diagnosis (CAD) systems for PD. This paper addresses recent studies involving the diagnosis of PD in its early stages using brain SPECT data with machine learning techniques.
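As an illustrative sketch of the SVM-based CAD pipeline such studies employ, the following trains a linear SVM by Pegasos-style sub-gradient descent on synthetic two-feature data standing in for striatal binding ratios. Everything here is hypothetical (class means, spreads, hyperparameters); it shows the shape of the approach, not any surveyed study's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient descent for a linear SVM, y in {-1, +1}."""
    w = np.zeros(X.shape[1]); b = 0.0; t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:               # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                        # shrink only
                w = (1 - eta * lam) * w
    return w, b

# Hypothetical striatal binding ratios: PD cases lower than controls.
controls = rng.normal([2.8, 2.5], 0.3, (40, 2))
patients = rng.normal([1.2, 1.0], 0.3, (40, 2))
X = np.vstack([controls, patients])
y = np.array([1] * 40 + [-1] * 40)

w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

In practice, the surveyed studies extract features from the DaTSCAN volumes (e.g., regional uptake values) before the SVM stage, and often use kernelized SVMs rather than a linear one.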

Keywords: Parkinson disease (PD), dopamine transporter, single-photon emission computed tomography (SPECT), support vector machine (SVM)

Procedia PDF Downloads 390
7716 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to a deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulties in noticing and expressing a novel business case beyond a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chat bot (conversational AI) delivers a given set of business cases, it is prompted to self-measure its performance and rethink every unknown dialog flow to improve the service by retraining with those new business cases. If the training process reaches a bottleneck and incurs some difficulties, human personnel will be informed for further instructions. He or she may retrain the chat bot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service.
With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chat bot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chat bot serves as a concierge with polite conversation for visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 132
7715 Adjustment and Compensation Techniques for the Rotary Axes of Five-axis CNC Machine Tools

Authors: Tung-Hui Hsu, Wen-Yuh Jywe

Abstract:

Five-axis computer numerical control (CNC) machine tools (three linear and two rotary axes) are ideally suited to the fabrication of complex workpieces, such as dies, turbo blades, and cams. The locations of the axis average line and centerline of the rotary axes strongly influence the performance of these machines; however, techniques to compensate for eccentric error in the rotary axes remain weak. This paper proposes optical (Non-Bar) techniques capable of calibrating five-axis CNC machine tools and compensating for eccentric error in the rotary axes. This approach employs the measurement path in ISO/CD 10791-6 to determine the eccentric error in the two rotary axes, for which compensatory measures can be implemented. Experimental results demonstrate that the proposed techniques can improve the performance of various five-axis CNC machine tools by more than 90%. Finally, the results of a cutting test using a B-type five-axis CNC machine tool confirmed the usefulness of the proposed compensation technique.
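A common way to estimate the eccentric error of a rotary axis from points measured on a nominally circular path is a least-squares circle fit: the offset of the fitted center from the nominal axis position is the eccentricity to compensate. The sketch below uses the algebraic (Kåsa) fit and illustrates the general idea only; it is not the paper's Non-Bar measurement procedure.

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares (Kasa) circle fit: returns center (cx, cy) and radius.

    Expands (x-cx)^2 + (y-cy)^2 = r^2 into the linear system
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2 and solves it.
    The fitted center's offset from the nominal axis position is the
    eccentric error estimate.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    c, d, e = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = c / 2.0, d / 2.0
    r = np.sqrt(e + cx**2 + cy**2)
    return cx, cy, r

# Simulated measurement: 100 mm circular path with a small eccentric offset.
t = np.linspace(0, 2 * np.pi, 36, endpoint=False)
x = 0.05 + 100.0 * np.cos(t)
y = -0.02 + 100.0 * np.sin(t)
cx, cy, r = fit_circle(x, y)
```

Once (cx, cy) is known, the controller can shift the programmed rotary-axis center by the opposite amount to cancel the eccentricity.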

Keywords: calibration, compensation, rotary axis, five-axis computer numerical control (CNC) machine tools, eccentric error, optical calibration system, ISO/CD 10791-6

Procedia PDF Downloads 375
7714 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today’s wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. Unacceptably low accuracy, of less than 50%, is found to undermine the traditional application of DL classification for SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification against a single SNR threshold with two labels: less than, or greater than or equal to, the threshold. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal-modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves individual SNR prediction accuracy of 92%, composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
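The divide-and-conquer fusion step can be sketched as follows: each binary CNN votes on whether the SNR is at or above its threshold, and counting the votes indexes the SNR bin. This is a simplified stand-in for the paper's fusion method; the 4 dB bin width and the monotone-decisions assumption are illustrative choices.

```python
def fuse_snr(binary_decisions, thresholds):
    """Combine per-threshold binary classifier outputs into an SNR bin.

    binary_decisions[k] is 1 if the k-th classifier predicts
    SNR >= thresholds[k], else 0. If the decisions are monotone
    (all 1s below the true SNR, all 0s above), the count of 1s
    directly indexes the SNR bin.
    """
    votes = sum(binary_decisions)            # classifier fusion by counting
    if votes == 0:
        return f"SNR < {thresholds[0]} dB"
    if votes == len(thresholds):
        return f"SNR >= {thresholds[-1]} dB"
    return f"{thresholds[votes - 1]} <= SNR < {thresholds[votes]} dB"

# Thresholds spanning -20..32 dB in 4 dB steps (an assumed bin width).
thresholds = list(range(-20, 33, 4))
# Ideal decisions for a true SNR of 10 dB.
decisions = [1 if 10 >= th else 0 for th in thresholds]
```

In the paper's setting, each CNN's soft output could also be fused with weights rather than hard votes; counting is the simplest consistent rule.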

Keywords: classification, CNN, deep learning, prediction, SNR

Procedia PDF Downloads 130
7713 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity involving swear words, racist slurs, etc.), and thus context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements as compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
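The core architectural point, that a classifier must see the target utterance and its context jointly, can be illustrated with the simplest possible stand-in: a perceptron on hypothetical binary features. Real contextual models are neural and operate on text; this toy only shows why a context-blind model fails on context-dependent labels.

```python
import numpy as np

# Toy contextual toxicity data: feature 0 flags an ambiguous target
# utterance, feature 1 flags a hostile conversational context. The
# (hypothetical) label is toxic only when BOTH are present, so any
# model that ignores the context feature cannot separate these cases.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([-1, -1, -1, 1])          # +1 = toxic in context

# A perceptron over the concatenated [target, context] features;
# it converges here because the data is linearly separable.
w = np.zeros(2)
b = 0.0
for _ in range(50):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:      # misclassified -> update
            w += yi * xi
            b += yi

pred = np.where(X @ w + b > 0, 1, -1)
```

A target-only model sees [1, ?] for both the toxic and non-toxic ambiguous cases and must label them identically; the jointly-fed classifier separates them, which is the intuition behind the hierarchical architectures proposed in the paper.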

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 166
7712 A Comparative Study on Automatic Feature Classification Methods of Remote Sensing Images

Authors: Lee Jeong Min, Lee Mi Hee, Eo Yang Dam

Abstract:

Geospatial feature extraction is a very important issue in remote sensing research. Image classification has traditionally been based on statistical techniques, but in recent years, data mining and machine learning techniques for automated image processing have been applied to remote sensing, with a focus on the possibility of generating improved results. In this study, artificial neural network and decision tree techniques are applied to classify high-resolution satellite images; the results are compared with those of maximum likelihood classification (MLC), a statistical technique, and the pros and cons of each technique are analysed.
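The MLC baseline assigns each pixel to the class whose multivariate Gaussian (with mean and covariance estimated from training samples) gives the highest likelihood. A minimal sketch with hypothetical two-band class statistics:

```python
import numpy as np

def mlc_classify(pixel, class_stats):
    """Maximum likelihood classification of one pixel.

    class_stats maps a class name to (mean_vector, covariance_matrix),
    both estimated from training samples. The pixel is assigned to the
    class with the highest Gaussian log-likelihood (constants dropped).
    """
    best, best_ll = None, -np.inf
    for name, (mu, cov) in class_stats.items():
        diff = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov))
                     + diff @ np.linalg.inv(cov) @ diff)
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# Hypothetical per-class statistics for two spectral bands.
class_stats = {
    "water":      (np.array([20.0, 10.0]), np.eye(2) * 25.0),
    "vegetation": (np.array([60.0, 80.0]), np.eye(2) * 25.0),
}
```

The ANN and decision tree alternatives compared in the study learn the decision boundary directly from the samples instead of assuming Gaussian class distributions.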

Keywords: remote sensing, artificial neural network, decision tree, maximum likelihood classification

Procedia PDF Downloads 345