Search results for: image processing techniques
10904 Determination of Processing Parameters of Decaffeinated Black Tea by Using Pilot-Scale Supercritical CO₂ Extraction
Authors: Saziye Ilgaz, Atilla Polat
Abstract:
There is a need to develop new processing techniques that ensure the safety and quality of the final product, decaffeinated black tea, while minimizing the adverse environmental impact of extraction solvents and their residue levels in the final product. In this study, pilot-scale supercritical carbon dioxide (SCCO₂) extraction was used to produce decaffeinated black tea in place of solvent extraction. Pressure (250, 375, 500 bar), extraction time (60, 180, 300 min), temperature (55, 62.5, 70 °C), CO₂ flow rate (1, 2, 3 LPM) and co-solvent quantity (0, 2.5, 5 %mol) were selected as extraction parameters. A five-factor Box-Behnken experimental design with three center points was performed to generate 46 different processing conditions for caffeine removal from black tea samples. Across these 46 experiments, the caffeine content of the black tea samples was reduced from 2.16% to 0–1.81%. The experiments showed that extraction time, pressure, CO₂ flow rate and co-solvent quantity had a great impact on decaffeination yield. Response surface methodology (RSM) was used to optimize the parameters of the supercritical carbon dioxide extraction. The optimum extraction parameters for decaffeinated black tea were as follows: extraction temperature of 62.5 °C, extraction pressure of 375 bar, CO₂ flow rate of 3 LPM, extraction time of 176.5 min and co-solvent quantity of 5 %mol.
Keywords: supercritical carbon dioxide, decaffeination, black tea, extraction
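For readers unfamiliar with the design, a Box-Behnken design places each pair of factors at the ±1 corners of a square while holding the remaining coded factors at the center level, then appends center runs. A minimal sketch in coded units (illustrative only, not the authors' design software):

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Coded Box-Behnken design: every factor pair at (+/-1, +/-1),
    all other factors held at the 0 (center) level, plus center runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * k for _ in range(center_points)]
    return runs
```

For three factors this yields 12 edge runs plus the center points; the coded levels are then mapped to the physical ranges above (e.g., −1/0/+1 → 250/375/500 bar).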
Procedia PDF Downloads 364
10903 Perceptual Image Coding by Exploiting Internal Generative Mechanism
Authors: Kuo-Cheng Liu
Abstract:
In perceptual image coding, the objective is to shape the coding distortion so that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, perceptual-based quantizers developed for luminance signals are often applied directly to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. The internal generative mechanism working model, based on structure-based spatial masking, is used to assess subjective distortion visibility thresholds that are more visually consistent with human perception. The estimation method of structure-based distortion visibility thresholds for color components is further presented in a locally adaptive way to design the quantization process in the wavelet color image compression scheme. Since the lowest subband coefficient matrix of images in the wavelet domain preserves the local property of images in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband for each color component is estimated by using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands for each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated using predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information.
Experimental results show that the entropies of the three color components obtained by using the proposed IGM-based color image compression scheme are lower than those obtained by using the existing color image compression method at perceptually lossless visual quality.
Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain
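The threshold-driven quantization idea can be illustrated with a toy uniform quantizer whose step size is tied to each coefficient's visibility threshold, so the introduced distortion never exceeds that threshold (a sketch of the principle only, not the paper's actual wavelet codec):

```python
def perceptual_quantize(coeffs, thresholds):
    """Quantize each coefficient with step 2*t, guaranteeing |error| <= t."""
    out = []
    for c, t in zip(coeffs, thresholds):
        step = 2.0 * t  # step chosen so reconstruction error stays within t
        out.append(round(c / step) * step)
    return out
```

A coefficient with a larger visibility threshold is quantized more coarsely, which is exactly how perceptually redundant precision is removed without visible distortion.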
Procedia PDF Downloads 248
10902 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner
Authors: Aika Umemuro, Mitsuru Sato, Mizuki Narita, Saya Hori, Saya Sakurai, Tomomi Nakayama, Ayano Nakazawa, Toshihiro Ogura
Abstract:
Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. The mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using an eye-detector and the measurement of the time required for image magnification using a simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm for the observers. The results showed that the eye detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.
Keywords: EEG scanner, eye-detector, mammography, observers
Procedia PDF Downloads 215
10901 DocPro: A Framework for Processing Semantic and Layout Information in Business Documents
Authors: Ming-Jen Huang, Chun-Fang Huang, Chiching Wei
Abstract:
With the recent advances in deep neural networks, we observe new applications of NLP (natural language processing) and CV (computer vision) powered by deep neural networks for processing business documents. However, creating a real-world document processing system requires integrating several NLP and CV tasks rather than treating them separately. There is a need for a unified approach to processing documents containing textual and graphical elements with rich formats, diverse layout arrangements, and distinct semantics. In this paper, a framework that fulfills this unified approach is presented. The framework includes a representation model definition for holding the information generated by various tasks and specifications defining the coordination between these tasks. The framework is a blueprint for building a system that can process documents with rich formats, styles, and multiple types of elements. The flexible and lightweight design of the framework can help build a system for diverse business scenarios, such as contract monitoring and reviewing.
Keywords: document processing, framework, formal definition, machine learning
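As an illustration of what such a shared representation model might look like, here is a hypothetical sketch; the element fields and the `annotate` method are our assumptions, not the paper's actual definitions. Each element carries layout information (a bounding box) alongside text, and every NLP or CV task writes its output into a common label store so that downstream tasks can coordinate:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    kind: str          # e.g., "paragraph", "table", "figure"
    bbox: tuple        # layout information: (x0, y0, x1, y1)
    text: str = ""
    labels: dict = field(default_factory=dict)  # outputs of NLP/CV tasks

@dataclass
class Document:
    elements: list = field(default_factory=list)

    def annotate(self, index, task, result):
        """A task records its result on an element for later tasks to read."""
        self.elements[index].labels[task] = result

# one paragraph element annotated by a hypothetical NER task
doc = Document([Element("paragraph", (50, 80, 500, 120), "Term: 24 months")])
doc.annotate(0, "ner", [("24 months", "DURATION")])
```

The point of such a model is that a layout-analysis task and a semantic task write into the same structure instead of exchanging ad hoc outputs.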
Procedia PDF Downloads 217
10900 Condition Based Assessment of Power Transformer with Modern Techniques
Authors: Piush Verma, Y. R. Sood
Abstract:
This paper provides information on diagnostic techniques for condition monitoring of power transformers (PT). It deals with the practical importance of transformer diagnostics in the electrical engineering field. The life of a transformer depends upon its insulation, i.e., paper and oil. The major testing techniques applied to transformer oil and paper are dissolved gas analysis, furfural analysis, radio interference, acoustic emission, infrared emission, frequency response analysis, power factor, polarization spectrum, magnetizing currents, and turn and winding ratio. A review has been made of the modern development of this practical technology.
Keywords: temperature, condition monitoring, diagnostics methods, paper analysis techniques, oil analysis techniques
Procedia PDF Downloads 433
10899 Difference Expansion Based Reversible Data Hiding Scheme Using Edge Directions
Authors: Toshanlal Meenpal, Ankita Meenpal
Abstract:
Difference expansion is a very important technique in the reversible data hiding field. Both the secret message and the cover image may be completely recovered without any distortion after the data extraction process, due to the reversibility feature. In general, in any difference expansion scheme, embedding is performed by an integer transform on the difference image acquired by grouping two neighboring pixel values. This paper proposes an improved reversible difference expansion embedding scheme. We mainly consider edge direction for embedding, modifying the difference of two neighboring pixel values. In general, a larger difference tends to produce worse stego image quality than a smaller difference. The experimental results show that the proposed scheme achieves image quality gains of 0.5 to 3.7 dB on average. Payload-wise, however, it achieves almost the same capacity as the previous method.
Keywords: information hiding, edge direction, difference expansion, integer transform
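The integer transform underlying difference expansion can be sketched with Tian's classic pixel-pair scheme, which edge-direction methods build on; this minimal version omits the overflow handling and edge-direction selection a real scheme needs:

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair via difference expansion."""
    l = (x + y) // 2          # integer average of the pair
    h = x - y                 # difference of the pair
    h2 = 2 * h + bit          # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the bit and the original pixel pair exactly."""
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1               # undo the expansion (floor division)
    l = (x2 + y2) // 2
    return (l + (h + 1) // 2, l - h // 2), bit
```

For the pair (206, 201) and bit 1, the stego pair is (209, 198); extraction returns both the bit and the untouched cover pair, which is the reversibility property described above.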
Procedia PDF Downloads 484
10898 Activities of Processors in Domestication/Conservation and Processing of Oil Bean (Pentaclethra macrophylla) in Enugu State, South East Nigeria
Authors: Iwuchukwu J. C., Mbah C.
Abstract:
There seems to be a dearth of information on how oil bean is exploited, processed and conserved locally. This gap stifles initiatives to evaluate the suitability of the methods used and to invent new and better ones. The study therefore assesses the activities of processors in the domestication/conservation and processing of oil bean (Pentaclethra macrophylla) in Enugu State, South East Nigeria. Three agricultural zones, three blocks, nine circles and seventy-two respondents that were purposively selected made up the sample for the study. Data were presented in percentages, charts and mean scores. The results show that processors of oil bean in the area were middle-aged and married, with relatively large household sizes and long years of experience in processing. They sourced the oil bean they processed from people's farmland and sourced information on processing from friends and relatives. The activities involved in processing oil bean were boiling, dehulling, washing, sieving, slicing and wrapping; however, the sequence of these activities varied among processors. Little or nothing was done by the processors towards the conservation of the crop, while poor storage and processing facilities and lack of knowledge of modern preservation techniques were major constraints to processing oil bean in the area. The study concluded that governments and processors, through cooperative groups, should make efforts to provide processing and storage facilities for oil bean, while research institutes should conserve and generate improved species of the crop to arouse the interest of farmers and processors, which will invariably increase productivity.
Keywords: conservation, domestication, oil bean, processing
Procedia PDF Downloads 308
10897 Associations Between Positive Body Image, Physical Activity and Dietary Habits in Young Adults
Authors: Samrah Saeed
Abstract:
Introduction: This study considers a measure of positive body image and the associations between body appreciation, beauty ideals internalization, dietary habits, and physical activity in young adults. Positive body image is assessed by the Body Appreciation Scale 2, which is used to assess a person's acceptance of the body, the degree of positivity, and respect for the body. Regular physical activity and healthy eating are basically important for the body, and they play an important role in creating a positive body image. Objectives: To identify the associations between body appreciation and beauty ideals internalization; to compare body appreciation and body ideals internalization among students of different physical activity levels; and to explore the associations between dietary habits (unhealthy, healthy), body appreciation and body ideals internalization. Research methods and organization: Study participants were young adult students, aged 18-35, both male and female. The research questionnaire covered four areas: body appreciation, beauty ideals internalization, dietary habits, and physical activity. The questionnaire was created in the Google Forms online survey platform and was filled out anonymously. Results and Discussion: Physical dissatisfaction, dieting, eating disorders and exercise disorders are found in young adults all over the world. Thorough nutrition helps people understand who they are by reassuring them that they are okay, without judgment. Social media can positively influence body image in many ways. A healthy body image is important because it affects self-esteem, self-acceptance, and one's attitude towards food and exercise.
Keywords: physical activity, dietary habits, body image, beauty ideals internalization, body appreciation
Procedia PDF Downloads 96
10896 The Use of Sustainable Tourism, Decrease Performance Levels, and Change Management for Image Branding as a Contemporary Tool of Foreign Policy
Authors: Mehtab Alam
Abstract:
Sustainable tourism practices require improving decreased performance levels during phases of change management for image branding. This paper addresses the innovative approach of using sustainable tourism for image branding as a contemporary tool of foreign policy. Sustainable tourism-based foreign policy promotes cultural values, green tourism, the economy, and image management for the avoidance of rising global conflict. A mixed-method approach (382 quantitative surveys, 11 qualitative interviews at saturation point) was applied for the data analysis. The research findings show the potential of using sustainable tourism, by applying the skills and knowledge, capacity, and personal factors of change management, to improve tourism-based performance levels. This includes the valuable role of tourism performance in the success of a foreign policy through sustainable tourism. Change management in tourism-based foreign policy provides destination readiness for international engagement and the curbing of climate issues through green tourism. The research recommends the impact of change management in improving the tourism-based performance levels of image branding for a coercive foreign policy. The paper's future direction, for the immediate implementation of tourism-based foreign policy, is to overcome the contemporary issues of travel marketing management, green infrastructure, and cross-border regulation.
Keywords: decrease performance levels, change management, sustainable tourism, image branding, foreign policy
Procedia PDF Downloads 124
10895 Pitch Processing in Autistic Mandarin-Speaking Children with Hypersensitivity and Hypo-Sensitivity: An Event-Related Potential Study
Authors: Kaiying Lai, Suiping Wang, Luodi Yu, Yang Zhang, Pengmin Qin
Abstract:
Abnormalities in auditory processing are among the most commonly reported sensory processing impairments in children with Autism Spectrum Disorder (ASD). Tonal language speakers with autism have enhanced neural sensitivity to pitch changes in pure tones. However, not all children with ASD exhibit the same performance in pitch processing, due to differences in auditory sensitivity. The current study aimed to examine auditory change detection in ASD with different auditory sensitivity. The k-means clustering method was adopted to classify ASD participants into two groups according to the auditory processing scores of the Sensory Profile: 11 children with hypersensitivity (mean age = 11.36; SD = 1.46) and 18 with hypo-sensitivity (mean age = 10.64; SD = 1.89) participated in a passive auditory oddball paradigm designed to elicit mismatch negativity (MMN) under the pure tone condition. Results revealed that, compared to children with hypersensitivity, the children with hypo-sensitivity showed smaller MMN responses to pure tone stimuli. These results suggest that children with auditory hypersensitivity and hypo-sensitivity perform differently in processing pure tones, so neural responses to pure tones hold promise for predicting the auditory sensitivity of ASD and targeted treatment in children with ASD.
Keywords: ASD, sensory profile, pitch processing, mismatch negativity, MMN
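The grouping step can be illustrated with a tiny one-dimensional k-means (Lloyd's algorithm) on sensory processing scores; the scores below are invented for illustration, not the study's data:

```python
def kmeans_1d(values, iters=20):
    """Two-cluster k-means on scalar scores, initialized at min and max."""
    centers = [min(values), max(values)]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            # assign each score to the nearest center
            i = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            groups[i].append(v)
        # recompute each center as its group's mean
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

With well-separated scores the two groups stabilize after one iteration, which is the behavior the participant split relies on.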
Procedia PDF Downloads 391
10894 Understanding the Heart of the Matter: A Pedagogical Framework for Apprehending Successful Second Language Development
Authors: Cinthya Olivares Garita
Abstract:
Untangling language processing in second language development has been either a taken-for-granted, overlooked task for some English language teaching (ELT) instructors or a considerable feat for others. From the most traditional language instruction to the most communicative methodologies, how to assist L2 learners in processing language in the classroom has become a challenging matter in second language teaching. Amidst an ample array of methods, strategies, and techniques for teaching a target language, finding a suitable model to lead learners to process, interpret, and negotiate meaning to communicate in a second language has imposed a great responsibility on language teachers; committed teachers are those who are aware of their role in equipping learners with the appropriate tools to communicate in the target language in a 21st-century society. Unfortunately, one might find some English language teachers convinced that their job is only to lecture students; others are advocates of textbook-based instruction that might hinder second language processing, and just a few might courageously struggle to facilitate second language learning effectively. Grounded in the most representative empirical studies on comprehensible input, processing instruction, and focus on form, this analysis aims to facilitate the understanding of how second language learners process and automatize input and proposes a pedagogical framework for the successful development of a second language. In light of this, this paper is structured to tackle noticing and attention and structured input as the heart of processing instruction, comprehensible input as the missing link in second language learning, and form-meaning connections as opposed to traditional grammar approaches to language teaching.
The author finishes by suggesting a pedagogical framework involving noticing, attention, comprehensible input, and form (NACIF, based on the acronym) to support ELT instructors, teachers, and scholars in the challenging task of facilitating the understanding of effective second language development.
Keywords: second language development, pedagogical framework, noticing, attention, comprehensible input, form
Procedia PDF Downloads 28
10893 Barrier Lowering in Contacts between Graphene and Semiconductor Materials
Authors: Zhipeng Dong, Jing Guo
Abstract:
Graphene-semiconductor contacts have been extensively studied recently, both as stand-alone diode devices for potential applications in photodetectors and solar cells, and as building blocks for vertical transistors. Graphene is a two-dimensional nanomaterial with vanishing density-of-states at the Dirac point, which differs from conventional metals. In this work, image-charge-induced barrier lowering (BL) in graphene-semiconductor contacts is studied and compared to that in metal Schottky contacts. The results show that despite graphene being a semimetal with vanishing density-of-states at the Dirac point, the image-charge-induced BL is significant. The BL value can be over 50% of that of metal contacts even for intrinsic graphene contacted to an organic semiconductor, and it increases as the graphene doping increases. The dependences of the BL on the electric field and the semiconductor dielectric constant are examined, and an empirical expression for estimating the image-charge-induced BL in graphene-semiconductor contacts is provided.
Keywords: graphene, semiconductor materials, Schottky barrier, image charge, contacts
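For context, the image-force barrier lowering in a conventional metal Schottky contact is ΔΦ = sqrt(qE/4πε). A quick numerical check (the field and dielectric constant below are arbitrary illustrative values, not taken from the paper, and this is the metal-contact formula, not the paper's graphene-specific expression):

```python
import math

Q = 1.602176634e-19       # elementary charge, C
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def image_force_lowering_eV(field, eps_r):
    """Metal-contact image-force barrier lowering in eV.
    field: electric field at the interface (V/m); eps_r: relative permittivity."""
    return math.sqrt(Q * field / (4 * math.pi * eps_r * EPS0))

dphi = image_force_lowering_eV(1e7, 3.0)  # roughly 0.069 eV
```

The paper's finding is that the graphene contact can retain over half of this metal-contact lowering even at the Dirac point.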
Procedia PDF Downloads 303
10892 Innovative Techniques of Teaching Henrik Ibsen’s a Doll’s House
Authors: Shilpagauri Prasad Ganpule
Abstract:
The teaching of drama is considered the most significant and noteworthy area in an ESL classroom. Diverse innovative techniques can be used to make the teaching of drama worthwhile and interesting. The paper presents the different innovative techniques that can be used while teaching Henrik Ibsen’s A Doll’s House [2007]. The innovative techniques facilitate students’ understanding and comprehension of the text. The use of innovative techniques makes them explore the dramatic text and uncover a multihued arena of meanings hidden in it. They arouse the students’ interest and assist them in overcoming the difficulties created by the second language. The diverse innovative techniques appeal to the imagination of the students and increase their participation in the classroom. They help the students in the appreciation of the dramatic text and make the teaching-learning situation a fruitful experience for both the teacher and the students. The students successfully overcome the problem of L2 comprehension and grasp the theme, story line and plot structure of the play effectively. The innovative techniques encourage a strong sense of participation on the part of the students and persuade them to learn through active participation. In brief, the innovative techniques prompt the students to perform various tasks and expedite their learning process. Thus the present paper makes an attempt to present varied innovative techniques that can be used while teaching drama. It strives to demonstrate how the use of innovative techniques improves and enhances the students’ understanding and appreciation of Ibsen’s A Doll’s House [2007].
Keywords: ESL classroom, innovative techniques, students’ participation, teaching of drama
Procedia PDF Downloads 626
10891 A New Distributed Computing Environment Based On Mobile Agents for Massively Parallel Applications
Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah
Abstract:
In this paper, we propose a new distributed environment for High Performance Computing (HPC) based on mobile agents. It allows us to perform parallel program execution as a distributed one over a flexible grid constituted by a cooperative mobile agent team. The distributed program to be performed is encapsulated in a team leader agent, which deploys its team workers as Agent Virtual Processing Units (AVPUs). Each AVPU is asked to perform its assigned tasks and provide the computational results, which makes the management of data and team tasks difficult for the team leader agent and influences the computing performance. In this work, we focus on the implementation of the Mobile Provider Agent (MPA), in order to manage the distribution of data and instructions and to ensure a load balancing model. It also grants some interesting mechanisms to manage the other computing challenges thanks to the mobile agents' several skills.
Keywords: image processing, distributed environment, mobile agents, parallel and distributed computing
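The load balancing role described for the MPA can be sketched as a greedy least-loaded dispatcher; the cost model and names below are our simplification of the agent behavior, not the paper's actual protocol:

```python
def dispatch(task_costs, n_avpus):
    """Assign each task to the currently least-loaded AVPU."""
    loads = [0] * n_avpus
    assignment = []
    for cost in task_costs:
        w = min(range(n_avpus), key=lambda i: loads[i])  # least-loaded worker
        assignment.append(w)
        loads[w] += cost
    return assignment, loads
```

Keeping the per-AVPU load roughly equal is what lets the team finish the distributed program without one worker becoming the bottleneck.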
Procedia PDF Downloads 410
10890 Performance Analysis with the Combination of Visualization and Classification Technique for Medical Chatbot
Authors: Shajida M., Sakthiyadharshini N. P., Kamalesh S., Aswitha B.
Abstract:
Natural Language Processing (NLP) continues to play a strategic part in complaint discovery and medicine discovery during the current epidemic. This abstract provides an overview of performance analysis with a combination of visualization and classification techniques of NLP for a medical chatbot. Sentiment analysis is an important aspect of NLP that is used to determine the emotional tone behind a piece of text. This technique has been applied to various domains, including medical chatbots. In this work, we have compared the combination of a decision tree with a heatmap and Naïve Bayes with a word cloud. The performance of the chatbot was evaluated using accuracy, and the results indicate that the combination of visualization and classification techniques significantly improves the chatbot's performance.
Keywords: sentiment analysis, NLP, medical chatbot, decision tree, heatmap, naïve bayes, word cloud
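The Naïve Bayes side of the comparison can be illustrated with a tiny from-scratch multinomial classifier with add-one smoothing; the toy training sentences are invented for illustration and are not the study's data:

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (text, label). Returns priors, word counts, vocabulary."""
    priors, counts, vocab = Counter(), defaultdict(Counter), set()
    for text, label in examples:
        priors[label] += 1
        for w in text.split():
            counts[label][w] += 1
            vocab.add(w)
    return priors, counts, vocab

def classify(text, priors, counts, vocab):
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        n = sum(counts[label].values())
        lp = math.log(priors[label] / total)
        for w in text.split():
            # add-one (Laplace) smoothed word likelihood
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

data = [("the staff was helpful and kind", "pos"),
        ("great advice and a quick response", "pos"),
        ("terrible response and rude staff", "neg"),
        ("bad advice from very rude staff", "neg")]
model = train_nb(data)
```

A real pipeline would add tokenization, stop-word handling, and an accuracy evaluation split, but the scoring rule is the same.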
Procedia PDF Downloads 74
10889 A Pilot Study on the Sensory Processing Difficulty Pattern Association between the Hot and Cold Executive Function Deficits in Attention Deficit Hyperactivity Disorder Children
Authors: Sheng-Fen Fan, Sung-Hui Tseng
Abstract:
Children with attention deficit hyperactivity disorder (ADHD) display diverse sensory processing difficulty behaviors. There is little evidence on the association between executive function and sensory deficits. To determine whether sensory deficits influence the executive functions, we examined sensory processing with the SPM and tried to index hot/cold executive function (EF) with the BRIEF2, respectively. We found that the hot executive function deficit might be associated with auditory processing in a variety of settings, and with vestibular input for maintaining balance and upright posture; the cold EF deficit might be the opposite of the hot EF deficit, with vestibular sensory modulation difficulty associated with emotion shifting and emotional regulation. These results suggest that sensory processing might be another factor to consider as influencing the higher cognitive control or emotional regulation of EF. Overall, this study indicates a distinction between hot and cold EF impairments with different sensory modulation problems. Moreover, for clinicians, more cautious consideration is needed when conducting interventions for ADHD.
Keywords: hot executive function, cold executive function, sensory processing, ADHD
Procedia PDF Downloads 286
10888 Satellite Image Classification Using Firefly Algorithm
Authors: Paramjit Kaur, Harish Kundra
Abstract:
In recent years, the swarm intelligence based firefly algorithm has become a great focus for researchers solving real-time optimization problems. Here, the firefly algorithm is used for the application of satellite image classification. For experimentation, the Alwar area is considered, with multiple land features like vegetation, barren, hilly, residential and water surface. The Alwar dataset consists of seven-band satellite images. The firefly algorithm is based on the attraction of less bright fireflies toward brighter ones. For the evaluation of the proposed concept, accuracy assessment parameters are calculated using an error matrix. With the help of the error matrix, the kappa coefficient, overall accuracy and feature-wise accuracy parameters (user's accuracy and producer's accuracy) can be calculated. Overall results are compared with BBO, PSO, hybrid FPAB/BBO, hybrid ACO/SOFM and hybrid ACO/BBO based on the kappa coefficient and overall accuracy parameters.
Keywords: image classification, firefly algorithm, satellite image classification, terrain classification
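A minimal firefly optimizer on a toy continuous objective shows the attraction rule the classifier builds on; brightness here is simply the negative cost, and all parameter values are illustrative, not the paper's settings:

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=60, alpha=0.2, beta0=1.0,
                     gamma=0.01, seed=42):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:  # firefly j is brighter (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction fades with distance
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
                    vals[i] = f(xs[i])
        alpha *= 0.97  # shrink the random step over time
    best = min(xs, key=f)
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, best_val = firefly_minimize(sphere, dim=2)
```

For classification, the objective would instead score candidate cluster centers against the seven-band pixel values, with the error matrix computed afterwards.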
Procedia PDF Downloads 401
10887 A Comparative Assessment Method For Map Alignment Techniques
Authors: Rema Daher, Theodor Chakhachiro, Daniel Asmar
Abstract:
In the era of autonomous robot mapping, assessing the goodness of the generated maps is important, and is usually performed by aligning them to ground truth. Map alignment is difficult for two reasons: first, the query maps can be significantly distorted from ground truth, and second, establishing what constitutes ground truth for different settings is challenging. Most map alignment techniques to date have addressed the first problem while paying too little attention to the second. In this paper, we propose a benchmark dataset, which consists of synthetically transformed maps with their corresponding displacement fields. Furthermore, we propose a new system for comparison, where the displacement field of any map alignment technique can be computed and compared to the ground truth using statistical measures. The local information in displacement fields renders the evaluation system applicable to any alignment technique, whether it is linear or not. In our experiments, the proposed method was applied to different alignment methods from the literature, allowing for a comparative assessment between them all.
Keywords: assessment methods, benchmark, image deformation, map alignment, robot mapping, robot motion
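One concrete set of such statistical measures is endpoint-error statistics over the displacement field, comparing each estimated displacement vector against its ground-truth counterpart (a sketch of the idea; the paper's exact measures may differ):

```python
import math

def displacement_error_stats(estimated, ground_truth):
    """Per-cell endpoint error between two displacement fields (lists of 2D vectors)."""
    errors = [math.dist(e, g) for e, g in zip(estimated, ground_truth)]
    mean = sum(errors) / len(errors)
    var = sum((x - mean) ** 2 for x in errors) / len(errors)
    return {"mean": mean, "std": math.sqrt(var), "max": max(errors)}

stats = displacement_error_stats(
    [(1, 0), (1, 0), (1, 0), (3, 0)],   # estimated field
    [(1, 0), (1, 0), (1, 0), (1, 0)])   # ground-truth field
```

Because the error is computed per cell, the measure applies equally to linear and non-linear alignment techniques, which is the property the paper emphasizes.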
Procedia PDF Downloads 117
10886 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation, and their accuracy, precision, recall, f1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays.
The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
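The IOU (intersection over union) reported for the segmentation model compares the predicted lung mask against the ground-truth mask; for binary masks it reduces to a simple ratio:

```python
def iou(mask_a, mask_b):
    """Intersection-over-union for two binary masks (nested lists of 0/1)."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    return inter / union
```

An IOU of 0.928 therefore means the predicted and true lung regions overlap in about 93% of their combined area.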
Procedia PDF Downloads 130
10885 Branding Tourism Destinations; The Trending Initiatives for Edifice Image Choices of Foreign Policy
Authors: Mehtab Alam, Mudiarasan Kuppusamy, Puvaneswaran Kunaserkaran
Abstract:
The purpose of this paper is to bridge the gap and complete the relationship between tourism destinations and image branding as a choice of edifice foreign policy. Such options have become a crucial component for individuals interested in leisure and travel activities. The destination management factors have been evaluated and analyzed using primary and secondary data in a mixed-methods approach (a quantitative sample of 384 and 8 qualitative semi-structured interviews at saturation point). The study chose the Environmental Management Accounting (EMA) and Image Restoration (IR) theories, along with a schematic diagram and an analytical framework supported by NVivo software 12, for two locations in Abbottabad, KPK, Pakistan: Shimla Hill and Thandiani. This incorporates the PLS-SEM model for assessing the validity of the data and SPSS for the screening of descriptive statistics. The results show that destination management's promotion of tourism has significantly improved Pakistan's state image. The use of institutional setup, environmental drivers, immigration, security, and hospitality, as well as recreational initiatives, in destination management is encouraged. The practical ramifications direct the heads of tourism projects, diplomats, directors, and policymakers to complete destination projects before inviting people to Pakistan. The paper provides an extent of knowledge for academic tourism circles to use tourism destinations as brand ambassadors.
Keywords: tourism, management, state image, foreign policy, image branding
Procedia PDF Downloads 6910884 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite supported predicting specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques, and CNN models with ensemble modelling techniques, did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research. Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
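The path-context representation behind Code2Vec can be illustrated on a toy scale. The sketch below is an assumption-laden analogue, not the study's Java/C++ pipeline: it uses Python's own `ast` module to collect leaf tokens and join two of them through their lowest common ancestor, which is the essence of a path-context:

```python
import ast

def leaf_paths(tree):
    """Collect (leaf_token, root-to-leaf node-type path) for Name/Constant leaves."""
    out = []
    def walk(node, path):
        path = path + [type(node).__name__]
        if isinstance(node, ast.Name):
            out.append((node.id, path))
        elif isinstance(node, ast.Constant):
            out.append((repr(node.value), path))
        for child in ast.iter_child_nodes(node):
            walk(child, path)
    walk(tree, [])
    return out

def path_context(leaf_a, leaf_b):
    """Join two leaves through their lowest common ancestor, Code2Vec-style."""
    (tok_a, pa), (tok_b, pb) = leaf_a, leaf_b
    lca = 0
    while lca < min(len(pa), len(pb)) and pa[lca] == pb[lca]:
        lca += 1
    up = list(reversed(pa[lca - 1:]))   # climb from leaf a up to the LCA
    down = pb[lca:]                     # descend from the LCA to leaf b
    return (tok_a, "^".join(up) + "_" + "_".join(down), tok_b)

leaves = leaf_paths(ast.parse("def f(x):\n    return x + 1"))
print(path_context(leaves[0], leaves[1]))
```

A real pipeline would extract many such triples per function and feed them to the Code2Vec attention model; here a single `('x', 'Name^BinOp_Constant', '1')`-style triple shows the shape of the representation.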
Procedia PDF Downloads 10610883 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification
Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran
Abstract:
The brain is an important organ since it is responsible for most of the body's functions, such as vision and memory. However, diseases such as Alzheimer's and tumors can affect the brain and lead to partial or complete disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect possible trouble early and apply the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are used for the diagnosis of brain tumors. The most powerful and widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate any tumor in the brain and prescribe the needed treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze tumors. In fact, a large number of Computer Aided Diagnosis (CAD) tools built on image processing algorithms have been proposed and are used by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification, and feature extraction. Our proposed CAD includes three main parts. First, the brain MRI is loaded. Second, a robust technique for brain tumor extraction is proposed, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property, which is why it was applied to MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback: it necessitates large storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is applied. In the last stage, according to the extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm.
A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, was designed and developed using the MATLAB GUIDE user interface. Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM
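The DWT stage can be illustrated with a one-level 2D Haar decomposition. The sketch below is a minimal pure-Python analogue (the paper's MATLAB tool is not reproduced here), run on a hypothetical 4x4 "MRI patch"; the LL subband halves the resolution in each direction, which is why deeper decomposition levels shrink the feature vector before PCA:

```python
def haar2d(img):
    """One-level 2D Haar DWT: returns LL, LH, HL, HH subbands (even-sized input)."""
    # Rows first: average and difference of adjacent pixel pairs
    lo = [[(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in img]
    hi = [[(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in img]
    # Then columns: the same average/difference applied vertically
    def cols(m, op):
        return [[op(m[j][i], m[j + 1][i]) for i in range(len(m[0]))]
                for j in range(0, len(m), 2)]
    avg = lambda a, b: (a + b) / 2
    dif = lambda a, b: (a - b) / 2
    return cols(lo, avg), cols(lo, dif), cols(hi, avg), cols(hi, dif)

# Hypothetical 4x4 patch of intensities
patch = [[8, 8, 2, 2],
         [8, 8, 2, 2],
         [4, 4, 6, 6],
         [4, 4, 6, 6]]
LL, LH, HL, HH = haar2d(patch)
print(LL)  # [[8.0, 2.0], [4.0, 6.0]] — a half-resolution approximation
```

Because this patch is locally constant, all detail subbands are zero; PCA would then be applied to vectors of such coefficients to reduce dimensionality before the SVM.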
Procedia PDF Downloads 24910882 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams are in widespread use in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. When using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The Time-Proper Orthogonal Decomposition transform is a powerful tool for processing such databases. It will be used to study the coupled dynamics of basic thin-walled structures, which are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry. Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams
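The method of snapshots gives a compact way to see POD at work. The following sketch is a hypothetical two-snapshot example, not the finite element database used in the study: it eigen-decomposes the 2x2 snapshot correlation matrix and lifts the eigenvectors back to physical space, so the leading eigenvalue's share of the total measures how much energy the dominant coupled mode carries:

```python
import math

def pod_two_snapshots(x1, x2):
    """POD via the method of snapshots for two field snapshots: eigen-decompose
    the 2x2 correlation matrix R_ij = <x_i, x_j>, then build normalized modes."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    r11, r12, r22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
    tr, det = r11 + r22, r11 * r22 - r12 * r12
    disc = math.sqrt(tr * tr - 4 * det)
    lams = [(tr + disc) / 2, (tr - disc) / 2]  # modal energies, descending
    modes = []
    for lam in lams:
        if abs(r12) > 1e-12:
            a1, a2 = r12, lam - r11            # eigenvector of R for eigenvalue lam
        else:
            a1, a2 = (1.0, 0.0) if abs(lam - r11) < abs(lam - r22) else (0.0, 1.0)
        phi = [a1 * u + a2 * v for u, v in zip(x1, x2)]
        norm = math.sqrt(dot(phi, phi))
        modes.append([p / norm for p in phi])
    return lams, modes

# Two hypothetical snapshots of a coupled displacement field (toy data)
lams, modes = pod_two_snapshots([1.0, 1.0, 0.0], [0.0, 1.0, 1.0])
print(lams, round(lams[0] / sum(lams), 2))  # the dominant mode carries 75% of the energy
```

In practice the snapshot matrix holds hundreds of finite element time steps, but the structure of the computation — correlation matrix, eigen-decomposition, mode reconstruction — is the same.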
Procedia PDF Downloads 41810881 Transmogrification of the Danse Macabre Image: Capturing the Journey towards Creativity
Authors: Javaria Farooqui
Abstract:
This study, “Transmogrification of the Danse Macabre Image: Capturing the Journey towards Creativity,” traces the evolution of the concept of the Danse Macabre. In Everyman, death takes away the sinful when they least expect it; in Solyman and Perseda, everyone falls prey to death irrespective of their deeds; and in Tauba-tun-Nasuh, the sinner is plagued. The climactic point in this brief research comes with the modern texts The Moon and Sixpence, Roohe-e-Insani, and Amédée, ou Comment s’en débarrasser, when the Danse Macabre extends its boundaries, uniting the idea of creativity with death. Similarly, in the visual context, the Danse Macabre image, initially a horrifying idea, has become part of present-day comics and serves an entertaining rather than a cathartic purpose. Keywords: Danse macabre, transmogrification, Medieval, death, character
Procedia PDF Downloads 52010880 Information Needs of Cassava Processors on Small-Scale Cassava Processing in Oyo State, Nigeria
Authors: Rafiat Bolanle Fasasi-Hammed
Abstract:
Cassava is an important food crop in rural households of Nigeria. It has high potential for product diversification, because it can be processed into various product forms for human consumption, made into chips for farm animals, and processed into starch and starch derivatives. However, cassava roots are highly perishable and contain potentially toxic cyanogenic glycosides, which necessitates processing. Therefore, this study was carried out to assess the information needs of cassava processors on food safety practices in Oyo State, Nigeria. A simple random sampling technique was used to select 110 respondents for this study. Descriptive statistics and the chi-square test were used to analyze the data collected. Results showed that the mean age of the respondents was 39.4 years; the majority (78.7%) of the respondents were married; 51.9% had secondary education; and 45.8% had spent more than 12 years in cassava processing. The mean income realized from cassava processing was ₦26,347.50/month. Information on cassava processing reached the respondents through friends, family, and relations (73.6%) and fellow cassava processors (58.6%). Serious constraints identified were ineffective extension agents (93.9%), food safety regulatory agencies (88.1%), and inadequate processing and storage facilities (67.8%). Chi-square results showed that a significant relationship existed between the socio-economic characteristics of the respondents (χ² = 29.80, df = 2), knowledge level (χ² = 9.26, df = 4), constraints (χ² = 13.11, df = 2) and information needs at the p < 0.05 level of significance. The study recommends regular training on improved cassava processing methods for the cassava processors in the study area. Keywords: information, needs, cassava, Oyo State, processing
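The chi-square statistics reported above come from contingency tables of observed counts. As an illustration only (the table below is hypothetical, not the study's data), the Pearson statistic compares each observed cell count against the count expected under independence of the row and column variables:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table of counts."""
    rows = [sum(r) for r in table]            # row totals
    cols = [sum(c) for c in zip(*table)]      # column totals
    total = sum(rows)
    chi2 = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical 2x2 cross-tabulation: education level vs. information need
table = [[30, 20],
         [10, 50]]
print(round(chi_square(table), 2))
```

The resulting statistic would be compared against the chi-square critical value at the appropriate degrees of freedom, (rows − 1) × (columns − 1), at the p < 0.05 level.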
Procedia PDF Downloads 30210879 A Study of Cloud Computing Solution for Transportation Big Data Processing
Authors: Ilgin Gökaşar, Saman Ghaffarian
Abstract:
The need for fast processing of transportation big data, such as smartcard ridership data and traffic detector data, which requires considerable computational power, is incontrovertible in Intelligent Transportation Systems. Nowadays, cloud computing is one of the most important and popular information technology solutions for data processing. It enables users to process enormous amounts of data without possessing the computing power themselves. Thus, it can also be a good choice for transportation big data processing. This paper examines how cloud computing can enhance transportation big data processing by contrasting its advantages and disadvantages and discussing cloud computing features. Keywords: big data, cloud computing, Intelligent Transportation Systems, ITS, traffic data processing
Procedia PDF Downloads 46710878 An End-to-end Piping and Instrumentation Diagram Information Recognition System
Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha
Abstract:
A piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For ease of data transfer, P&IDs are generally handed over from a design company to an engineering company in portable document format (PDF), which is hard to modify. Therefore, engineering companies have to deploy a great deal of time and human resources solely to manually convert P&ID images into a computer aided design (CAD) file format. To reduce the inefficiency of P&ID conversion, the various symbols and texts in P&ID images should be automatically recognized. However, recognizing information in P&ID images is not an easy task. A P&ID image usually contains hundreds of symbol and text objects; most objects are small compared to the size of the whole image and are densely packed together. Traditional recognition methods based on geometrical features are not capable of recognizing every element of a P&ID image. To overcome these difficulties, the state-of-the-art deep learning models RetinaNet and the connectionist text proposal network (CTPN) were used to build a system for recognizing symbols and texts in a P&ID image. Using the RetinaNet and CTPN models carefully modified and tuned for a P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols, and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test using a commercial P&ID image, the P&ID information recognition system correctly recognized 97% of the symbols and 81.4% of the texts. Keywords: object recognition system, P&ID, symbol recognition, text recognition
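The final serialization step can be sketched with Python's standard library. The element and attribute names below are assumptions for illustration, since the paper's pre-defined XML schema is not given:

```python
import xml.etree.ElementTree as ET

def results_to_xml(symbols, texts):
    """Serialize detection results (class/string, bounding box, score) to XML.
    The schema used here is hypothetical."""
    root = ET.Element("pnid")
    for cls, (x, y, w, h), score in symbols:
        s = ET.SubElement(root, "symbol", {"class": cls, "score": f"{score:.2f}"})
        ET.SubElement(s, "bbox", x=str(x), y=str(y), w=str(w), h=str(h))
    for value, (x, y, w, h) in texts:
        t = ET.SubElement(root, "text", value=value)
        ET.SubElement(t, "bbox", x=str(x), y=str(y), w=str(w), h=str(h))
    return ET.tostring(root, encoding="unicode")

# One detected symbol and one recognized text string (made-up coordinates)
xml_out = results_to_xml(
    symbols=[("gate_valve", (120, 340, 24, 24), 0.97)],
    texts=[("P-101", (150, 330, 40, 12))],
)
print(xml_out)
```

A downstream CAD converter would parse such a file to re-instantiate the symbols and labels as editable drawing objects.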
Procedia PDF Downloads 15310877 An Image Based Visual Servoing (IBVS) Approach Using a Linear-Quadratic Regulator (LQR) for Quadcopters
Authors: C. Gebauer, C. Henke, R. Vossen
Abstract:
Within the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020, a team of unmanned aerial vehicles (UAVs) is used to capture intruder drones by physical interaction. The challenge is motivated by UAV safety. The purpose of this work is to investigate the agility of a visually controlled quadcopter. The aim is to track and follow a highly dynamic target, e.g., an intruder quadcopter. Following is realized at close range, with the opponent flying at velocities of up to 10 m/s. Additional limitations are imposed by the hardware itself: only monocular vision is present, and no additional knowledge about the target's state is available. An image based visual servoing (IBVS) approach is applied in combination with a linear-quadratic regulator (LQR). The IBVS is integrated into the LQR, and an optimal trajectory is computed within the projected three-dimensional image space. The approach has been evaluated on real quadcopter systems in different flight scenarios to demonstrate the system's stability. Keywords: image based visual servoing, quadcopter, dynamic object tracking, linear-quadratic regulator
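The LQR component can be sketched independently of the vision pipeline. In the pure-Python example below the system matrices are illustrative, a discretized double integrator rather than the quadcopter model from the paper; it computes the steady-state LQR gain by backward Riccati iteration and checks that the resulting closed loop regulates a tracking error to zero:

```python
def lqr_gain(A, B, Q, R, iters=300):
    """Steady-state discrete-time LQR gain for a 2-state, single-input system,
    via the Riccati recursion P <- A'PA + Q - A'PB (R + B'PB)^-1 B'PA."""
    def mm(X, Y):  # 2x2 matrix product
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    At = [[A[j][i] for j in range(2)] for i in range(2)]
    P = [row[:] for row in Q]
    for _ in range(iters):
        BtP = [sum(B[k][0] * P[k][j] for k in range(2)) for j in range(2)]   # B'P
        s = R + sum(BtP[j] * B[j][0] for j in range(2))                      # R + B'PB
        K = [sum(BtP[k] * A[k][j] for k in range(2)) / s for j in range(2)]  # gain row
        AtP = mm(At, P)
        AtPA = mm(AtP, A)
        AtPB = [sum(AtP[i][k] * B[k][0] for k in range(2)) for i in range(2)]
        P = [[AtPA[i][j] + Q[i][j] - AtPB[i] * K[j] for j in range(2)]
             for i in range(2)]
    return K  # control law: u = -K x

# Illustrative plant: double integrator discretized with dt = 0.1
A = [[1.0, 0.1], [0.0, 1.0]]
B = [[0.005], [0.1]]
K = lqr_gain(A, B, Q=[[1.0, 0.0], [0.0, 1.0]], R=1.0)
x = [1.0, 0.0]                 # initial tracking error: position, velocity
for _ in range(200):           # simulate the closed loop x <- (A - B K) x
    u = -(K[0] * x[0] + K[1] * x[1])
    x = [x[0] + 0.1 * x[1] + 0.005 * u, x[1] + 0.1 * u]
print(abs(x[0]) + abs(x[1]))   # the regulator drives the error toward zero
```

In the paper's setting the regulated state would instead be the projected image-space error between the quadcopter and the intruder target.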
Procedia PDF Downloads 14910876 Micro-CT Imaging Of Hard Tissues
Authors: Amir Davood Elmi
Abstract:
From the earliest light microscopes to the most innovative X-ray imaging techniques, imaging tools have refined and improved our knowledge of the organization and composition of living tissues. The older techniques are time-consuming and ultimately destructive to the tissues under examination. In recent decades, thanks to advances in technology, non-destructive visualization techniques, such as X-ray computed tomography (CT), magnetic resonance imaging (MRI), selective plane illumination microscopy (SPIM), and optical projection tomography (OPT), have come to the forefront. Among these techniques, CT is excellent for mineralized tissues such as bone or dentine. In addition, CT is faster than the other aforementioned techniques, and the sample remains intact. In this article, the applications, advantages, and limitations of micro-CT are discussed, in addition to some information about micro-CT of soft tissue. Keywords: Micro-CT, hard tissue, bone, attenuation coefficient, rapid prototyping
Procedia PDF Downloads 14210875 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on performance criteria, the most important being accuracy of skew angle detection, range of detectable skew angles, processing speed, computational complexity and, consequently, memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine an angular step size is used; higher accuracy consequently consumes more time and memory, especially where the number of pixels is considerably large. Whenever the Hough Transform is used, there is always a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time, and accuracy. Our algorithm starts with a first pass that estimates the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy.
Then, to increase accuracy, supposing the estimated angle found using the basic Hough algorithm is x degrees, we rerun the basic algorithm over a narrow range around x with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction algorithm for text images is implemented using MATLAB. The memory space and processing time are also tabulated, with the skew angle assumed to lie between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithms, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity. Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
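The coarse-to-fine scheme can be sketched outside MATLAB. The pure-Python toy below is an assumption-laden illustration, not the paper's implementation: it votes over candidate baseline angles (binning each point's perpendicular distance to a line at that angle), then repeatedly narrows the scan range to one step either side of the winning angle while dividing the step by ten:

```python
import math

def skew_angle(points, lo=-45.0, hi=45.0, passes=4):
    """Coarse-to-fine Hough-style estimate of the dominant line angle (degrees)."""
    step, best = 1.0, 0.0
    for _ in range(passes):
        votes = {}
        a = lo
        while a <= hi + 1e-9:
            th = math.radians(a)
            bins = {}
            for x, y in points:
                # perpendicular distance to a line through the origin at angle a;
                # bin width shrinks with the step so voting stays discriminative
                rho = y * math.cos(th) - x * math.sin(th)
                q = round(rho / step)
                bins[q] = bins.get(q, 0) + 1
            votes[round(a, 3)] = max(bins.values())
            a += step
        best = max(votes, key=votes.get)
        lo, hi, step = best - step, best + step, step / 10
    return best

# Synthetic "text baseline": collinear points skewed by 12.3 degrees
ang = math.radians(12.3)
pts = [(t * math.cos(ang), t * math.sin(ang)) for t in range(50)]
print(skew_angle(pts))
```

Each refinement pass scans only about twenty candidate angles instead of the thousands a single fine-grained sweep would need, which is the memory/time saving the paper describes.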
Procedia PDF Downloads 159