Search results for: metallurgical image processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5940

3570 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables

Authors: Marianna Maiaru, Gregory M. Odegard

Abstract:

During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that can gradually sustain stress. As the crosslinking process progresses, the material naturally experiences a gradual shrinkage due to the increase in covalent bonds in the network. Once the cured composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch of the fibers and matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and affect the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and the corresponding effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of degree of cure. This information is used as input into FEA to predict the residual stresses on the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables. Experimental characterization is used to validate the modeling.
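
As a rough illustration of how cure-dependent properties can feed a residual-stress estimate, the sketch below interpolates a hypothetical cure-dependent modulus and cure-shrinkage table and combines it with the fiber/matrix thermal-expansion mismatch in a simple one-dimensional estimate. The property values, the closed-form stress expression, and the function names are illustrative assumptions, not the authors' MD/FEA workflow.

```python
import numpy as np

# Hypothetical MD-derived lookup tables: matrix modulus (GPa) and volumetric
# cure shrinkage (fraction) as a function of degree of cure (illustrative values).
degree_of_cure = np.array([0.0, 0.25, 0.50, 0.75, 1.00])
E_matrix_GPa   = np.array([0.1, 0.4, 1.2, 2.5, 3.5])
cure_shrinkage = np.array([0.00, 0.01, 0.025, 0.04, 0.05])   # volumetric

def matrix_properties(alpha):
    """Interpolate cure-dependent matrix properties at degree of cure alpha."""
    E = np.interp(alpha, degree_of_cure, E_matrix_GPa) * 1e9          # Pa
    eps_cure = np.interp(alpha, degree_of_cure, cure_shrinkage) / 3.0  # ~linear strain
    return E, eps_cure

def residual_stress_estimate(alpha, dT, cte_matrix=55e-6, cte_fiber=-0.5e-6, nu=0.35):
    """Very rough 1-D estimate of matrix residual stress from cure shrinkage plus
    fibre/matrix thermal-expansion mismatch during cool-down by dT (negative)."""
    E, eps_cure = matrix_properties(alpha)
    eps_thermal = (cte_matrix - cte_fiber) * dT
    return E / (1.0 - nu) * (eps_cure + abs(eps_thermal))

if __name__ == "__main__":
    sigma = residual_stress_estimate(alpha=0.95, dT=-160.0)  # cure near 180 C, cool to RT
    print(f"Estimated matrix residual stress: {sigma/1e6:.1f} MPa")
```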

Keywords: molecular dynamics, finite element analysis, processing modeling, multiscale modeling

Procedia PDF Downloads 92
3569 Speed up Vector Median Filtering by Quasi Euclidean Norm

Authors: Vinai K. Singh

Abstract:

For reducing impulsive noise without degrading image contours, median filtering is a powerful tool. In multiband images, for example colour images or vector fields obtained by optic flow computation, a vector median filter can be used. Vector median filters are defined on the basis of a suitable distance, the best performing distance being the Euclidean. The Euclidean distance is evaluated by using the Euclidean norm, which is quite demanding from the point of view of computation given that a square root is required. In this paper, an optimal piece-wise linear approximation of the Euclidean norm is presented and applied to vector median filtering.
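
A minimal sketch of the idea follows: a vector median filter in which the Euclidean norm is replaced by a piece-wise linear "alpha max plus beta min" style approximation that avoids the square root. The approximation coefficients below are generic illustrative values, not the optimal coefficients derived in the paper.

```python
import numpy as np

def quasi_euclidean_norm(v):
    """Piece-wise linear approximation of the Euclidean norm (no square root).
    Coefficients are illustrative, not the paper's optimal ones."""
    a = np.sort(np.abs(v))[::-1]                      # components, largest first
    weights = np.array([1.0, 0.41, 0.25][:a.size])
    return float(np.dot(weights, a))

def vector_median_filter(img, radius=1, norm=quasi_euclidean_norm):
    """Vector median filter for an H x W x C image: each output pixel is the
    window vector whose summed distance to all other window vectors is minimal."""
    H, W, C = img.shape
    out = img.copy()
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            window = img[y-radius:y+radius+1, x-radius:x+radius+1].reshape(-1, C)
            costs = [sum(norm(v - u) for u in window) for v in window]
            out[y, x] = window[int(np.argmin(costs))]
    return out

if __name__ == "__main__":
    noisy = np.random.randint(0, 256, (32, 32, 3)).astype(np.float32)
    filtered = vector_median_filter(noisy)
    print(filtered.shape)
```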

Keywords: euclidean norm, quasi euclidean norm, vector median filtering, applied mathematics

Procedia PDF Downloads 474
3568 Development of Gully Erosion Prediction Model in Sokoto State, Nigeria, using Remote Sensing and Geographical Information System Techniques

Authors: Nathaniel Bayode Eniolorunda, Murtala Abubakar Gada, Sheikh Danjuma Abubakar

Abstract:

The challenge of erosion in the study area is persistent, suggesting the need for a better understanding of the mechanisms that drive it. Thus, the study developed a predictive erosion model (RUSLE_Sok), deploying Remote Sensing (RS) and Geographical Information System (GIS) tools. The nature and pattern of the factors of erosion were characterized, while soil losses were quantified. Factors’ impacts were also measured, and the morphometry of gullies was described. Data on the five factors of RUSLE and distances to settlements, rivers and roads (K, R, LS, P, C, DS, DRd and DRv) were combined and processed following standard RS and GIS algorithms. Harmonized World Soil Data (HWSD), a Shuttle Radar Topographical Mission (SRTM) image, Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), a Sentinel-2 image accessed and processed within the Google Earth Engine, the road network and settlements were the data combined and calibrated into the factors for erosion modeling. A gully morphometric study was conducted at some purposively selected sites. Factors of soil erosion showed low, moderate to high patterns. Soil losses ranged from 0 to 32.81 tons/ha/year, classified into low (97.6%), moderate (0.2%), severe (1.1%) and very severe (1.05%) forms. The multiple regression analysis shows that the factors statistically significantly predicted soil loss, F(8, 153) = 55.663, p < .0005. Except for the C-Factor with a negative coefficient, all other factors were positive, with contributions in the order of LS>C>R>P>DRv>K>DS>DRd. Gullies are generally from less than 100 m to about 3 km in length. Average minimum and maximum depths at gully heads are 0.6 and 1.2 m, while those at mid-stream are 1 and 1.9 m, respectively. The minimum downstream depth is 1.3 m, while the maximum is 4.7 m. Deeper gullies exist in proximity to rivers. With minimum and maximum gully elevation values ranging between 229 and 338 m and an average slope of about 3.2%, the study area is relatively flat. The study concluded that the major erosion influencers in the study area are topography and vegetation cover and that RUSLE_Sok predicted soil loss more effectively than the ordinary RUSLE. The adoption of conservation measures such as tree planting and contour ploughing on sloping farmlands was recommended.
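
For readers unfamiliar with RUSLE-style overlay modeling, the short sketch below multiplies co-registered factor rasters cell by cell (A = R * K * LS * C * P) and bins the result into severity classes. The random rasters and class break points are placeholders for illustration, not the calibrated factors or thresholds used in RUSLE_Sok.

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """Cell-wise RUSLE soil loss A = R * K * LS * C * P (t/ha/yr), given
    co-registered factor rasters as NumPy arrays."""
    return R * K * LS * C * P

def classify_loss(A):
    """Bin soil loss into four severity classes (break points are illustrative)."""
    return np.digitize(A, bins=[10.0, 20.0, 25.0])   # low / moderate / severe / very severe

if __name__ == "__main__":
    shape = (100, 100)
    rng = np.random.default_rng(0)
    R  = rng.uniform(200, 600, shape)     # rainfall erosivity
    K  = rng.uniform(0.05, 0.3, shape)    # soil erodibility
    LS = rng.uniform(0.1, 5.0, shape)     # slope length/steepness
    C  = rng.uniform(0.01, 0.5, shape)    # cover management
    P  = rng.uniform(0.5, 1.0, shape)     # support practice
    A = rusle_soil_loss(R, K, LS, C, P)
    print("max loss:", A.max(), "class counts:", np.bincount(classify_loss(A).ravel()))
```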

Keywords: RUSLE_Sok, Sokoto, google earth engine, sentinel-2, erosion

Procedia PDF Downloads 75
3567 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technology, and specific procedures used in the digital investigation are not keeping up with criminal developments. Therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that an algorithm based on artificial intelligence (AI) is highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent are dependent on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK is implemented using the Java Agent Development Framework, with Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISA and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic tool kit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
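
To illustrate the kind of task a hash-set agent performs, here is a minimal Python sketch that hashes every file under an evidence directory and flags matches against a known-hash set. It is a stand-alone illustration, not part of the MADIK framework, and the directory and hash set are placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    """Stream a file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def hash_set_scan(evidence_dir, known_hashes):
    """Flag files whose digest appears in a known-hash set (e.g. contraband or
    known-benign lists), returning {path: digest} for matches."""
    matches = {}
    for p in Path(evidence_dir).rglob("*"):
        if p.is_file():
            digest = sha256_of(p)
            if digest in known_hashes:
                matches[str(p)] = digest
    return matches

if __name__ == "__main__":
    known = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}  # empty file
    print(hash_set_scan(".", known))
```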

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 212
3566 Bactericidal Efficacy of Quaternary Ammonium Compound on Carriers with Food Additive Grade Calcium Hydroxide against Salmonella Infantis and Escherichia coli

Authors: M. Shahin Alam, Satoru Takahashi, Mariko Itoh, Miyuki Komura, Mayuko Suzuki, Natthanan Sangsriratanakul, Kazuaki Takehara

Abstract:

Cleaning and disinfection are key components of routine biosecurity in livestock farming and the food processing industry. The usage of suitable disinfectants and their proper concentration are important factors for a successful biosecurity program. Disinfectants have optimum bactericidal and virucidal efficacies at temperatures above 20°C, but very few studies on the application and effectiveness of disinfectants at low temperatures have been done. In the present study, the bactericidal efficacies of food additive grade calcium hydroxide (FdCa(OH)₂), quaternary ammonium compound (QAC) and their mixture were investigated under different conditions, including time, organic materials (fetal bovine serum: FBS) and temperature, either in suspension or in carrier tests. Salmonella Infantis and Escherichia coli, which are the most prevalent gram negative bacteria in commercial poultry housing and the food processing industry, were used in this study. Initially, we evaluated these disinfectants at two different temperatures (4°C and room temperature (RT) (25°C ± 2°C)) and 7 contact times (0, 5 and 30 sec, 1, 3, 20 and 30 min), with suspension tests either in the presence or absence of 5% FBS. Secondly, we investigated the bactericidal efficacies of these disinfectants by carrier tests (rubber, stainless steel and plastic) at the same temperatures and 4 contact times (30 sec, 1, 3, and 5 min). Then, we compared the bactericidal efficacies of each disinfectant within their mixtures, as follows. When QAC was diluted with redistilled water (dW2) at 1:500 (QACx500) to obtain a final concentration of didecyl-dimethylammonium chloride (DDAC) of 200 ppm, it could inactivate Salmonella Infantis within 5 sec at RT either with or without 5% FBS in the suspension test; however, at 4°C it required 30 min in the presence of 5% FBS. FdCa(OH)₂ solution alone could inactivate bacteria within 1 min both at RT and 4°C even with 5% FBS. When FdCa(OH)₂ powder was added at a final concentration of 0.2% to QACx500 (Mix500), the mixture could inactivate bacteria within 30 sec and 5 sec, respectively, with or without 5% FBS at 4°C. The findings from the suspension test indicated that low temperature inhibited the bactericidal efficacy of QAC, whereas Mix500 was effective, regardless of short contact time and low temperature, even with 5% FBS. In the carrier test, a single disinfectant required a bit more time to inactivate bacteria on rubber and plastic surfaces than on stainless steel. However, Mix500 could inactivate S. Infantis on rubber, stainless steel and plastic surfaces within 30 sec and 1 min, respectively, at RT and 4°C; for E. coli, it required only 30 sec at both temperatures. So, synergistic effects were observed on different carriers at both temperatures. For a successful enhancement of biosecurity during winter, disinfectants should be selected that have short contact times with optimum efficacy against the target pathogen. The present study findings help farmers to make proper strategies for the application of disinfectants in their livestock farming and the food processing industry.

Keywords: carrier, food additive grade calcium hydroxide (FdCa(OH)₂), quaternary ammonium compound, synergistic effects

Procedia PDF Downloads 294
3565 Approaches to Diagnosis of Ectopic Solid Organs in the Abdominopelvic Cavity

Authors: Van-Ngoc-Cuong Le, Ngoc-Quy Le

Abstract:

Ectopic solid organs and tissues in the abdominopelvic cavity include the accessory liver lobe, accessory spleens (ectopic splenic tissue), wandering spleen, ectopic pancreatic tissue, ectopic kidney (pancake kidney), cryptorchidism (undescended or ectopic testis), and ectopic endometriosis. The diagnostic approach relies on diagnostic imaging techniques, of which magnetic resonance imaging is the most important, and is illustrated here through a clinical case study and reports. Ectopic organs and tumors are easy to confuse; this is a concern, and the practical challenges encountered and solutions adopted in the field of image analysis are also discussed.

Keywords: ectopic, accessory, wandering, tumor

Procedia PDF Downloads 4
3564 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the “bundle of rights” theory, this paper establishes a specific three-level structure of data rights. The paper analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz, and concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property would be beneficial for establishing the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 39
3563 Building Atmospheric Moisture Diagnostics: Environmental Monitoring and Data Collection

Authors: Paula Lopez-Arce, Hector Altamirano, Dimitrios Rovas, James Berry, Bryan Hindle, Steven Hodgson

Abstract:

Efficient mould remediation and accurate diagnostics of the moisture problems leading to condensation and mould growth in dwellings are largely untapped. A number of factors are contributing to the rising trend of excessive moisture in homes, mainly linked with modern living, increased levels of occupation and rising fuel costs, as well as making homes more energy efficient. Environmental monitoring by means of data collection through logger sensors and survey forms has been performed in a range of buildings from different UK regions. Air and surface temperature and relative humidity values of residential areas affected by condensation and/or mould issues were recorded. Additional measurements were taken through different trials changing the type, location, and position of loggers. In some instances, IR thermal images and ventilation rates have also been acquired. Results have been interpreted together with environmental key parameters by processing and connecting data from loggers and survey questionnaires, both in buildings with and without moisture issues. Monitoring exercises carried out during winter and spring show the importance of developing and following accurate protocols for guidance to obtain consistent, repeatable and comparable results and to improve the performance of environmental monitoring. A model and a protocol are being developed to build a diagnostic tool with the goal of performing a simple but precise residential atmospheric moisture diagnosis to distinguish the cause of condensation and mould generation, i.e., a ventilation, insulation or heating system issue. This research shows the relevance of monitoring and processing environmental data to assign moisture risk levels and determine the origin of condensation or mould when dealing with a building atmospheric moisture excess.
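
One building block of such a diagnostic tool is relating logged air temperature and relative humidity to surface condensation risk. The sketch below uses the standard Magnus dew-point approximation and a simple surface-temperature margin; the margin value and function names are illustrative assumptions, not the project's protocol.

```python
import math

def dew_point_c(temp_c, rh_percent, a=17.62, b=243.12):
    """Magnus-formula dew point (deg C) from air temperature and relative humidity."""
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def condensation_risk(air_c, rh_percent, surface_c, margin_c=1.0):
    """Flag a logger reading as at-risk when the surface temperature falls to
    within margin_c of the dew point of the room air."""
    td = dew_point_c(air_c, rh_percent)
    return surface_c <= td + margin_c, td

if __name__ == "__main__":
    at_risk, td = condensation_risk(air_c=20.0, rh_percent=75.0, surface_c=13.5)
    print(f"dew point {td:.1f} C, at risk: {at_risk}")
```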

Keywords: environmental monitoring, atmospheric moisture, protocols, mould

Procedia PDF Downloads 139
3562 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is therefore not far-fetched. Proper classification of this textual information in a given context has, however, been very difficult. As a result, we decided to conduct a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to gain a better understanding of the process of designing and developing a robust and more accurate sentiment classifier that can correctly classify social media textual information of a given context between hate speech and inverted compliments with a high level of accuracy, by assessing different artificial intelligence techniques. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the number of studies down to 31. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of performance accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed single deep learning techniques and machine learning techniques. The Python programming language outperformed the Java programming language for sentiment analyzer development due to its simplicity and AI-based library functionalities. Based on some of the important findings from this study, we made recommendations for future research.

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 115
3561 The Impacts of Internal Employees on Brand Building: A Case Study of Cell Phone

Authors: Adnan Gohar

Abstract:

This research work examines the importance of internal employees in the making of a brand (cell phone) through customer satisfaction, which basically explains the connection of internal employees with external customers. This research is designed to measure the satisfaction level of internal employees, which further connects to the evolution of the product into a brand, leaving a brand image in the eye of the external customer. The main focus is that internal employees are as important as external customers for the uplift of the product resulting in the brand. Internal employees include individual organization employees, vendors, departments, and distributors.

Keywords: brand building, customer satisfaction, internal employees, mobile franchise

Procedia PDF Downloads 257
3560 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems used to determine the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it’s worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases, and it becomes unfeasible to gauge opinion on an individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM or sentiment analysis (SA) has been mainly performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag of n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm that has been commonly used in computer vision (CV) problems started to shape NLP approaches and language models (LM) lately. This gave a sudden rise to the usage of pretrained language models (PTM), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, NER (Named-Entity Recognition), Question Answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM by using a sizable corpus belonging to a large private company containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and it is based on convolutional neural networks. BERT is a monolingual model in our case and is based on transformer neural networks. It uses masked language modeling and next sentence prediction tasks that allow the bidirectional training of the transformers. During the training phase of the architecture, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since the experiments showed that their contribution to the model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that the usage of deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and the SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model.
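
For reference, the traditional baseline looks roughly like the sketch below: a TF-IDF weighted bag of word n-grams feeding a linear SVM. The toy comments, labels, and hyperparameters are placeholders standing in for the study's 76,000-comment Turkish corpus and tuned settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy Turkish-like comments with polarity labels (illustrative placeholders).
texts  = ["harika bir hizmet", "çok kötü deneyim", "ürün fena değil",
          "berbat müşteri desteği", "mükemmel kalite", "asla tavsiye etmem"]
labels = [1, 0, 1, 0, 1, 0]

# Word 1-2-gram bag-of-words with TF-IDF weighting feeding a linear SVM.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(C=1.0),
)

scores = cross_val_score(model, texts, labels, cv=3)
print("cross-validated accuracy:", scores.mean())
```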

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 146
3559 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models

Authors: Ozan Kahraman, Hao Feng

Abstract:

Several models, such as the Weibull, Modified Gompertz, Biphasic linear, and Log-logistic models, have been proposed in order to describe non-linear inactivation kinetics and have been used to fit non-linear inactivation data of several microorganisms for inactivation by heat, high pressure processing or pulsed electric field. First-order kinetic parameters (D-values and z-values) have often been used to describe microbial inactivation by non-thermal processing methods such as ultrasound, and most ultrasonic inactivation studies have employed them to describe the reduction in the microbial survival count. This study was conducted to analyze E. coli O157:H7 inactivation data by using five microbial survival models (First-order, Weibull, Modified Gompertz, Biphasic linear and Log-logistic). These kinetic models were used for fitting the inactivation curves of Escherichia coli O157:H7. The residual sum of squares and the total sum of squares criteria were used to evaluate the models. The statistical indices of the kinetic models were used to fit inactivation data for E. coli O157:H7 by MTS at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and Biphasic models fitted the data from the MTS treatment best, as shown by high R² values. The non-linear kinetic models, including the Modified Gompertz, First-order, and Log-logistic models, did not provide any better fit to the data from MTS compared to the Weibull and Biphasic models. It was observed that the data found in this study did not follow first-order kinetics, possibly because the cells sensitive to ultrasound treatment were inactivated first, resulting in a fast inactivation period, while those resistant to ultrasound were killed slowly. The Weibull and Biphasic models were found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
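
As an illustration of how such a non-linear survival model can be fitted and scored with the residual/total sum of squares criterion, here is a brief sketch fitting the Weibull form log10(N/N0) = -(t/delta)^p by non-linear least squares. The survival data points are synthetic placeholders, not the study's MTS measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Weibull inactivation model: log10(N/N0) = -(t/delta)**p."""
    return -(t / delta) ** p

# Illustrative survival data (minutes vs log10 reduction), not the paper's data.
t = np.array([0.5, 1, 2, 4, 6, 8, 10])
log_s = np.array([-0.4, -0.9, -1.6, -2.6, -3.3, -3.9, -4.3])

(delta, p), _ = curve_fit(weibull_log_survival, t, log_s, p0=(1.0, 1.0), bounds=(0, np.inf))
pred = weibull_log_survival(t, delta, p)
rss = np.sum((log_s - pred) ** 2)           # residual sum of squares
tss = np.sum((log_s - log_s.mean()) ** 2)   # total sum of squares
print(f"delta = {delta:.2f} min, p = {p:.2f}, R^2 = {1 - rss / tss:.3f}")
```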

Keywords: Weibull, Biphasic, MTS, kinetic models, E.coli O157:H7

Procedia PDF Downloads 366
3558 Intangible Cultural Heritage as a Strategic Place Branding Tool

Authors: L. Ozoliņa

Abstract:

Place branding as a strategic marketing tool has been applied in Latvia since 2000. The main objective of the study is to find unique connecting aspects of intangible cultural heritage elements in the development of sustainable place branding. The study is based on in-depth semi-structured interviews with Latvian place branding experts and content analysis of Latvia's place brand identities. The study indicates that place branding is an internal co-creational and educational process of all involved stakeholders of the place and highlights a critical view of local place branding practices regarding the importance of in-depth research of the intangible cultural heritage.

Keywords: belonging, identity, intangible cultural heritage, narrative, self-image, place branding

Procedia PDF Downloads 144
3557 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most acceptable means of communication, where we can quickly exchange our feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers, and in the same way, it is easier to listen to audio played from a device than to extract output from computers or devices. Especially with robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this factor, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline.” This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that has information about the target objects to be detected and the start and end times to extract the required interval from the video. Speech is converted to text using the automatic speech recognition QuartzNet model. The summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames from the video are extracted, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (frame numbers) is converted to speech using a text-to-speech model and is played from the device. This project is developed for 80 YOLO labels, and the user can extract frames based on only one or two target labels. This pipeline can be extended for more than two target labels easily by making appropriate changes in the object detection module. This project is developed for four different speech command formats by including sample examples in the prompt used by the GPT-3 model. Based on user preference, one can come up with a new speech command format by including some examples of the respective format in the prompt used by the GPT-3 model. This pipeline can be used in many projects like human-machine interfaces, human-robot interaction, and surveillance through speech commands. All object detection projects can be upgraded using this pipeline so that one can give speech commands and the output is played from the device.
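
The overall flow of the pipeline can be summarised in a few lines of orchestration code. In the sketch below, the four stage functions (transcribe, summarise, detect_objects, speak) are hypothetical placeholders standing in for the QuartzNet, GPT-3, YOLO and text-to-speech components; only the wiring between stages is meant to be illustrative.

```python
# Skeleton of the audio-visual co-data pipeline. All stage functions below are
# hypothetical placeholders, not OpenVINO Model Zoo calls.

def transcribe(audio_path):            # ASR stage (placeholder)
    return "find all frames with a dog between second 10 and second 40"

def summarise(command_text):           # NLP stage (placeholder)
    return {"labels": ["dog"], "start_s": 10, "end_s": 40}

def detect_objects(frame):             # object-detection stage (placeholder)
    return ["dog"] if frame["index"] % 7 == 0 else []

def speak(text):                       # TTS stage (placeholder)
    print("TTS>", text)

def run_pipeline(audio_path, frames):
    """Wire the four stages together: speech command -> summary -> detection -> speech."""
    query = summarise(transcribe(audio_path))
    hits = [f["index"] for f in frames
            if query["start_s"] <= f["time_s"] <= query["end_s"]
            and set(query["labels"]) & set(detect_objects(f))]
    speak(f"Target objects found in frames: {hits}")
    return hits

if __name__ == "__main__":
    fake_frames = [{"index": i, "time_s": i / 2} for i in range(120)]
    run_pipeline("command.wav", fake_frames)
```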

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 80
3556 An Extraction of Cancer Region from MR Images Using Fuzzy Clustering Means and Morphological Operations

Authors: Ramandeep Kaur, Gurjit Singh Bhathal

Abstract:

Cancer diagnosis is a very difficult task. A magnetic resonance imaging (MRI) scan is used to produce an image of any part of the body and provides an efficient way for the diagnosis of cancer or tumors. In the existing method, fuzzy c-means (FCM) clustering is used for the diagnosis of the tumor. In the proposed method, FCM is used to diagnose cancer of the foot. FCM finds the centroids of the clusters of the foot cancer obtained from MRI images. The FCM thresholding result shows the extracted region of the cancer. Morphological operations are then applied to refine the extracted cancer region.
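
A compact sketch of the processing chain described above follows: a minimal fuzzy c-means implementation on pixel intensities, selection of the brighter cluster as the candidate lesion, and morphological opening to clean the mask. The synthetic image, cluster count and structuring-element size are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy import ndimage

def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on a 1-D feature vector (pixel intensities).
    Returns cluster centres and the membership matrix (n_clusters x n_pixels)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)
    for _ in range(n_iter):
        um = u ** m
        centres = (um @ x) / um.sum(axis=1)
        d = np.abs(x[None, :] - centres[:, None]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return centres, u

def extract_region(image, open_size=3):
    """Segment a slice with FCM, keep the brighter cluster as the candidate
    lesion mask, then clean it with morphological opening."""
    x = image.astype(float).ravel()
    centres, u = fuzzy_cmeans(x)
    bright = int(np.argmax(centres))
    mask = (np.argmax(u, axis=0) == bright).reshape(image.shape)
    return ndimage.binary_opening(mask, structure=np.ones((open_size, open_size)))

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[20:30, 25:35] = 200   # synthetic bright "lesion"
    img += np.random.default_rng(1).normal(0, 5, img.shape)
    print("lesion pixels found:", extract_region(img).sum())
```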

Keywords: magnetic resonance imaging (MRI), fuzzy C mean clustering, segmentation, morphological operations

Procedia PDF Downloads 398
3555 Enhanced Thai Character Recognition with Histogram Projection Feature Extraction

Authors: Benjawan Rangsikamol, Chutimet Srinilta

Abstract:

This research paper deals with the extraction of Thai character features using the proposed histogram projection so as to improve recognition performance. The process starts with the transformation of image files into binary files before thinning. After character thinning, the skeletons are passed to the proposed extraction step, which uses histogram projection (horizontal and vertical) to extract unique features that are the inputs of the subsequent recognition step. The recognition rate with the proposed extraction technique is as high as 97 percent since the technique works very well with the idiosyncrasies of Thai characters.
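
As a rough illustration of the feature extraction step, the sketch below computes horizontal and vertical projection histograms of a binary character skeleton, resamples them to a fixed length, and concatenates them into a feature vector that could feed a classifier such as a multilayer perceptron. The bin count, scaling and toy skeleton are illustrative assumptions.

```python
import numpy as np

def projection_features(skeleton, bins=16):
    """Horizontal and vertical projection histograms of a binary character
    skeleton, resampled to a fixed number of bins and concatenated into a
    single feature vector."""
    skeleton = (skeleton > 0).astype(int)
    h_proj = skeleton.sum(axis=1)          # row-wise counts (horizontal projection)
    v_proj = skeleton.sum(axis=0)          # column-wise counts (vertical projection)

    def resample(p, n):
        idx = np.linspace(0, p.size - 1, n)
        return np.interp(idx, np.arange(p.size), p)

    feats = np.concatenate([resample(h_proj, bins), resample(v_proj, bins)])
    return feats / (feats.max() + 1e-9)    # scale to [0, 1]

if __name__ == "__main__":
    char = np.zeros((32, 24), dtype=np.uint8)
    char[4:28, 12] = 1; char[4, 8:16] = 1    # crude skeleton of a character
    print(projection_features(char).shape)   # (32,)
```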

Keywords: character recognition, histogram projection, multilayer perceptron, Thai character features extraction

Procedia PDF Downloads 464
3554 Friction Stir Processing of the AA7075T7352 Aluminum Alloy: Microstructures, Mechanical Properties and Texture Characteristics

Authors: Roopchand Tandon, Zaheer Khan Yusufzai, R. Manna, R. K. Mandal

Abstract:

The present work describes the microstructures, mechanical properties, and texture characteristics of the friction stir processed AA7075T7352 aluminum alloy. Phases were analyzed with the help of an X-ray diffractometer (XRD) and a transmission electron microscope (TEM), along with a differential scanning calorimeter (DSC). Depth-wise microstructures and dislocation characteristics from the nugget zone of the friction stir processed specimens were studied using bright field (BF) and weak beam dark-field (WBDF) TEM micrographs, and notable variations in the microstructures as well as in the dislocation characteristics were found. XRD analysis displays changes in the chemistry as well as the size of the phases in the nugget and heat affected zones (nugget and HAZ), whereas the base metal (BM) microstructures remain unaffected. High-density dislocations were noticed in the nugget regions of the processed specimen, along with the formation of dislocation contours and tangles. The η′ and η phases, along with the GP zones, were completely dissolved and trapped by the dislocations. These observations corroborate the improved mechanical as well as stress corrosion cracking (SCC) performance. Bulk texture and residual stress measurements were done with the Panalytical Empyrean MRD system with Co-Kα radiation. The nugget zone (NZ) displays compressive residual stress compared to the thermo-mechanically affected (TM) and heat affected zones (HAZ). Typical f.c.c. deformation texture components (e.g., Copper, Brass, and Goss) were seen. Such a phenomenon is attributed to the enhanced hardening as well as other mechanical performance of the alloy. Mechanical characterization was done using tensile tests and an Anton Paar instrumented microhardness tester. Enhancement in the yield strength is reported from 89 MPa to 170 MPa; on the other hand, the highest hardness value was found in the nugget zone of the processed specimens.

Keywords: aluminum alloy, mechanical characterization, texture characteristics, friction stir processing

Procedia PDF Downloads 107
3553 Building and Tree Detection Using Multiscale Matched Filtering

Authors: Abdullah H. Özcan, Dilara Hisar, Yetkin Sayar, Cem Ünsalan

Abstract:

In this study, an automated building and tree detection method is proposed using DSM data and a true orthophoto image. Multiscale matched filtering is used on the DSM data. First, a watershed transform is applied. Then, Otsu’s thresholding method is used as an adaptive threshold to segment each watershed region. Detected objects are masked with NDVI to separate buildings and trees. The proposed method is able to detect buildings and trees without requiring any elevation threshold. We tested our method on the ISPRS semantic labeling dataset and obtained promising results.
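
A minimal sketch of the segmentation idea is shown below: a watershed transform on the inverted DSM partitions the scene into regions, each region is thresholded adaptively with Otsu's method, and the resulting elevated mask is split into buildings and trees with an NDVI cut-off. The synthetic rasters, marker heuristic and NDVI threshold are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def detect_buildings_and_trees(dsm, ndvi, ndvi_tree=0.3):
    """Label elevated objects in a DSM with a watershed + per-region Otsu
    threshold, then split them into trees (high NDVI) and buildings (low NDVI)."""
    # Watershed on the inverted DSM groups pixels into catchments around peaks.
    markers, _ = ndimage.label(dsm > np.percentile(dsm, 90))
    regions = watershed(-dsm, markers=markers)

    elevated = np.zeros_like(dsm, dtype=bool)
    for label in np.unique(regions):
        vals = dsm[regions == label]
        if vals.size > 1 and vals.max() > vals.min():
            t = threshold_otsu(vals)                 # adaptive, per-region threshold
            elevated |= (regions == label) & (dsm > t)

    trees = elevated & (ndvi >= ndvi_tree)
    buildings = elevated & (ndvi < ndvi_tree)
    return buildings, trees

if __name__ == "__main__":
    dsm = np.zeros((80, 80)); dsm[10:30, 10:30] = 8.0; dsm[50:70, 50:70] = 6.0
    ndvi = np.zeros((80, 80)); ndvi[50:70, 50:70] = 0.6
    b, t = detect_buildings_and_trees(dsm, ndvi)
    print("building px:", b.sum(), "tree px:", t.sum())
```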

Keywords: building detection, local maximum filtering, matched filtering, multiscale

Procedia PDF Downloads 320
3552 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique

Authors: Harpal Singh, Sakshi Batra

Abstract:

The widespread and easy access to multimedia content and the possibility to make numerous copies without loss of significant fidelity have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology. This is the concept of embedding some sort of data or special pattern (watermark) in the multimedia content; this information will later prove ownership in case of a dispute, trace the marked document’s dissemination, identify a misappropriating person or simply inform users about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry increases. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos. It is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address some serious challenges compared to image watermarking schemes, like real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between motion and motionless regions, and they are particularly vulnerable to attacks, for example, frame swapping, statistical analysis, rotation, noise, median and crop attacks. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) using a redundant wavelet. This scheme utilizes the various transforms for embedding watermarks on different layers by using hybrid systems. For this purpose, the video frames are partitioned into layers (RGB) and the watermark is embedded in two forms in the video frames, using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which allows the watermarking method to be more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key embedding watermarking scheme. Thus, for watermark embedding and extraction, the same key is required; therefore, the key must be shared between the owner and the verifier via some safe network. This paper demonstrates the performance by considering different quality metrics, namely peak signal-to-noise ratio, structural similarity index and correlation values, and also applies some attacks to prove the robustness. The experimental results are presented to demonstrate that the proposed scheme can withstand a variety of video processing attacks while preserving imperceptibility.
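
To make the DWT+SVD part of the hybrid scheme concrete, here is a heavily simplified sketch that embeds a watermark into one colour layer of a frame by perturbing the singular values of the LL sub-band. It omits the FrFT stage, the redundant wavelet and the key generation, and the embedding strength alpha is an illustrative value, so it should be read as a toy illustration rather than the proposed algorithm.

```python
import numpy as np
import pywt

def embed_watermark(frame_channel, watermark, alpha=0.05, wavelet="haar"):
    """Simplified DWT+SVD embedding on one colour layer of a frame: the singular
    values of the LL sub-band are perturbed by the watermark's singular values."""
    LL, (LH, HL, HH) = pywt.dwt2(frame_channel.astype(float), wavelet)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
    k = min(S.size, Sw.size)
    S_marked = S.copy()
    S_marked[:k] += alpha * Sw[:k]          # additive embedding in the singular values
    LL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), wavelet)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (256, 256))
    mark = (np.random.rand(64, 64) > 0.5).astype(float) * 255
    marked = embed_watermark(frame, mark)
    print("max pixel change:", np.abs(marked - frame).max())
```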

Keywords: discrete wavelet transform, robustness, video watermarking, watermark

Procedia PDF Downloads 224
3551 Fahr Disease vs Fahr Syndrome in the Field of a Case Report

Authors: Angelis P. Barlampas

Abstract:

Objective: The confusion of terms is a common practice in many situations of everyday life. But in some circumstances, such as in medicine, the precise meaning of a word carries a critical role for the health of the patient. Fahr disease and Fahr syndrome are often falsely used interchangeably, but they are two different conditions with different natural histories, different etiologies and different medical management. A case of the seldom-seen Fahr disease is presented, and a comparison with the more common Fahr syndrome follows. Materials and method: A 72-year-old patient came to the emergency department complaining of some kind of non-specific mental disturbances, like anxiety, difficulty concentrating, and tremor. The problems had a long course, but he had the impression that they were getting worse lately, so he decided to have them checked. Past history and laboratory tests were unremarkable. Then, a computed tomography examination was ordered. Results: The CT exam showed bilateral, hyperattenuating areas of heavy, dense calcium-type deposits in the basal ganglia, striatum, pallidum, thalami, the dentate nucleus, and the cerebral white matter of the frontal, parietal and occipital lobes, as well as small areas of the pons. Taking into account the absence of any known preexisting illness and the fact that the emergency laboratory tests were without findings, a hypothesis of the rare Fahr disease was proposed. The suspicion was confirmed with further, more specific tests, which showed the lack of any other conditions that could share the same radiological image. Differentiating between Fahr disease and Fahr syndrome: Fahr disease is primarily autosomal dominant, with symmetrical and bilateral intracranial calcifications; the patient is healthy until middle age; there is an absence of biochemical abnormalities; and the family history is consistent with autosomal dominant inheritance. Fahr syndrome presents earlier, between 30 and 40 years old, also with symmetrical and bilateral intracranial calcifications, but with endocrinopathies (idiopathic hypoparathyroidism, secondary hypoparathyroidism, hyperparathyroidism, pseudohypoparathyroidism, pseudopseudohypoparathyroidism, etc.); the disease can appear at any age, and there are abnormal laboratory or imaging findings. Conclusion: Fahr disease and Fahr syndrome are not the same illness, although this is not well known to inexperienced doctors. As clinical radiologists, we have to inform our colleagues when a radiological image, along with the patient's history, probably implies a rare condition and not something more usual, and to prompt the investigation along the right route. In our case, a genetic test could have been done earlier to reveal the problem, thus avoiding unnecessary specific tests which cost time and are uncomfortable for the patient.

Keywords: fahr disease, fahr syndrome, CT, brain calcifications

Procedia PDF Downloads 62
3550 Detecting Hate Speech and Cyberbullying Using Natural Language Processing

Authors: Nádia Pereira, Paula Ferreira, Sofia Francisco, Sofia Oliveira, Sidclay Souza, Paula Paulino, Ana Margarida Veiga Simão

Abstract:

Social media has progressed into a platform for hate speech among its users, and thus there is an increasing need to develop automatic detection classifiers of offense and conflicts to help decrease the prevalence of such incidents. Online communication can be used to intentionally harm someone, which is why such classifiers could be essential in social networks. A possible application of these classifiers is the automatic detection of cyberbullying. Even though identifying the aggressive language used in online interactions could be important to build cyberbullying datasets, there are other criteria that must be considered. Being able to capture the language that is indicative of the intent to harm others in a specific context of online interaction is fundamental. Offense and hate speech may be the foundation of online conflicts, which have become common in social media and are an emergent research focus in machine learning and natural language processing. This study presents two Portuguese-language offense-related datasets which serve as examples for future research and extend the study of the topic. The first is similar to other offense detection related datasets and is entitled the Aggressiveness dataset. The second is a novelty because of the use of the history of the interaction between users and is entitled the Conflicts/Attacks dataset. Both datasets were developed in different phases. Firstly, we performed a content analysis of verbal aggression witnessed by adolescents in situations of cyberbullying. Secondly, we computed frequency analyses from the previous phase to gather lexical and linguistic cues used to identify potentially aggressive conflicts and attacks which were posted on Twitter. Thirdly, thorough annotation of real tweets was performed by independent postgraduate educational psychologists with experience in cyberbullying research. Lastly, we benchmarked these datasets with other machine learning classifiers.

Keywords: aggression, classifiers, cyberbullying, datasets, hate speech, machine learning

Procedia PDF Downloads 228
3549 Digital Forgery Detection by Signal Noise Inconsistency

Authors: Bo Liu, Chi-Man Pun

Abstract:

A novel technique for digital forgery detection by signal noise inconsistency is proposed in this paper. A forged area spliced from another picture contains some features which may be inconsistent with the rest of the image. The noise pattern and level are possible factors revealing such inconsistency. To detect such noise discrepancies, the test picture is initially segmented into small pieces. The noise pattern and level of each segment are then estimated by using various filters. The noise features constructed in this step are utilized in an energy-based graph cut to expose the forged area in the final step. Experimental results show that our method provides a good illustration of regions with noise inconsistency in various scenarios.
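
The noise estimation step can be illustrated with a short sketch that computes a block-wise noise map using the robust wavelet estimator sigma = median(|HH|)/0.6745; a spliced region carrying a different noise level shows up as a jump in this map. The block size, wavelet choice and synthetic test image are illustrative assumptions, and the paper's multi-filter estimation and graph-cut localisation are not reproduced here.

```python
import numpy as np
import pywt

def blockwise_noise_map(gray, block=32, wavelet="db2"):
    """Estimate the local noise standard deviation in each block using the
    robust wavelet estimator sigma = median(|HH|) / 0.6745."""
    H, W = gray.shape
    rows, cols = H // block, W // block
    sigma_map = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = gray[r*block:(r+1)*block, c*block:(c+1)*block].astype(float)
            _, (_, _, HH) = pywt.dwt2(patch, wavelet)
            sigma_map[r, c] = np.median(np.abs(HH)) / 0.6745
    return sigma_map

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(128, 2, (256, 256))                  # low-noise background
    img[64:128, 64:192] += rng.normal(0, 10, (64, 128))   # "spliced" noisier patch
    print(np.round(blockwise_noise_map(img), 1))
```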

Keywords: forgery detection, splicing forgery, noise estimation, noise

Procedia PDF Downloads 461
3548 The Job of Rhetoric in Public Relations Practice

Authors: Talal Alqahtani

Abstract:

For all institutions, either public or private, communication is more important now than ever. This is because the importance of communication has grown over the years, and it has the ability to either make or break an organization. With globalization, changing technology, and other emergent issues that affect organizations, the communication given out has had to be better, sharper, and both proactive and reactive. This is the reason why the importance of public relations has been on the increase. Institutions realize the importance of having a good image and having public relations experts who can effectively and easily manage communication in an institution in times of crisis. Public relations by itself is not, however, always effective, and this has led to the adoption of rhetoric in communication. The use of rhetoric has undergone a long transformation because, in the past, it was used only in politics. Rhetoric in communication has come to be appreciated and adopted by many diverse fields and sectors. This study looks at the job of rhetoric in public relations practice and how it relates to the management of an institution's notoriety.

Keywords: communication, notoriety, rhetoric, public relation

Procedia PDF Downloads 234
3547 The Impact of Legislation on Waste and Losses in the Food Processing Sector in the UK/EU

Authors: David Lloyd, David Owen, Martin Jardine

Abstract:

Introduction: European weight regulations with respect to food products require a full understanding of the regulation guidelines to assure regulatory compliance. It is suggested that the complexity of the regulation leads to practices which result in overfilling of food packages by food processors. Purpose: To establish current practices by food processors and the financial, sustainability and societal impacts on the food supply chain of ineffective food production practices. Methods: An analysis of food packing controls with 10 companies of varying food categories and quantitative-based research with a further 15 food processors on the confidence in weight control analysis of finished food packs within their organisation. Results: A process floor analysis of manufacturing operations focussing on 10 products found overfill of packages ranging from 4.8% to 20.2%. Standard deviation figures for all products showed a potential for reducing the average weight of the pack whilst still retaining the legal status of the product. In 20% of cases, an automatic weight analysis machine was in situ; however, packs were still significantly overweight. Collateral impacts noted included the effect of overfill on raw material purchase and added food miles, often on a global basis, with one raw material alone creating 10,000 extra food miles due to the poor weight control of the processing unit. A case study of a meat and a bakery product will be discussed, with the impact of poor controls resulting from complex legislation. The case studies will highlight the extra energy costs in production and the impact of the extra weight on fuel usage. If successful, a risk assessment model used primarily for food safety but adapted to identify waste/sustainability risks will be discussed within the presentation.

Keywords: legislation, overfill, profile, waste

Procedia PDF Downloads 406
3546 Semigroups of Linear Transformations with Fixed Subspaces: Green’s Relations and Ideals

Authors: Yanisa Chaiya, Jintana Sanwong

Abstract:

Let V be a vector space over a field and W a subspace of V. Let Fix(V,W) denote the set of all linear transformations on V which fix all elements of W. In this paper, we show that Fix(V,W) is a semigroup under the composition of maps and describe Green’s relations on this semigroup in terms of images, kernels and the dimensions of subspaces of the quotient space V/W, where V/W = {v+W : v is an element of V} with v+W = {v+w : w is an element of W}. Let dim(U) denote the dimension of a vector space U and Vα = {vα : v is an element of V}, where vα is the image of v under a linear transformation α. For any cardinal number a, let a′ = min{b : b > a}. We also show that the ideals of Fix(V,W) are precisely the sets Fix(r) = {α ∊ Fix(V,W) : dim(Vα/W) < r}, where 1 ≤ r ≤ a′ and a = dim(V/W). Moreover, we prove that if V is a finite-dimensional vector space, then every ideal of Fix(V,W) is principal.

Keywords: Green’s relations, ideals, linear transformation semigroups, principal ideals

Procedia PDF Downloads 292
3545 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype by environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters, the second level describes how these individual parameters are distributed within a plant population, and the third level corresponds to the attribution of priors on population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters permits one to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, in which the surface areas of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the times at which data are available to be the same for all the different individuals. This allows data from image analysis to be discarded when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana’s growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
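
The sampler structure described above, with conjugate Gibbs updates for the population parameters and a Metropolis step for the nonlinear individual parameters, can be illustrated with the compact sketch below. It is written in Python for illustration although the study used Julia, and it uses a toy saturating growth curve with known observation noise and simulated data; the model, priors and tuning constants are illustrative assumptions, not the organ-scale Greenlab model of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "leaf area" data: y_ij = A * (1 - exp(-r_i * t_j)) + noise,
# with individual log-rates theta_i = log(r_i) drawn from a population.
A, sigma = 10.0, 0.3
t = np.linspace(1, 21, 8)                        # measurement days
true_theta = rng.normal(np.log(0.15), 0.25, 12)  # 12 plants
Y = np.array([A * (1 - np.exp(-np.exp(th) * t)) + rng.normal(0, sigma, t.size)
              for th in true_theta])

def log_lik(theta_i, y_i):
    mean = A * (1 - np.exp(-np.exp(theta_i) * t))
    return -0.5 * np.sum((y_i - mean) ** 2) / sigma**2

# Metropolis-within-Gibbs sampler.
n_iter, n_ind = 4000, Y.shape[0]
theta = np.zeros(n_ind)          # individual parameters
mu, tau2 = 0.0, 1.0              # population parameters
mu0, s0_2 = 0.0, 100.0           # normal prior on mu
a0, b0 = 2.0, 0.1                # inverse-gamma prior on tau^2
samples = []

for it in range(n_iter):
    # Metropolis step for each nonlinear individual parameter theta_i.
    for i in range(n_ind):
        prop = theta[i] + rng.normal(0, 0.1)
        log_acc = (log_lik(prop, Y[i]) - log_lik(theta[i], Y[i])
                   - 0.5 * ((prop - mu)**2 - (theta[i] - mu)**2) / tau2)
        if np.log(rng.random()) < log_acc:
            theta[i] = prop
    # Conjugate Gibbs update for the population mean mu.
    var = 1.0 / (n_ind / tau2 + 1.0 / s0_2)
    mu = rng.normal(var * (theta.sum() / tau2 + mu0 / s0_2), np.sqrt(var))
    # Conjugate Gibbs update for the population variance tau^2 (inverse gamma).
    tau2 = 1.0 / rng.gamma(a0 + n_ind / 2, 1.0 / (b0 + 0.5 * np.sum((theta - mu)**2)))
    samples.append((mu, tau2))

burn = np.array(samples[1000:])
print("posterior mean of mu:", burn[:, 0].mean(), "(true:", np.log(0.15), ")")
print("posterior mean of tau^2:", burn[:, 1].mean())
```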

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 303
3544 Greening the Blue: Enzymatic Degradation of Commercially Important Biopolymer Dextran Using Dextranase from Bacillus Licheniformis KIBGE-IB25

Authors: Rashida Rahmat Zohra, Afsheen Aman, Shah Ali Ul Qader

Abstract:

The commercially important biopolymer dextran is enzymatically degraded into lower molecular weight fractions of vast industrial potential. Various organisms are associated with dextranase production, among which fungal, yeast and bacterial origins are used for commercial production. Dextranases are used to remove contaminating dextran in the sugar processing industry and are also used in oral care products for efficient removal of dental plaque. Among the hydrolytic products of dextran, isomaltooligosaccharides have a prebiotic effect in humans and reduce the cariogenic effect of sucrose in the oral cavity. Dextran derivatives produced by hydrolysis of the high molecular weight polymer are also conjugated with other chemical and metallic compounds for usage in the pharmaceutical, fine chemical, cosmetics, and food industries. Owing to the vast applications of dextran and dextranases, the current study focused on the purification and analysis of kinetic parameters of dextranase from a newly isolated strain of Bacillus licheniformis KIBGE-IB25. Dextranase was purified up to 35.75-fold with a specific activity of 1405 U/mg and a molecular weight of 158 kDa. Analysis of kinetic parameters revealed that dextranase performs optimum cleavage of low molecular weight dextran (5000 Da, 0.5%) at 35ºC in 15 min at pH 4.5, with a Km and Vmax of 0.3738 mg/ml and 182.0 µmol/min, respectively. Thermal stability profiling of dextranase showed that it retained 80% activity for up to 6 hours at 30-35ºC and remained 90% active at pH 4.5. In short, the dextranase reported here performs rapid cleavage of the substrate under mild operational conditions, which makes it an ideal candidate for dextran removal in the sugar processing industry and for commercial production of low molecular weight oligosaccharides.

Keywords: Bacillus licheniformis, dextranase, gel permeation chromatography, enzyme purification, enzyme kinetics

Procedia PDF Downloads 440
3543 Executive Deficits in Non-Clinical Hoarders

Authors: Thomas Heffernan, Nick Neave, Colin Hamilton, Gill Case

Abstract:

Hoarding is the acquisition of and failure to discard possessions, leading to excessive clutter and significant psychological/emotional distress. From a cognitive-behavioural approach, excessive hoarding arises from information-processing deficits, as well as from problems with emotional attachment to possessions and beliefs about the nature of possessions. In terms of information processing, hoarders have shown deficits in executive functions, including working memory, planning, inhibitory control, and cognitive flexibility. However, this previous research is often confounded by co-morbid factors such as anxiety, depression, or obsessive-compulsive disorder. The current study adopted a cognitive-behavioural approach, specifically assessing executive deficits and working memory in a non-clinical sample of hoarders, compared with non-hoarders. In this study, a non-clinical sample of 40 hoarders and 73 non-hoarders (defined by the Savings Inventory-Revised) completed the Adult Executive Functioning Inventory, which measures working memory and inhibition, the Dysexecutive Questionnaire-Revised, which measures general executive function, and the Hospital Anxiety and Depression Scale, which measures mood. The participant sample was made up of unpaid young adult volunteers who were undergraduate students and who completed the questionnaires on a university campus. The results revealed that, after observing no differences between hoarders and non-hoarders on age, sex, and mood, hoarders reported significantly more deficits in inhibitory control and general executive function when compared with non-hoarders. There was no between-group difference in general working memory. This suggests that non-clinical hoarders have a specific difficulty with inhibitory control, which enables one to resist repeated, unwanted urges. This might explain the hoarders’ inability to resist urges to buy and keep items that are no longer of any practical use. These deficits may be underpinned by general executive function deficiencies.

Keywords: hoarding, memory, executive, deficits

Procedia PDF Downloads 193
3542 On-Road Text Detection Platform for Driver Assistance Systems

Authors: Guezouli Larbi, Belkacem Soundes

Abstract:

The automation of the text detection process can help the driver in the driving task. Its application can be very useful in helping drivers to have more information about their environment by facilitating the reading of road signs such as directional signs, events, stores, etc. In this paper, a system consisting of two stages is proposed. In the first one, we use pseudo-Zernike moments to pinpoint areas of the image that may contain text. The architecture of this part is based on three main steps: region of interest (ROI) detection, text localization, and non-text region filtering. Then, in the second stage, we present a convolutional neural network architecture (On-Road Text Detection Network - ORTDN) which constitutes the classification phase. The results show that the proposed framework achieved ≈ 35 fps and an mAP of ≈ 90%, thus a low computational time with competitive accuracy.
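
To give a flavour of the classification stage, the sketch below defines a small convolutional network that labels candidate ROI patches as text or non-text. The patch size and layer configuration are illustrative assumptions and do not reproduce the ORTDN architecture or the pseudo-Zernike ROI stage.

```python
import torch
import torch.nn as nn

class TextPatchClassifier(nn.Module):
    """Small CNN that classifies candidate ROI patches (e.g. 32x64 grayscale
    crops pre-selected by a moment-based stage) as text vs. non-text.
    Layer sizes are illustrative, not the ORTDN architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 8)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 8, 128), nn.ReLU(), nn.Linear(128, 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = TextPatchClassifier()
    patches = torch.randn(8, 1, 32, 64)   # batch of candidate ROI crops
    print(model(patches).shape)           # torch.Size([8, 2])
```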

Keywords: text detection, CNN, PZM, deep learning

Procedia PDF Downloads 83
3541 Processing and Economic Analysis of Rain Tree (Samanea saman) Pods for Village Level Hydrous Bioethanol Production

Authors: Dharell B. Siano, Wendy C. Mateo, Victorino T. Taylan, Francisco D. Cuaresma

Abstract:

Biofuel is one of the renewable energy sources adopted by the Philippine government in order to lessen the dependency on foreign fuel and to reduce carbon dioxide emissions. Rain tree pods were seen to be a promising source of bioethanol since they contain a significant amount of fermentable sugars. The study was conducted to establish the complete procedure for processing rain tree pods for village level hydrous bioethanol production. The production process covered collection, drying, storage, shredding, dilution, extraction, fermentation, and distillation. The feedstock was sun-dried, and the moisture content was determined to be in the range of 20% to 26% prior to storage. The dilution ratio was 1:1.25 (1 kg of pods = 1.25 L of water), and the extraction process yielded a sugar concentration of 22 °Bx to 24 °Bx. The dilution period was three hours. After three hours of diluting the samples, the juice was extracted using an extractor with a capacity of 64.10 L/hour. 150 L of rain tree pod juice was extracted and subjected to fermentation using a village level anaerobic bioreactor. Fermentation with yeast (Saccharomyces cerevisiae) can speed up the process, thus producing more ethanol in a shorter period of time; however, fermentation without added yeast also produces ethanol, at a lower volume and with a slower fermentation process. Distillation of 150 L of fermented broth was done for six hours at 85 °C to 95 °C (feedstock temperature) and 74 °C to 95 °C at the column head (vapor state of ethanol). The highest volume of ethanol recovered, 14.89 L, was obtained with yeast fermentation of five-day duration, while the lowest, 11.63 L, was found without yeast fermentation of three-day duration. In general, the results suggested that rain tree pods have very good potential as a feedstock for bioethanol production. Fermentation of rain tree pod juice can be done with or without yeast.

Keywords: fermentation, hydrous bioethanol, rain tree pods, village level

Procedia PDF Downloads 295