Search results for: image and signal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7066

5386 Radiation Usage Impact on Anti-Nutritional Compounds (Antitrypsin and Phytic Acid) of Livestock and Poultry Foods

Authors: Mohammad Khosravi, Ali Kiani, Behroz Dastar, Parvin Showrang

Abstract:

A review was carried out on the important anti-nutritional compounds of livestock and poultry foods and the effect of radiation treatment on them. Nowadays, with advances in technology, different methods have been considered for the optimum use of nutrients in livestock and poultry foods. Steaming, extruding, pelleting, and the use of chemicals are the most common and popular methods in food processing. The use of radiation in food-processing research in the livestock and poultry industry is currently receiving considerable attention. Ionizing (electron, gamma) and non-ionizing beams (microwave and infrared) are the most widely used types of radiation in animal food processing. In recent research, these beams have been used to remove or reduce anti-nutritional factors and microbial contamination and to improve the digestibility of nutrients in poultry and livestock food. The evidence presented will help researchers to recognize techniques of relevance to them. Simplification of some of these techniques, especially in developing countries, must be addressed so that they can be used more widely.

Keywords: antitrypsin, gamma, anti-nutritional components, phytic acid, radiation

Procedia PDF Downloads 343
5385 Operator Optimization Based on Hardware Architecture Alignment Requirements

Authors: Qingqing Gai, Junxing Shen, Yu Luo

Abstract:

Due to hardware architecture characteristics, some operators tend to achieve better performance if the input/output tensor dimensions are aligned to a certain minimum granularity, such as the convolution and deconvolution commonly used in deep learning. Furthermore, if the requirements are not met, the general strategy is to pad with zeros to satisfy them, potentially leading to under-utilization of the hardware resources. Therefore, for convolutions and deconvolutions whose input and output channels do not meet the minimum granularity alignment, we propose to transfer the W-dimensional data to the C-dimension for computation (W2C) so that the C-dimension meets the hardware requirements. This scheme also reduces the number of computations in the W-dimension. Although this scheme substantially increases the amount of computation, the operator's speed can improve significantly. It achieves remarkable speedups on multiple hardware accelerators, including Nvidia Tensor Cores, Qualcomm digital signal processors (DSPs), and Huawei neural processing units (NPUs). Only the network structure needs to be modified and the operator weights rearranged offline, without retraining. At the same time, for some operators, such as ReduceMax, we observe that transferring the C-dimensional data to the W-dimension (C2W) and replacing the ReduceMax with a MaxPool can accomplish acceleration under certain circumstances.
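
To make the W2C idea concrete, the following minimal NumPy sketch (an illustration under assumed NCHW tensor layouts, not the authors' implementation) folds a factor of the W dimension into the channel dimension so that the channel count reaches an alignment granularity; the convolution weights would have to be rearranged offline in the same way.

```python
import numpy as np

def w_to_c(x, factor):
    """Fold a factor of the W dimension into C so that the channel count
    meets the hardware's alignment granularity.

    x: activation tensor in NCHW layout, shape (N, C, H, W).
    factor: how many adjacent W positions are merged into the channel axis.
    Hypothetical helper for illustration only.
    """
    n, c, h, w = x.shape
    assert w % factor == 0, "W must be divisible by the folding factor"
    # (N, C, H, W) -> (N, C, H, W/factor, factor) -> (N, C*factor, H, W/factor)
    x = x.reshape(n, c, h, w // factor, factor)
    return x.transpose(0, 1, 4, 2, 3).reshape(n, c * factor, h, w // factor)

# Example: 4 input channels padded to a granularity of 16 would waste 75% of the
# channel lanes; folding 8 columns of W into C yields 32 aligned channels instead.
x = np.random.rand(1, 4, 32, 64).astype(np.float32)
print(w_to_c(x, 8).shape)  # (1, 32, 32, 8)
```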

Keywords: convolution, deconvolution, W2C, C2W, alignment, hardware accelerator

Procedia PDF Downloads 104
5384 A Differential Detection Method for Chip-Scale Spin-Exchange Relaxation Free Atomic Magnetometer

Authors: Yi Zhang, Yuan Tian, Jiehua Chen, Sihong Gu

Abstract:

Chip-scale spin-exchange relaxation-free (SERF) atomic magnetometers make use of millimeter-scale vapor cells micro-fabricated by the micro-electromechanical systems (MEMS) technique and the SERF mechanism, resulting in high spatial resolution and high sensitivity. They are useful for biomagnetic imaging, including magnetoencephalography and magnetocardiography. In a prevailing scheme, a circularly polarized on-resonance laser beam is adopted for both pumping and probing the atomic polarization, and the magnetic-field-sensitive signal is extracted from the enhancement of the transmitted laser intensity as the atomic polarization increases on the zero-field level-crossing resonance. The scheme is very suitable for integration; however, laser amplitude modulation (AM) noise and laser frequency-modulation-to-amplitude-modulation (FM-AM) noise are superimposed on the photon shot noise, reducing the signal-to-noise ratio (SNR). To suppress AM and FM-AM noise, the paper puts forward a novel scheme which adopts circularly polarized on-resonance light for pumping and a linearly polarized frequency-detuned laser for probing. The transmitted beam is divided into transmission and reflection beams by a polarization analyzer, and the angle between the analyzer's transmission polarization axis and the frequency-detuned laser's polarization direction is set to 45°. The magnetic-field-sensitive signal is extracted from the polarization-rotation enhancement of the frequency-detuned laser, which increases the intensity difference between the two beams as the atomic polarization increases. Therefore, the AM and FM-AM noise in the two beams is common-mode and can be almost entirely canceled by differential detection. We have carried out an experiment to study our scheme. The experiment reveals that the noise in the differential signal is obviously smaller than that in each beam. The scheme is promising for the development of more sensitive chip-scale magnetometers.
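
A minimal numerical sketch (with illustrative parameters only) of why the 45° analyzer and differential detection cancel the common-mode laser noise: both output ports share the same multiplicative AM/FM-AM fluctuations, while their difference is proportional to the small polarization-rotation signal.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10_000)

theta = 1e-3 * np.sin(2 * np.pi * 5 * t)          # small polarization-rotation signal (rad)
common = 1 + 0.02 * rng.standard_normal(t.size)   # shared AM / FM-AM laser noise

# Analyzer at 45 deg: the two ports split the intensity as
# I_t ~ cos^2(45deg - theta), I_r ~ sin^2(45deg - theta); both carry the same laser noise.
i_t = common * np.cos(np.pi / 4 - theta) ** 2
i_r = common * np.sin(np.pi / 4 - theta) ** 2

single = i_t            # single-beam detection: noise rides on a large background
diff = i_t - i_r        # differential: common-mode part cancels, diff ~ common * sin(2*theta)

# Residual fluctuation of each detection mode (smaller is better).
print(np.std(single - single.mean()), np.std(diff - 2 * theta))
```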

Keywords: atomic magnetometer, chip scale, differential detection, spin-exchange relaxation free

Procedia PDF Downloads 170
5383 Analyzing the Risk Based Approach in General Data Protection Regulation: Basic Challenges Connected with Adapting the Regulation

Authors: Natalia Kalinowska

Abstract:

The adoption of the General Data Protection Regulation (GDPR) concluded the European Commission's four-year work in this area in the European Union. Considering the far-reaching changes that GDPR will introduce, the European legislator envisaged a two-year transitional period: member states and companies have to prepare for the new regulation by 25 May 2018. The idea that constitutes a new approach to data protection in the European Union is the risk-based approach. So far, as a result of the implementation of Directive 95/46/EC, many European countries (including Poland) have adopted very detailed regulations specifying technical and organisational security measures; the Polish implementing rules, for example, even indicate how long a password should be. According to the new approach, from May 2018 controllers and processors will be obliged to apply security measures adequate to the level of risk associated with the specific data processing. Risk in the GDPR should be interpreted as the likelihood of a breach of the rights and freedoms of the data subject. According to Recital 76, the likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context and purposes of the processing. The GDPR does not indicate which security measures should be applied; the recitals give only examples such as anonymisation or encryption. It is the controller's decision what type of security measures to consider sufficient, and the controller will be responsible if these measures are not sufficient or if the identification of the risk level is incorrect. The regulation indicates a few levels of risk: Recital 76 mentions risk and high risk, but some lawyers argue that there is one more category, low risk/no risk. Low-risk/no-risk data processing is processing that is unlikely to result in a risk to the rights and freedoms of natural persons. The GDPR also mentions types of data processing for which a controller does not have to evaluate the level of risk because they have already been classified as 'high risk' processing, e.g., large-scale processing of special categories of data or processing using new technologies. The methodology includes an analysis of legal regulations, e.g., the GDPR and the Polish Act on the Protection of Personal Data, as well as ICO guidelines and articles concerning the risk-based approach in the GDPR. The main conclusion is that an appropriate risk assessment is the key to keeping data safe and avoiding financial penalties. On the one hand, this approach seems to be more equitable, not only for controllers and processors but also for data subjects; on the other hand, it increases controllers' uncertainty in the assessment, which could have a direct impact on incorrect data protection and potential responsibility for infringement of the regulation.

Keywords: general data protection regulation, personal data protection, privacy protection, risk based approach

Procedia PDF Downloads 252
5382 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO

Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky

Abstract:

The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Thus, data processing and preparation turn out to be a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000D AppLicability dEpended pRocessiNg mOdel. This Python-based model analyzes and converts XML S1000D-based files into an easier data format that can be used in machine learning techniques while preserving the different logic and relationships in the files. The model parses the files in a given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both English and French were tested, and special characters were decoded. Updates to the technical manuals were taken into consideration as well. The model was tested on different versions of S1000D, and the results demonstrated its ability to effectively handle the applicability, requirements, references, and relationships across all files and on different levels.
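
A minimal sketch of the parsing-and-grouping stage in Python (the element and attribute names below are simplified stand-ins for the S1000D schema, and the folder layout is assumed; SALERNO's actual reference resolution and requirement handling are not reproduced here):

```python
import glob
import xml.etree.ElementTree as ET
import pandas as pd

rows = []
for path in glob.glob("data_modules/*.xml"):          # folder name is illustrative
    root = ET.parse(path).getroot()
    # "para" and "applicRefId" are simplified stand-ins for the real S1000D elements.
    for para in root.iter("para"):
        applic = para.get("applicRefId", "ALL")       # assumed applicability attribute
        rows.append({"file": path,
                     "applicability": applic,
                     "text": "".join(para.itertext()).strip()})

df = pd.DataFrame(rows)
# Group the extracted text by applicability, as SALERNO does, and export to Excel.
for applic, group in df.groupby("applicability"):
    group.to_excel(f"applicability_{applic}.xlsx", index=False)
```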

Keywords: aeronautics, big data, data processing, machine learning, S1000D

Procedia PDF Downloads 157
5381 Wasteless Solid-Phase Method for Conversion of Iron Ores Contaminated with Silicon and Phosphorus Compounds

Authors: А. V. Panko, Е. V. Ablets, I. G. Kovzun, М. А. Ilyashov

Abstract:

Based upon a generalized analysis of modern know-how in the sphere of processing, concentration, and purification of iron-ore raw materials (IORM), in particular the most widespread ferrioxide-silicate materials (FOSM) containing impurities of phosphorus and other elements, the special role of nanotechnological initiatives in the improvement of such processes is noted. Ideas on the role of nanoparticles in the processes of FOSM carbonization with subsequent direct reduction of the ferric oxides contained in them to the metal phase, as well as in the processes of alkali treatment and separation of powdered iron from phosphorus compounds, are considered. Using the obtained results, a wasteless solid-phase method for processing, concentration, and purification of IORM and FOSM from compounds of phosphorus, silicon, and other impurities was developed, excelling known methods of direct iron reduction from iron ores and metallurgical slimes.

Keywords: iron ores, solid-phase reduction, nanoparticles in reduction and purification of iron from silicon and phosphorus, wasteless method of ores processing

Procedia PDF Downloads 488
5380 Drone Classification Using Conventional Models with Embedded Audio-Visual Features

Authors: Hrishi Rakshit, Pooneh Bagheri Zadeh

Abstract:

This paper investigates the performance of drone classification methods using conventional DCNNs with different hyperparameters when additional drone audio data are embedded in the dataset for training and further classification. First, a custom dataset is created using different drone images from the University of Southern California (USC) datasets and the Leeds Beckett University datasets, with embedded drone audio signals. Three well-known DCNN architectures, namely ResNet50, Darknet53, and ShuffleNet, are employed on the created dataset, tuning their hyperparameters such as learning rate, maximum epochs, and mini-batch size with different optimizers. Precision-recall curves and F1 score versus threshold curves are used to evaluate the performance of the named classification algorithms. Experimental results show that ResNet50 has the highest efficiency compared to the other DCNN methods.
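
A hedged PyTorch sketch of the kind of fine-tuning experiment described above; the dataset path, class layout, and hyperparameter values are placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hyperparameters under study; the values here are placeholders.
lr, batch_size, max_epochs = 1e-3, 32, 10

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("drone_dataset/train", transform=tfm)  # assumed folder layout
loader = torch.utils.data.DataLoader(train_ds, batch_size=batch_size, shuffle=True)

model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # e.g. drone vs. non-drone

optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(max_epochs):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Precision-recall and F1-versus-threshold curves can then be computed from the held-out scores, e.g. with sklearn.metrics.precision_recall_curve.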

Keywords: drone classifications, deep convolutional neural network, hyperparameters, drone audio signal

Procedia PDF Downloads 104
5379 An Efficient Separation for Convolutive Mixtures

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Dylan Menzies, Ismail Shahin

Abstract:

This paper describes a new efficient blind source separation method that uses a non-uniform filter bank and a new structure with different sub-bands. The method provides reduced permutation and increased convergence speed compared to the full-band algorithm. Recently, several structures have been suggested to deal with two problems: reducing permutation and increasing the speed of convergence of the adaptive algorithm for correlated input signals. The permutation problem is avoided by using adaptive filters of orders lower than that of the full-band adaptive filter, which operate at a sampling rate lower than the sampling rate of the input signal. The signals decomposed by the analysis filter bank are less correlated in each sub-band than the full-band input signal, which promotes better rates of convergence.
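
A minimal sketch of the sub-band idea (a uniform two-band bank and a plain LMS update are used here for brevity; the paper's non-uniform bank and separation structure are not reproduced):

```python
import numpy as np
from scipy.signal import firwin, lfilter

def lms(x, d, taps=16, mu=0.01):
    """Simple LMS adaptive filter: adapt w so that w * x tracks the reference d."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(taps, len(x)):
        xn = x[n - taps:n][::-1]
        y[n] = w @ xn
        w += mu * (d[n] - y[n]) * xn
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(4000)                 # stand-in for one mixture channel
d = lfilter([1.0, 0.5, -0.3], [1.0], x)       # stand-in for the reference signal

# Two-band analysis bank (half-band low-pass and its complementary high-pass).
low = firwin(65, 0.5)
high = firwin(65, 0.5, pass_zero=False)
bands = [lfilter(h, 1.0, x) for h in (low, high)]
refs = [lfilter(h, 1.0, d) for h in (low, high)]

# Decimate each band by 2 and adapt a shorter filter per sub-band.
outputs = [lms(b[::2], r[::2]) for b, r in zip(bands, refs)]
```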

Keywords: blind source separation, estimates, full-band, mixtures, sub-band

Procedia PDF Downloads 445
5378 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality

Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye

Abstract:

When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings, and it is obtained through matrix perturbation theory by exploiting the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, unlike in natural language processing, there is no notion of a "word"; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is done by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Using the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in accurately computing embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
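
A small sketch of the k-mer tokenization, Word2Vec training, and PIP-loss comparison described above (the toy sequences and hyperparameters are illustrative; real experiments would use genomic corpora):

```python
import numpy as np
from gensim.models import Word2Vec

def kmers(seq, k):
    """Slide a window of length k over a sequence to produce its k-mer 'words'."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy corpus of sequences; real data would come from genomic FASTA files.
sequences = ["ACGTACGTGACCT", "TTGACGTACGTAA", "CCGTACGTGACGT"]
corpus = [kmers(s, 3) for s in sequences]

def train(dim):
    model = Word2Vec(sentences=corpus, vector_size=dim, window=5, min_count=1, sg=1, seed=0)
    vocab = sorted(model.wv.key_to_index)
    return np.stack([model.wv[w] for w in vocab])

def pip_loss(e1, e2):
    """PIP loss: Frobenius distance between the two pairwise-inner-product matrices."""
    return np.linalg.norm(e1 @ e1.T - e2 @ e2.T, "fro")

print(pip_loss(train(16), train(32)))   # compare two candidate embedding dimensions
```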

Keywords: word embeddings, k-mer embedding, dimensionality reduction

Procedia PDF Downloads 137
5377 Employee Branding: An Exploratory Study Applied to Nurses in an Organization

Authors: Pawan Hinge, Priya Gupta

Abstract:

Due to cutting-edge competition between organizations and the war for talent, the workforce as an asset is gaining significance. Employees are considered the brand ambassadors of an organization, and their interactions with clients and customers may directly or indirectly impact the overall value of the organization. Especially for organizations in the healthcare industry, the value of the organization in the perception of its employees can be part of a revenue-generating and talent-retention strategy. In such a context, it is essential to understand that brand awareness among employees can affect the employer brand image and brand value, since the brand ambassadors are the interface between the organization and its customers and clients. In this exploratory study, we adopted both quantitative and qualitative approaches for data analysis. Our study shows existing variation among nurses working in different business units of the same organization in terms of their customer interface or interactions and brand awareness.

Keywords: brand awareness, brand image, brand value, customer interface

Procedia PDF Downloads 285
5376 Video Club as a Pedagogical Tool to Shift Teachers’ Image of the Child

Authors: Allison Tucker, Carolyn Clarke, Erin Keith

Abstract:

Introduction: In education, the determination to uncover privileged practices requires critical reflection to be placed at the center of both pre-service and in-service teacher education. Confronting deficit thinking about children’s abilities and shifting to holding an image of the child as capable and competent is necessary for teachers to engage in responsive pedagogy that meets children where they are in their learning and builds on strengths. This paper explores the ways in which early elementary teachers' perceptions of the assets of children might shift through the pedagogical use of video clubs. Video club is a pedagogical practice whereby teachers record and view short videos with the intended purpose of deepening their practices. The use of video club as a learning tool has been an extensively documented practice. In this study, a video club is used to watch short recordings of playing children to identify the assets of their students. Methodology: The study on which this paper is based asks the question: What are the ways in which teachers’ image of the child and teaching practices evolve through the use of video club focused on the strengths of children demonstrated during play? Using critical reflection, it aims to identify and describe participants’ experiences of examining their personally held image of the child through the pedagogical tool video club, and how that image influences their practices, specifically in implementing play pedagogy. Teachers enrolled in a graduate-level play pedagogy course record and watch videos of their own students as a means to notice and reflect on the learning that happens during play. Using a co-constructed viewing protocol, teachers identify student strengths and consider their pedagogical responses. Video club provides a framework for teachers to critically reflect in action, return to the video to rewatch the children or themselves and discuss their noticings with colleagues. Critical reflection occurs when there is focused attention on identifying the ways in which actions perpetuate or challenge issues of inherent power in education. When the image of the child held by the teacher is from a deficit position and is influenced by hegemonic dimensions of practice, critical reflection is essential in naming and addressing power imbalances, biases, and practices that are harmful to children and become barriers to their thriving. The data is comprised of teacher reflections, analyzed using phenomenology. Phenomenology seeks to understand and appreciate how individuals make sense of their experiences. Teacher reflections are individually read, and researchers determine pools of meaning. Categories are identified by each researcher, after which commonalities are named through a recursive process of returning to the data until no more themes emerge or saturation is reached. Findings: The final analysis and interpretation of the data are forthcoming. However, emergent analysis of the data collected using teacher reflections reveals the ways in which the use of video club grew teachers’ awareness of their image of the child. It shows video club as a promising pedagogical tool when used with in-service teachers to prompt opportunities for play and to challenge deficit thinking about children and their abilities to thrive in learning.

Keywords: asset-based teaching, critical reflection, image of the child, video club

Procedia PDF Downloads 105
5375 Developing Rice Disease Analysis System on Mobile via iOS Operating System

Authors: Rujijan Vichivanives, Kittiya Poonsilp, Canasanan Wanavijit

Abstract:

This research aims to create a mobile tool to analyze rice diseases quickly and easily. The principles of object-oriented software engineering and the Objective-C language were used as the software development methodology, and the decision tree technique was used as the analysis method. Application users can select the features of a rice disease or the color appearing on the rice leaves, and the recognition results are displayed on the iOS mobile screen. After completing the software development, unit testing and integration testing methods were used to check program validity. In addition, three plant experts and forty farmers assessed the usability and benefit of this system. Overall user satisfaction was found to be at a good level (57%). The plant experts commented that various disease symptoms should be added to the database for more precise analysis results. For further research, it is suggested that an image processing system be developed as a tool that allows users to search for and analyze rice diseases more conveniently and with greater accuracy.
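
Although the app itself is written in Objective-C, the decision-tree analysis step can be sketched in Python with scikit-learn; the symptom encoding and disease labels below are purely illustrative and are not the study's data:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoded observations: [lesion_color, lesion_shape, leaf_part]
# e.g. color 0=brown, 1=yellow, 2=white; shape 0=spot, 1=stripe; part 0=tip, 1=blade, 2=collar.
X = [[0, 0, 1], [1, 1, 1], [2, 0, 0], [0, 1, 2], [1, 0, 1], [2, 1, 0]]
y = ["brown spot", "bacterial blight", "blast", "sheath rot", "brown spot", "blast"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[2, 0, 0]]))   # classify a newly observed symptom pattern
```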

Keywords: rice disease, data analysis system, mobile application, iOS operating system

Procedia PDF Downloads 287
5374 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations; hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
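
As an illustration of how a linear feature extractor is kernelized (shown here with PCA/KPCA from scikit-learn rather than the authors' KDNP), the following sketch reduces stand-in hyperspectral pixel vectors to a lower-dimensional feature space:

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
# Stand-in for hyperspectral pixels: 500 samples x 100 spectral bands.
X = rng.standard_normal((500, 100))

linear = PCA(n_components=10).fit_transform(X)                       # linear directions
nonlinear = KernelPCA(n_components=10, kernel="rbf", gamma=0.01).fit_transform(X)  # kernelized

print(linear.shape, nonlinear.shape)   # both (500, 10): reduced feature spaces
```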

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction

Procedia PDF Downloads 344
5373 Detection and Classification of Strabismus Using Convolutional Neural Networks and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG 16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into a VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The true positive rate (TPR) and false positive rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively, with an FPR of 5.26%, 5.55%, and 0%, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
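
A small NumPy sketch of the stage-2 geometry (the landmark choice and coordinate names are illustrative, not the paper's exact definition):

```python
import numpy as np

def misalignment_features(pupil, eye_landmark):
    """Distance and direction of the pupil centre relative to an eye landmark.

    pupil, eye_landmark: (x, y) pixel coordinates; hypothetical helper used
    only to illustrate the distance/angle characterization described above.
    """
    dx, dy = np.subtract(pupil, eye_landmark)
    distance = np.hypot(dx, dy)
    angle_horizontal = np.degrees(np.arctan2(dy, dx))   # angle w.r.t. the horizontal axis
    angle_vertical = 90.0 - angle_horizontal            # complementary angle w.r.t. the vertical axis
    return distance, angle_horizontal, angle_vertical

print(misalignment_features(pupil=(120, 84), eye_landmark=(100, 80)))
```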

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 93
5372 An Optimal Steganalysis Based Approach for Embedding Information in Image Cover Media with Security

Authors: Ahlem Fatnassi, Hamza Gharsellaoui, Sadok Bouamama

Abstract:

This paper deals with topics of interest in the fields of steganography and steganalysis. Steganography involves hiding information in a cover media to obtain the stego media in such a way that the cover media is perceived not to carry any embedded message by its unintended recipients. Steganalysis is the mechanism of detecting the presence of hidden information in the stego media, and it can lead to the prevention of disastrous security incidents. In this paper, we provide a critical review of the steganalysis algorithms available for analyzing the characteristics of an image stego media against the corresponding cover media and for understanding the process of embedding the information and its detection. We anticipate that this paper can also give a clear picture of the current trends in steganography so that we can develop and improve appropriate steganalysis algorithms.
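
For context, a minimal least-significant-bit (LSB) embedding sketch, the kind of simple scheme that statistical steganalysis targets (purely illustrative; not a method proposed or reviewed in detail in the paper):

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit string in the least significant bits of a grayscale cover image."""
    stego = cover.copy().ravel()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits   # overwrite the LSBs
    return stego.reshape(cover.shape)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)     # stand-in cover image
message = rng.integers(0, 2, size=1000, dtype=np.uint8)
stego = embed_lsb(cover, message)

# The stego image differs from the cover by at most 1 gray level per pixel,
# which is why steganalysis relies on statistical traces in the LSB plane.
print(np.abs(stego.astype(int) - cover.astype(int)).max())
```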

Keywords: optimization, heuristics and metaheuristics algorithms, embedded systems, low-power consumption, steganalysis heuristic approach

Procedia PDF Downloads 292
5371 Mixotrophic Growth of Chlorella sp. on Raw Food Processing Industrial Wastewater: Effect of COD Tolerance

Authors: Suvidha Gupta, R. A. Pandey, Sanjay Pawar

Abstract:

The effluents from various food processing industries contain high levels of BOD, COD, suspended solids, nitrate, and phosphate. Mixotrophic growth of microalgae using food processing industrial wastewater as an organic carbon source has emerged as a more effective and energy-intensive means of nutrient removal and COD reduction. The present study details the treatment of non-sterilized, unfiltered food processing industrial wastewater by microalgae for nutrient removal, as well as the determination of COD tolerance using different dilutions of the wastewater. In addition, the effect of different inoculum percentages of microalgae on the removal efficiency of nutrients at a given dilution has been studied. To examine the effect of dilution and COD tolerance, the wastewater, having an initial COD of 5000 mg/L (±5), nitrate of 28 mg/L (±10), and phosphate of 24 mg/L (±10), was diluted to obtain COD values of 3000 mg/L and 1000 mg/L. The experiments were carried out in 1 L conical flasks with intermittent aeration and different inoculum percentages, i.e., 10%, 20%, and 30%, of Chlorella sp. isolated from an area near NEERI, Nagpur. The experiments were conducted for 6 days with a 12:12 light-dark period, and various parameters such as COD, TOC, NO₃⁻-N, PO₄³⁻-P, and total solids were determined on a daily basis. Results revealed that, for 10% and 20% inoculum, over 90% COD and TOC reduction was obtained with wastewater containing a COD of 3000 mg/L, whereas over 80% COD and TOC reduction was obtained with wastewater containing a COD of 1000 mg/L. Moreover, the microalgae were found to tolerate wastewater containing a COD of 5000 mg/L, with over 60% and 80% reduction in COD and TOC, respectively. Similar results were obtained with 10% and 20% inoculum at all COD dilutions, whereas for 30% inoculum over 60% COD and 70% TOC reduction was obtained. In the case of nutrient removal, over 70% nitrate removal and 45% phosphate removal were obtained with 20% inoculum at all dilutions. The obtained results indicated that microalgae-assisted nutrient removal gives maximum COD and TOC reduction with 3000 mg/L COD and 20% inoculum. Hence, microalgae-assisted wastewater treatment is not only effective for the removal of nutrients but can also tolerate high COD, up to 5000 mg/L, and high solid content.

Keywords: Chlorella sp., chemical oxygen demand, food processing industrial wastewater, mixotrophic growth

Procedia PDF Downloads 332
5370 Automated 3D Segmentation System for Detecting Tumor and Its Heterogeneity in Patients with High Grade Ovarian Epithelial Cancer

Authors: Dimitrios Binas, Marianna Konidari, Charis Bourgioti, Lia Angela Moulopoulou, Theodore Economopoulos, George Matsopoulos

Abstract:

High-grade ovarian epithelial cancer (OEC) is a fatal gynecological cancer, and the poor prognosis of this entity is closely related to considerable intratumoral genetic heterogeneity. By examining imaging data, it is possible to assess the heterogeneity of tumorous tissue. This study proposes a methodology for aligning, segmenting, and finally visualizing information from various magnetic resonance imaging series in order to construct 3D models of heterogeneity maps of the same tumor in OEC patients. The proposed system may be used as an adjunct digital tool by health professionals for personalized medicine, as it allows for an easy visual assessment of the heterogeneity of the examined tumor.
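
A minimal sketch of one way such a heterogeneity map could be computed once the MRI series are co-registered and the tumor is segmented (a local-standard-deviation texture measure is used here purely as a stand-in for the study's quantitative characteristics):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def heterogeneity_map(volume, size=5):
    """Voxel-wise local standard deviation as a simple heterogeneity surrogate."""
    v = volume.astype(float)
    mean = uniform_filter(v, size)
    mean_sq = uniform_filter(v * v, size)
    return np.sqrt(np.clip(mean_sq - mean * mean, 0, None))

volume = np.random.rand(32, 64, 64)        # stand-in for a registered MRI volume
print(heterogeneity_map(volume).shape)     # same shape, ready to render as a 3D overlay
```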

Keywords: image segmentation, ovarian epithelial cancer, quantitative characteristics, image registration, tumor visualization

Procedia PDF Downloads 213
5369 High-Temperature Behavior of Boiler Steel by Friction Stir Processing

Authors: Supreet Singh, Manpreet Kaur, Manoj Kumar

Abstract:

High-temperature corrosion is an important material degradation mechanism experienced in thermal power plants and other energy generation sectors. Metallic materials such as ferritic steels have desirable properties such as easy fabrication, machinability, and low cost, but a serious drawback of these materials is the deterioration of their properties arising from interaction with the environment. These metallic materials do not endure high temperatures for extended periods of time because of their poor corrosion resistance. Friction stir processing (FSP) has emerged as a potent means of surface modification and microstructure control in the thermo-mechanically affected zones of various metal alloys. In the current research work, FSP was carried out on boiler tube material SA 210 Grade A1, which is regularly used in thermal power plants. The aim was to strengthen SA 210 Grade A1 boiler steel through microstructural refinement by FSP and to analyze its effect on high-temperature corrosion behavior. The high-temperature corrosion performance of the unprocessed and FSPed specimens was evaluated in the laboratory using a molten salt environment of Na₂SO₄-82%Fe₂(SO₄)₃. The unprocessed and FSPed low-carbon Grade A1 steel specimens were evaluated in terms of microstructure, corrosion resistance, and mechanical properties such as hardness and tensile strength. In-depth characterization was performed by EBSD, SEM/EDS, and X-ray mapping analyses with the aim of proposing the mechanism behind the high-temperature corrosion behavior of the FSPed steel.

Keywords: boiler steel, characterization, corrosion, EBSD/SEM/EDS/XRD, friction stir processing

Procedia PDF Downloads 238
5368 Reduction of Residual Stress by Variothermal Processing and Validation via Birefringence Measurement Technique on Injection Molded Polycarbonate Samples

Authors: Christoph Lohr, Hanna Wund, Peter Elsner, Kay André Weidenmann

Abstract:

Injection molding is one of the most commonly used techniques in industrial polymer processing. In the conventional injection molding process, the liquid polymer is injected into the cavity of the mold, where it immediately starts hardening at the cooled walls. To compensate for the shrinkage, which is caused predominantly by the immediate cooling, holding pressure is applied. Throughout this process, residual stresses are produced by the temperature difference between the polymer melt and the injection mold and by the relocation of the polymer chains, which were oriented by the high process pressures and injection speeds. These residual stresses often weaken or change the structural behavior of the parts or lead to deformation of components. One solution for reducing the residual stresses is variothermal processing. Here the mold is heated, i.e., to near or above the glass transition temperature of the polymer, the polymer is injected, and before opening the mold and ejecting the part, the mold is cooled. For the next cycle, the mold is heated again and the procedure repeats. The rapid heating and cooling of the mold are realized indirectly by convection of heated and cooled liquid (here: water) which is pumped through fluid channels underneath the mold surface. In this paper, the influence of variothermal processing on the residual stresses is analyzed with samples at a larger scale (500 mm x 250 mm x 4 mm). In addition, the influence of functional elements, such as abrupt changes in wall thickness, bosses, and ribs, on the residual stress is examined. For this purpose, polycarbonate samples are produced by variothermal and isothermal processing. The melt is injected into a heated mold, which in our case has a temperature varying between 70 °C and 160 °C. After the filling of the cavity, the closed mold is cooled down to between 70 °C and 100 °C. The pressure and temperature inside the mold are monitored and evaluated with cavity sensors. The residual stresses of the produced samples are visualized by birefringence, which exploits the effect of stress on the refractive index of the polymer. The colorful spectrum can be revealed by placing the sample between a polarized light source and a second polarization filter. To show the processing effects on the reduction of residual stress, the birefringence images of the isothermally and variothermally produced samples are compared and evaluated. In this comparison, the variothermally produced samples show fewer maxima in each color spectrum than the isothermally produced samples, which indicates that the residual stress of the variothermally produced samples is lower.

Keywords: birefringence, injection molding, polycarbonate, residual stress, variothermal processing

Procedia PDF Downloads 283
5367 Understanding the Heart of the Matter: A Pedagogical Framework for Apprehending Successful Second Language Development

Authors: Cinthya Olivares Garita

Abstract:

Untangling language processing in second language development has been either a taken-for-granted and overlooked task for some English language teaching (ELT) instructors or a considerable feat for others. From the most traditional language instruction to the most communicative methodologies, how to assist L2 learners in processing language in the classroom has become a challenging matter in second language teaching. Amidst an ample array of methods, strategies, and techniques to teach a target language, finding a suitable model to lead learners to process, interpret, and negotiate meaning to communicate in a second language has imposed a great responsibility on language teachers; committed teachers are those who are aware of their role in equipping learners with the appropriate tools to communicate in the target language in a 21stcentury society. Unfortunately, one might find some English language teachers convinced that their job is only to lecture students; others are advocates of textbook-based instruction that might hinder second language processing, and just a few might courageously struggle to facilitate second language learning effectively. Grounded on the most representative empirical studies on comprehensible input, processing instruction, and focus on form, this analysis aims to facilitate the understanding of how second language learners process and automatize input and propose a pedagogical framework for the successful development of a second language. In light of this, this paper is structured to tackle noticing and attention and structured input as the heart of processing instruction, comprehensible input as the missing link in second language learning, and form-meaning connections as opposed to traditional grammar approaches to language teaching. The author finishes by suggesting a pedagogical framework involving noticing-attention-comprehensible-input-form (NACIF based on their acronym) to support ELT instructors, teachers, and scholars on the challenging task of facilitating the understanding of effective second language development.

Keywords: second language development, pedagogical framework, noticing, attention, comprehensible input, form

Procedia PDF Downloads 29
5366 The Design of Imaginable Urban Road Landscape

Authors: Wang Zhenzhen, Wang Xu, Hong Liangping

Abstract:

With the rapid development of cities, the way people commute has changed greatly; meanwhile, people in the contemporary world have come to demand more in both physical and psychological terms. However, the current urban road landscape ignores these changes: road landscape elements are often boring, confusing, and fragmented, and they lack integrity and hierarchy. Under such circumstances, in order to shape beautiful, identifiable, and unique road landscapes, this article concentrates on the target of imaginability. This paper analyses the main elements of the urban road landscape, the concept of the image and its generation mechanism, and then discusses the necessity and connotation of building an imaginable urban road landscape as well as the main problems existing in the current urban road landscape in terms of imaginability. Finally, this paper proposes in detail, based on a specific case, how to design an imaginable urban road landscape.

Keywords: identifiability, imaginability, road landscape, the image of the city

Procedia PDF Downloads 442
5365 A DNA-Based Nano-biosensor for the Rapid Detection of the Dengue Virus in Mosquito

Authors: Lilia M. Fernando, Matthew K. Vasher, Evangelyn C. Alocilja

Abstract:

This paper describes the development of a DNA-based nanobiosensor to detect the dengue virus in mosquitoes using electrically active magnetic (EAM) nanoparticles as the concentrator and electrochemical transducer. The biosensor detection encompasses two sets of oligonucleotide probes that are specific to the dengue virus: the detector probe labeled with the EAM nanoparticles and the biotinylated capture probe. The DNA targets are doubly hybridized to the detector and capture probes and concentrated from nonspecific DNA fragments by applying a magnetic field. Subsequently, the sandwiched DNA targets (EAM-detector probe–DNA target–capture probe-biotin) are captured on streptavidin-modified screen-printed carbon electrodes through the biotinylated capture probes. Detection is achieved electrochemically by measuring the oxidation-reduction signal of the EAM nanoparticles. Results indicate that the biosensor is able to detect the redox signal of the EAM nanoparticles at dengue DNA concentrations as low as 10 ng/µl.

Keywords: dengue, magnetic nanoparticles, mosquito, nanobiosensor

Procedia PDF Downloads 366
5364 Statistical Modeling of Mobile Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Understanding the statistics of non-isotropic scattering multipath channels that fade randomly with respect to time, frequency, and space in a mobile environment is crucial for the accurate detection of received signals in wireless and cellular communication systems. In this paper, we derive stochastic models for the probability density function (PDF) of the shift in the carrier frequency caused by the Doppler effect on the received illuminating signal in the presence of a dominant line of sight. Our derivation is based on a generalized Clarke's model and a two-wave partially developed scattering model, where the statistical distribution of the frequency shift is shown to be consistent with the power spectral density of the Doppler-shifted signal.
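
For reference, the classical isotropic Clarke (Jakes) case that the paper generalizes can be sketched numerically as follows (the maximum Doppler shift is an illustrative value; the non-isotropic, line-of-sight model derived in the paper is not reproduced here):

```python
import numpy as np
import matplotlib.pyplot as plt

f_m = 100.0                                       # maximum Doppler shift in Hz (illustrative)
f = np.linspace(-0.999 * f_m, 0.999 * f_m, 1000)

# Classical isotropic Clarke model: the Doppler-shift PDF (and, up to scaling,
# the Jakes power spectral density) is p(f) = 1 / (pi * sqrt(f_m^2 - f^2)).
pdf = 1.0 / (np.pi * np.sqrt(f_m ** 2 - f ** 2))

plt.plot(f, pdf)
plt.xlabel("Doppler shift (Hz)")
plt.ylabel("probability density")
plt.show()
```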

Keywords: Doppler shift, filtered Poisson process, generalized Clarke's model, non-isotropic scattering, partially developed scattering, Rician distribution

Procedia PDF Downloads 372
5363 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique is investigated to such type of flows, at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is very difficult to find in literature. Besides, most of the situations where the Reynolds number effect is evaluated in separated flows are in numerical modelling. The ADV technique has the advantage in providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds Numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream the step were reproduced. The post-processing of the AVD records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out, for the measured variables, using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were relatively low, in general. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique and a good agreement was found. The ADV technique proved to be able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data, to obtain low noise levels, thus decreasing the uncertainty.
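
A brief sketch of two of the post-processing steps mentioned above, computing the power spectral density of a stream-wise record and estimating the uncertainty of its mean with a moving block bootstrap (the record, sampling rate, and block length are illustrative stand-ins for the Vectrino data):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
u = 0.25 + 0.03 * rng.standard_normal(20_000)     # stand-in for a stream-wise ADV record (m/s)
fs = 200.0                                        # assumed sampling rate (Hz)

# Noise-floor check: power spectral density of the stream-wise component.
f, pxx = welch(u, fs=fs, nperseg=1024)
print("high-frequency noise floor ~", pxx[-10:].mean())

def moving_block_bootstrap(x, block=200, n_boot=1000, rng=rng):
    """Bootstrap the uncertainty of the mean of a correlated series using blocks
    (a minimal version of the moving block bootstrap cited in the abstract)."""
    n = len(x)
    starts = rng.integers(0, n - block, size=(n_boot, n // block))
    means = np.array([np.concatenate([x[s:s + block] for s in row]).mean() for row in starts])
    return means.std()

print("mean u = %.4f +/- %.4f m/s" % (u.mean(), moving_block_bootstrap(u)))
```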

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 148
5362 Representation of the Iranian Community in the Videos of the Instagram Page of the World Health Organization Representative in Iran

Authors: Naeemeh Silvari

Abstract:

The spread of the coronavirus epidemic caused many aspects of social life around the world to face various challenges. In this regard, and in order to improve people's living conditions, the World Health Organization has tried to publish the necessary guidance for its audiences around the world through its media capacities. Considering the importance of cultural differences in health communication and the distinct needs of people in different societies, some content was produced and published exclusively for particular audiences. This research studies, as a case study, six videos published on the official Instagram page of the World Health Organization representative in Iran. The published content has little semantic affinity with Iranian culture and tends to present a uniform image of the Middle East dominated by the image of the culture of the developing Arab countries.

Keywords: corona, representation, semiotics, instagram, health communication

Procedia PDF Downloads 93
5361 Development of Fake News Model Using Machine Learning through Natural Language Processing

Authors: Sajjad Ahmed, Knut Hinkelmann, Flavio Corradini

Abstract:

Fake news detection research is still at an early stage, as this is a relatively new phenomenon that has attracted society's interest. Machine learning helps to solve complex problems and to build AI systems nowadays, especially in cases where we have tacit knowledge or knowledge that is not explicitly known. We used machine learning algorithms for the identification of fake news and applied three classifiers: Passive Aggressive, Naïve Bayes, and Support Vector Machine. Simple classification is not completely adequate for fake news detection because general-purpose classification methods are not specialized for fake news. With the integration of machine learning and text-based processing, we can detect fake news and build classifiers that can classify the news data. Text classification mainly focuses on extracting various features of text and then incorporating those features into classification. The big challenge in this area is the lack of an efficient way to differentiate between fake and non-fake news due to the unavailability of corpora. We applied three different machine learning classifiers to two publicly available datasets. Experimental analysis based on the existing datasets indicates very encouraging and improved performance.
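
A compact scikit-learn sketch of the described setup, TF-IDF text features fed to the three named classifiers (the toy headlines are placeholders for the two public datasets used in the study):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy labelled headlines; the paper instead trains on two publicly available corpora.
texts = ["Central bank adjusts interest rate by a quarter point",
         "Aliens endorse local election candidate",
         "City council approves new transit budget",
         "Miracle fruit cures all diseases overnight",
         "Researchers publish peer-reviewed climate study",
         "Celebrity reveals moon is actually a hologram"]
labels = ["real", "fake", "real", "fake", "real", "fake"]

for clf in (PassiveAggressiveClassifier(max_iter=1000), MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(stop_words="english"), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["Politician seen riding a dragon to a summit"]))
```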

Keywords: fake news detection, natural language processing, machine learning, classification techniques

Procedia PDF Downloads 167
5360 Improving the LDMOS Temperature Compensation Bias Circuit to Optimize Back-Off

Authors: Antonis Constantinides, Christos Yiallouras, Christakis Damianou

Abstract:

The application of today's semiconductor transistors in high-power UHF DVB-T linear amplifiers has evolved significantly through the use of LDMOS technology. This gives engineers the option of designing a single-transistor signal amplifier with output power and linearity that were previously unobtainable using bipolar junction transistors or first-generation MOSFETs. The stability of the quiescent current over thermal variations of the LDMOS guarantees robust operation in any topology of DVB-T signal amplifier. Otherwise, progressively uncontrolled heat dissipation on the LDMOS case can degrade the amplifier's crucial parameters in terms of gain, linearity, and RF stability, resulting in dysfunctional operation or total destruction of the unit. This paper presents an approach more sophisticated than the traditional biasing circuits used so far in LDMOS DVB-T amplifiers. It utilizes microprocessor control, providing stability in topologies where the quiescent drain current (IDQ) must be perfectly accurate.
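
A minimal sketch of the kind of closed-loop correction a microprocessor-controlled bias circuit performs (the proportional gain, units, and current values are illustrative assumptions, not figures from the paper):

```python
def corrected_gate_bias(v_gs, i_dq_measured_ma, i_dq_target_ma, k_p=0.0005):
    """One step of a proportional bias-correction loop (illustrative only).

    A microcontroller reads the quiescent drain current via a sense ADC and nudges
    the gate-bias DAC so that IDQ stays at its target as the LDMOS heats up.
    """
    error_ma = i_dq_target_ma - i_dq_measured_ma
    return v_gs + k_p * error_ma          # volts of correction per mA of error

# Example: the device warmed up and IDQ drifted from 800 mA to 860 mA,
# so the gate bias is lowered slightly to pull IDQ back to its target.
print(corrected_gate_bias(v_gs=2.650, i_dq_measured_ma=860, i_dq_target_ma=800))
```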

Keywords: LDMOS, amplifier, back-off, bias circuit

Procedia PDF Downloads 339
5359 Biosignal Recognition for Personal Identification

Authors: Hadri Hussain, M.Nasir Ibrahim, Chee-Ming Ting, Mariani Idroas, Fuad Numan, Alias Mohd Noor

Abstract:

A biometric security system has become an important application in client identification and verification systems. A conventional biometric system is normally based on unimodal biometrics, depending on either behavioural or physiological information for authentication purposes. The behavioural biometric depends on human body signals such as speech, while biosignal biometrics include the electrocardiogram (ECG) and the phonocardiogram or heart sound (HS). The speech signal is commonly used in biometric recognition systems, while the ECG and the HS have been used to identify a person's diseases uniquely related to their cluster. However, the conventional biometric system is liable to spoof attacks that affect the performance of the system. Therefore, a multimodal biometric security system is developed, based on the biometric signals of ECG, HS, and speech. The biosignal data involved in the biometric system are initially segmented, and the Mel Frequency Cepstral Coefficients (MFCC) method is used to extract features from each segment. A Hidden Markov Model (HMM) is used to model each client and to classify the unknown input with respect to the models. The recognition system involves training and testing sessions, known as client identification (CID). In this project, twenty clients were tested with the developed system. The best overall performance at 44 kHz was 93.92% for ECG, and the worst overall performance was 88.47%, also for ECG. These results were compared with those obtained when the number of clients was increased, for which the best overall performance at 44 kHz was 90.00% for HS and the worst overall performance was 79.91% for ECG. It can be concluded that the choice of modality in a multimodal biometric system has a substantial effect on its performance and that, with the increase in data, even with higher-frequency sampling, the performance still decreased slightly, as predicted.
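
A condensed sketch of the MFCC-plus-HMM pipeline described above, using librosa and hmmlearn (file names, segment format, and model sizes are placeholders; the paper's exact segmentation and training protocol are not reproduced):

```python
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(path, sr=44100, n_mfcc=13):
    """Load a biosignal/speech segment and return its MFCC frames (frames x coefficients)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# Train one HMM per enrolled client on that client's training segments (paths are placeholders).
clients = {"client01": ["client01_ecg_seg1.wav", "client01_ecg_seg2.wav"],
           "client02": ["client02_ecg_seg1.wav", "client02_ecg_seg2.wav"]}

models = {}
for name, files in clients.items():
    feats = [mfcc_features(f) for f in files]
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=100)
    model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
    models[name] = model

# Identification: score an unknown segment against every client model and pick the best.
test = mfcc_features("unknown_segment.wav")
print(max(models, key=lambda name: models[name].score(test)))
```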

Keywords: electrocardiogram, phonocardiogram, hidden markov model, mel frequency cepstral coefficients, client identification

Procedia PDF Downloads 280
5358 Investigating Vehicle-Bicyclist Conflicts Using LiDAR Sensor Technology at Signalized Intersections

Authors: Alireza Ansariyar, Mansoureh Jeihani

Abstract:

Light Detection and Ranging (LiDAR) sensors are capable of recording traffic data, including the number of passing vehicles and bicyclists, the speed of vehicles and bicyclists, and the number of conflicts between both road users. In order to collect real-time traffic data and investigate the safety of different road users, a LiDAR sensor was installed at the Cold Spring Ln – Hillen Rd intersection in Baltimore City. The frequency and severity of the collected real-time conflicts were analyzed, and the results highlighted that 122 conflicts were recorded over a 10-month interval from May 2022 to February 2023. Using an innovative image-processing algorithm, a new safety Measure of Effectiveness (MOE) was proposed to recognize the critical zones for bicyclists entering each zone. Considering the trajectories of the conflicts, the results of the analysis demonstrated that conflicts in the northern approach (zone N) are more frequent and severe. Additionally, sunny weather is more likely to be associated with severe vehicle-bike conflicts.
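
The keywords mention a post-encroachment-time (PET) threshold; a minimal sketch of how such a surrogate-safety measure is computed from LiDAR trajectory timestamps is shown below (the timestamps and the 5 s threshold are illustrative assumptions):

```python
def post_encroachment_time(t_first_exit, t_second_entry):
    """Post-Encroachment Time: gap between the first road user leaving the conflict
    area and the second one entering it. A smaller PET means a more severe conflict."""
    return t_second_entry - t_first_exit

# Timestamps (s) extracted from LiDAR trajectories; the values are illustrative.
vehicle_exits_zone = 13.8
bicyclist_enters_zone = 15.1
pet = post_encroachment_time(vehicle_exits_zone, bicyclist_enters_zone)

threshold = 5.0        # assumed PET threshold (s) below which an event counts as a conflict
print(pet, pet < threshold)
```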

Keywords: LiDAR sensor, post encroachment time threshold (PET), vehicle-bike conflicts, a measure of effectiveness (MOE), weather condition

Procedia PDF Downloads 236
5357 Kinoform Optimisation Using Gerchberg-Saxton Iterative Algorithm

Authors: M. Al-Shamery, R. Young, P. Birch, C. Chatwin

Abstract:

Computer Generated Holography (CGH) is employed to create digitally defined coherent wavefronts. A CGH can be created using different techniques, such as the detour-phase technique or direct phase modulation to create a kinoform. The detour-phase technique was one of the first techniques used to generate holograms digitally. The disadvantage of this technique is that the reconstructed image often has poor quality due to the limited dynamic range that can be recorded using a medium with reasonable spatial resolution. The kinoform (phase-only hologram) is an alternative technique. In this method, the phase of the original wavefront is recorded but the amplitude is constrained to be constant. The original object does not need to exist physically, so the kinoform can be used to reconstruct an almost arbitrary wavefront. However, the image reconstructed by this technique contains high levels of noise and is not identical to the reference image. To improve the reconstruction quality of the kinoform, iterative techniques such as the Gerchberg-Saxton (GS) algorithm are employed. In this paper, the GS algorithm is described for the optimisation of a kinoform used for the reconstruction of a complex wavefront. Iterations of the GS algorithm are applied to determine the phase at a plane (with a known amplitude distribution, often taken as uniform) that satisfies given phase and amplitude constraints in a corresponding Fourier plane. The GS algorithm can be used in this way to enhance the reconstruction quality of the kinoform. Different images are employed as the reference object, and their kinoforms are synthesised using the GS algorithm. The quality of the reconstructed images is quantified to demonstrate the enhanced reconstruction quality achieved by using this method.
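
A compact NumPy sketch of the GS iteration for a kinoform (the target image, grid size, and iteration count are illustrative; the constraints follow the description above, uniform amplitude at the hologram plane and the reference-image amplitude in the Fourier plane):

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Minimal Gerchberg-Saxton loop for a phase-only hologram (kinoform)."""
    phase = np.exp(1j * 2 * np.pi * np.random.rand(*target_amplitude.shape))
    for _ in range(iterations):
        far_field = np.fft.fft2(phase)
        # Impose the target amplitude in the Fourier (image) plane, keep its phase.
        far_field = target_amplitude * np.exp(1j * np.angle(far_field))
        near_field = np.fft.ifft2(far_field)
        # Impose the kinoform constraint: unit amplitude at the hologram plane.
        phase = np.exp(1j * np.angle(near_field))
    return np.angle(phase)      # the optimised kinoform phase pattern

target = np.zeros((128, 128))
target[40:88, 40:88] = 1.0      # a simple square as the reference image
kinoform = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * kinoform)))
```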

Keywords: computer generated holography, digital holography, Gerchberg-Saxton algorithm, kinoform

Procedia PDF Downloads 533