Search results for: deep convolutional features
4550 Event Related Brain Potentials Evoked by Carmen in Musicians and Dancers
Authors: Hanna Poikonen, Petri Toiviainen, Mari Tervaniemi
Abstract:
Event-related potentials (ERPs) evoked by simple tones in the brain have been extensively studied. However, in reality the music surrounding us is spectrally and temporally complex and dynamic. Thus, research using natural sounds is crucial to understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation, which, in various forms, has always been an essential part of different cultures. In addition to sensory responses, music elicits vast cognitive and emotional processes in the brain. When compared to laymen, professional musicians have stronger ERP responses when processing individual musical features in simple tone sequences, such as changes in pitch, timbre and harmony. Here we show that the ERP responses evoked by rapid changes in individual musical features are more intense in musicians than in laymen, also while listening to long excerpts of the composition Carmen. Interestingly, for professional dancers, the amplitudes of the cognitive P300 response are weaker than for musicians but still stronger than for laymen. Also, the cognitive P300 latencies of musicians are significantly shorter, whereas the latencies of laymen are significantly longer. In contrast, the sensory N100 responses do not differ in amplitude or latency between musicians and laymen. These results, acquired with a novel ERP methodology for natural music, suggest that we can take the leap of studying the brain with long pieces of natural music also with the ERP method of electroencephalography (EEG), as has already been done with functional magnetic resonance imaging (fMRI), since these two brain imaging modalities complement each other.
Keywords: electroencephalography, expertise, musical features, real-life music
Procedia PDF Downloads 483
4549 Anatomical, Light and Scanning Electron Microscopical Study of Ostrich (Struthio camelus) Integument
Authors: Samir El-Gendy, Doaa Zaghloul
Abstract:
The current study dealt with the gross and microscopic anatomy of the integument of the male ostrich, in addition to the histological features of different skin areas examined by light microscopy and SEM. The ostrich skin is characterized by prominent feather follicles and bristles. The number of feather follicles per cm² was determined in different regions. The integument of the ostrich had many modifications, which appeared as callosities and scales, the nail and toe pads. The callosities were sternal, pubic and Achilles tendon callosities. Vacuolated epidermal cells were seen mainly in the skin of the legs and, to a lesser extent, in the skin of the back and Achilles areas. A higher lipogenic potential was expressed by the epidermis of glabrous areas of ostrich skin. Dermal papillae were found in the skin of the feathered areas of the neck and back; this is not a common finding in bird skin and may provide resistance against shearing forces in these regions of ostrich skin. The thickness of the keratin layer varied: it was thick and characteristically loose in the skin of the legs, very thin and wavy at the neck, thicker and more compact at the Achilles skin area, scales and toe pads, and thickest, very dense and wavy, at the nail. The dermis consisted of a superficial layer of dense irregular connective tissue, characterized by the presence of many vacuoles of different sizes just under the basal lamina of the epidermis, and a deep layer of dense regular connective tissue. This result suggests the presence of fat droplets in this layer, which may compensate for the lack of an effective barrier against cutaneous water loss in the epidermis.
Keywords: ostrich, light microscopy, scanning electron microscopy, integument, skin modifications
Procedia PDF Downloads 244
4548 Numerical Determination of Transition of Cup Height between Hydroforming Processes
Authors: H. Selcuk Halkacı, Mevlüt Türköz, Ekrem Öztürk, Murat Dilmec
Abstract:
Various attempts to address the low formability of lightweight materials such as aluminium and magnesium alloys are being investigated in many studies. Advanced forming processes such as hydroforming are one of these attempts. In recent decades, the sheet hydroforming process has attracted increasing interest, particularly in the automotive and aerospace industries. This process has many advantages, such as enhanced formability, the capability to form complex parts, higher dimensional accuracy and surface quality, reduced tool costs and reduced die wear compared to conventional sheet metal forming processes. There are two types of sheet hydroforming. One of them is hydromechanical deep drawing (HDD), a special drawing process in which a pressurized fluid medium is used in place of one of the die halves of the conventional deep drawing (CDD) process. The other is sheet hydroforming with die (SHF-D), in which the blank is formed by the action of fluid pressure and takes the shape of the die half. In this study, the transition of cup height with respect to cup diameter between the processes was determined by simulating the processes with finite element analysis. First, the SHF-D process was simulated for a 40 mm cup diameter at different cup heights changing from 10 mm to 30 mm, and the cup height-to-diameter ratio at which a successful forming can no longer be obtained was determined. Then the same ratio was checked for a different cup diameter of 60 mm. Next, the thickness distributions of the cups formed by the SHF-D and HDD processes were compared for these cup heights. Consequently, the analyses showed that the thickness distribution in the HDD process was more uniform.
Keywords: finite element analysis, HDD, hydroforming, sheet metal forming, SHF-D
Procedia PDF Downloads 429
4547 The Application of a Hybrid Neural Network for Recognition of a Handwritten Kazakh Text
Authors: Almagul Assainova, Dariya Abykenova, Liudmila Goncharenko, Sergey Sybachin, Saule Rakhimova, Abay Aman
Abstract:
The recognition of handwritten Kazakh text is a relevant objective today for the digitization of materials. The study presents a hybrid neural network model for handwriting recognition, which includes a convolutional neural network and a multi-layer perceptron. Each network includes 1024 input neurons and 42 output neurons. The model is implemented in a program written in the Python programming language using the EMNIST database and the NumPy, Keras, and TensorFlow modules. The network was trained on such specific letters of the Kazakh alphabet as ә, ғ, қ, ң, ө, ұ, ү, h, і. The neural network model and the program created on its basis can be used in electronic document management systems to digitize Kazakh text.
Keywords: handwriting recognition system, image recognition, Kazakh font, machine learning, neural networks
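The abstract does not include an architecture listing; the following is a minimal Keras sketch of one way such a CNN + multi-layer perceptron hybrid could be wired, assuming 32×32 grayscale inputs (1024 pixels) and 42 output classes. The layer sizes are illustrative, not the authors' configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical input: 32x32 grayscale letter images (1024 pixels), 42 classes.
inputs = keras.Input(shape=(32, 32, 1))

# Convolutional branch extracts local stroke features.
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Multi-layer perceptron branch works on the raw 1024-pixel vector.
flat = layers.Flatten()(inputs)
y = layers.Dense(256, activation="relu")(flat)

# Fuse both branches and classify into 42 letter classes.
merged = layers.Concatenate()([x, y])
outputs = layers.Dense(42, activation="softmax")(merged)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

Concatenating the convolutional features with a dense branch over the raw pixel vector is one common way to combine the two sub-networks before the 42-way softmax.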
Procedia PDF Downloads 262
4546 The Traveling Business Websites Quality that Effect to Overall Impression of the Tourist in Thailand
Authors: Preecha Phongpeng
Abstract:
The objectives of this research are to assess the prevalence of travel business websites in Thailand and to investigate and evaluate their quality. The sample size includes 323 websites from a population of 1,458 websites. The study covers several types of travel business websites: 78 general travel agents, 30 online reservation travel agents, 205 hotels, 7 airlines, and 3 car-rental companies with nation-wide operation. The findings indicated that e-tourism in Thailand is at its growth stage, with only 13% of travel businesses having websites and 28% of them providing e-mail, and that the quality of travel business websites in Thailand was at an average level. Seven common problems were found in the websites: lack of essential travel information, insufficient transportation information, lack of navigation tools, lack of link pages to other organizations, lack of safety features, unclear online booking functions, and lack of special features.
Keywords: traveling business, website evaluation, e-commerce, e-tourism
Procedia PDF Downloads 302
4545 Comparative Study Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBC) are the most common type of blood cell and are the most intensively studied in cell biology. The lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as “anemia”. Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape. The second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We have provided an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: K-nearest neighbors algorithm, radial basis function neural network, red blood cells, support vector machine
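As an illustration only (the paper uses WEKA, not Python), a comparison of this kind can be sketched with scikit-learn, with random placeholder features standing in for the extracted geometrical/textural measurements and an MLP standing in for WEKA's RBF network implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Hypothetical feature matrix: rows are cells, columns are geometrical/textural features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # placeholder for extracted RBC features
y = rng.integers(0, 2, size=200)        # 0 = normal, 1 = abnormal (anemic)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    # An MLP stands in here for WEKA's RBF network.
    "RBF-like network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```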
Procedia PDF Downloads 410
4544 Off-Topic Text Detection System Using a Hybrid Model
Authors: Usama Shahid
Abstract:
Be it written documents, news columns, or students' essays, verifying the content can be a time-consuming task. Apart from spelling and grammar mistakes, the proofreader is also supposed to verify whether the content included in the essay or document is relevant or not. Irrelevant content in any document or essay is referred to as off-topic text, and in this paper, we address the problem of off-topic text detection in a document using machine learning techniques. Our study aims to identify off-topic content in a document using an Echo State Network model, and we also compare the results with other models. A previous study uses Convolutional Neural Networks and TF-IDF to detect off-topic text. We rearrange the existing datasets and apply new classifiers along with new word embeddings to existing and new datasets in order to compare the results with the previously existing CNN model.
Keywords: off topic, text detection, echo state network, machine learning
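As a hedged illustration of the simpler end of this comparison (not the paper's Echo State Network), a TF-IDF baseline can flag paragraphs whose similarity to the rest of the document is low; the example text and threshold below are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraphs = [
    "The essay discusses renewable energy policy in Europe.",
    "Solar and wind capacity have grown rapidly over the last decade.",
    "My favourite football team won the championship last year.",  # off-topic
    "Feed-in tariffs accelerated the adoption of rooftop solar panels.",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(paragraphs)

# Compare each paragraph against the centroid of the whole document.
centroid = np.asarray(X.mean(axis=0))
sims = cosine_similarity(X, centroid).ravel()

THRESHOLD = 0.1                        # arbitrary cut-off for illustration
for text, s in zip(paragraphs, sims):
    flag = "OFF-TOPIC?" if s < THRESHOLD else "on-topic"
    print(f"{s:.2f}  {flag}  {text}")
```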
Procedia PDF Downloads 85
4543 Alumina Supported Cu-Mn-Cr Catalysts for CO and VOCs oxidation
Authors: Krasimir Ivanov, Elitsa Kolentsova, Dimitar Dimitrov, Petya Petrova, Tatyana Tabakova
Abstract:
This work studies the effect of chemical composition on the activity and selectivity of γ-alumina supported CuO/MnO2/Cr2O3 catalysts toward the deep oxidation of CO, dimethyl ether (DME) and methanol. The catalysts were prepared by impregnation of the support with an aqueous solution of copper nitrate, manganese nitrate and CrO3 under different conditions. Thermal, XRD and TPR analyses were performed. The catalytic measurements of single-compound oxidation were carried out on continuous-flow equipment with a four-channel isothermal stainless steel reactor. Flow-line equipment with an adiabatic reactor was used for the simultaneous oxidation of all compounds under conditions that closely mimic industrial ones. The reactant and product gases were analyzed by means of on-line gas chromatographs. On the basis of the XRD analysis, it can be concluded that the active component of the mixed Cu-Mn-Cr/γ-alumina catalysts consists of at least six compounds – CuO, Cr2O3, MnO2, Cu1.5Mn1.5O4, Cu1.5Cr1.5O4 and CuCr2O4 – depending on the Cu/Mn/Cr molar ratio. Chemical composition strongly influences the catalytic properties, and this influence varies considerably between the different processes. The rate of CO oxidation decreases rapidly with increasing chromium content in the active component, while the reverse trend was observed for DME. It was concluded that the best compromise is offered by catalysts with a Cu/(Mn + Cr) molar ratio of 1:5 and a Mn/Cr molar ratio from 1:3 to 1:4.
Keywords: Cu-Mn-Cr oxide catalysts, volatile organic compounds, deep oxidation, dimethyl ether (DME)
Procedia PDF Downloads 369
4542 Challenges of Teaching and Learning English Speech Sounds in Five Selected Secondary Schools in Bauchi, Bauchi State, Nigeria
Authors: Mairo Musa Galadima, Phoebe Mshelia
Abstract:
In Nigeria, the national policy on education stipulates that kindergarten-primary schools and the legislature are to use the three popular Nigerian languages, namely Hausa, Igbo, and Yoruba. However, the English language seems to be preferred, and this calls for this paper. Attempts were made to draw out the challenges faced by learners in understanding English speech sounds and in using them to communicate effectively in English, using five selected secondary schools in Bauchi. It was discovered that challenges abound in the wrong use of stress and intonation and in the transfer of phonetic features from the learners' first language. Others are inadequately qualified teachers and a lack of relevant materials, including textbooks. It is recommended that teachers of English lay more emphasis on the teaching of supra-segmental features and be encouraged to go for further studies, seminars and refresher courses.
Keywords: stress and intonation, phonetic and challenges, teaching and learning English, secondary schools
Procedia PDF Downloads 352
4541 Application to Monitor the Citizens for Corona and Get Medical Aids or Assistance from Hospitals
Authors: Vathsala Kaluarachchi, Oshani Wimalarathna, Charith Vandebona, Gayani Chandrarathna, Lakmal Rupasinghe, Windhya Rankothge
Abstract:
It is the fundamental function of a monitoring system to allow users to collect and process data. A worldwide threat, the corona outbreak has wreaked havoc in Sri Lanka, and the situation has gotten out of hand. Since the start of the epidemic, the Sri Lankan government has been unable to establish a systematic system for monitoring corona patients and providing emergency care in the event of an outbreak. Most patients have been kept at home because of the high number of cases reported in the country, but they do not yet have access to a functioning medical system. This has resulted in an increase in the number of patients who have been left untreated because of a lack of medical care. According to our survey, the absence of competent medical monitoring is currently the biggest cause of mortality for many people. As a result, a smartphone app for analyzing a patient's state and determining whether they should be hospitalized will be developed. Using the data supplied, we aim to send an alert letter or SMS to the hospital once the system identifies such a patient. Since we know what those patients need and when they need it, we will set up a desktop program at the hospital to monitor their progress. Deep learning, image processing and application development, natural language processing, and blockchain management are some of the components of the research solution. The purpose of this research paper is to introduce a mechanism to connect hospitals and patients even when they are physically apart. Data security and user-friendliness are further enhanced through blockchain and NLP.
Keywords: blockchain, deep learning, NLP, monitoring system
Procedia PDF Downloads 133
4540 The Impact of Scientific Content of National Geographic Channel on Drawing Style of Kindergarten Children
Authors: Ahmed Amin Mousa, Mona Yacoub
Abstract:
This study tracks children's drawing style through what they drew after being introduced to 16 items of visual content from National Geographic Abu Dhabi Channel programs, and it examines the features that changed in their drawings compared with drawings made before the visual activity. The researchers used the Goodenough-Harris Test to analyse the children's drawings and to extract the features that changed in their drawings before and after the visual content. The results showed a positive change, especially in the shapes of animals and their properties: children became more aware of animals' shapes. The study sample consisted of 220 kindergarten children, 130 girls and 90 boys, at the Orman Experimental Language School in Dokki, Giza, Egypt. The results showed an 85% improvement in the children's drawings compared with their drawings before watching the videos.
Keywords: National Geographic, children drawing, kindergarten, Goodenough-Harris Test
Procedia PDF Downloads 152
4539 A Quantitative Evaluation of Text Feature Selection Methods
Authors: B. S. Harish, M. B. Revanasiddappa
Abstract:
Due to the rapid growth of text documents in digital form, automated text classification has become an important research area in the last two decades. The major challenges of text document representation are high dimensionality, sparsity, volume and semantics. Since terms are the only features that can be found in documents, the selection of good terms (features) plays a very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency and accuracy. In this paper, we present a quantitative analysis of the most widely used feature selection (FS) methods, viz. Term Frequency-Inverse Document Frequency (tf-idf), Mutual Information (MI), Information Gain (IG), Chi-Square (χ²), Term Frequency-Relevance Frequency (tfrf), Term Strength (TS), Ambiguity Measure (AM) and Symbolic Feature Selection (SFS), to classify text documents. We evaluated all the feature selection methods on standard datasets like 20 Newsgroups, the 4 University dataset and Reuters-21578.
Keywords: classifiers, feature selection, text classification
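As an illustration of how two of these criteria can be applied in practice (a sketch, not the authors' pipeline), scikit-learn's SelectKBest can rank terms by chi-square or mutual information before a simple classifier; the two-category subset and k=500 are arbitrary choices.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Two categories keep the example small; the paper uses full benchmark corpora.
data = fetch_20newsgroups(subset="train", categories=["sci.med", "rec.autos"],
                          remove=("headers", "footers", "quotes"))

for name, score_fn in [("chi-square", chi2), ("mutual information", mutual_info_classif)]:
    pipe = make_pipeline(
        TfidfVectorizer(stop_words="english"),
        SelectKBest(score_fn, k=500),   # keep the 500 highest-scoring terms
        MultinomialNB(),
    )
    acc = cross_val_score(pipe, data.data, data.target, cv=3).mean()
    print(f"{name}: accuracy with 500 selected terms = {acc:.3f}")
```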
Procedia PDF Downloads 458
4538 Deep Learning-Based Automated Structure Deterioration Detection for Building Structures: A Technological Advancement for Ensuring Structural Integrity
Authors: Kavita Bodke
Abstract:
Structural health monitoring (SHM) is experiencing growth, necessitating the development of distinct methodologies to address its expanding scope effectively. In this study, we developed automatic structural damage identification, which covers three distinct aspects of a building's structural integrity. The first pertains to the presence of fractures within the structure, the second relates to the issue of dampness within the structure, and the third involves corrosion inside the structure. This study employs image classification techniques to discern between intact and impaired structures within structural data. The aim of this research is to achieve automatic damage detection, with the probability of each damage class being present in one image. Based on these probabilities, we know which class is more strongly represented, or more affected, than the other classes. Photographs captured by a mobile camera serve as the input for the image classification system. Image classification was employed in our study to perform multi-class and multi-label classification. The objective was to categorize structural data based on the presence of cracks, moisture, and corrosion. In the context of multi-class image classification, our study employed three distinct methodologies: Random Forest, Multilayer Perceptron, and CNN. For the task of multi-label image classification, the models employed were ResNet, Xception, and Inception.
Keywords: SHM, CNN, deep learning, multi-class classification, multi-label classification
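For the multi-label case, the key design choice is sigmoid outputs with a binary cross-entropy loss, so that cracks, moisture and corrosion can be detected independently in the same image. The Keras sketch below illustrates this with a ResNet50 backbone; the input size and ImageNet weights are assumptions, not the authors' training setup.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_LABELS = 3  # crack, moisture, corrosion

# Pretrained backbone; weights and input size are illustrative choices.
backbone = keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3), pooling="avg")

inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.resnet50.preprocess_input(inputs)
x = backbone(x)
# Sigmoid (not softmax) outputs: each damage type gets its own probability,
# so an image can show cracks and moisture at the same time.
outputs = layers.Dense(NUM_LABELS, activation="sigmoid")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(multi_label=True)])
model.summary()
```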
Procedia PDF Downloads 36
4537 Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques
Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri
Abstract:
Digitization of urban features is a tedious and time-consuming process when done manually. In addition to this problem, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper makes an attempt to classify urban objects in satellite imagery using object-oriented classification techniques, in which various classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. The building layer obtained as a result of the object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology, along with logical reasoning and mathematical considerations. The results clearly depict the ability of Remote Sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features in an urban area, like high-rise buildings and multi-dwelling units. The object-oriented technique has proven to be effective and has yielded an overall efficiency of 80 percent in the classification of high-rise buildings.
Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology
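The abstract does not spell out how extracted shadows translate into building heights; a standard geometric relation used in shadow-based height estimation is height = shadow length × tan(solar elevation), sketched below with invented numbers and an arbitrary high-rise threshold.

```python
import math

def building_height(shadow_length_m: float, sun_elevation_deg: float) -> float:
    """Estimate building height from shadow length on flat terrain.

    With the sun at a given elevation angle, height = shadow_length * tan(elevation).
    """
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# Hypothetical example: a 28 m shadow measured from imagery acquired
# at a solar elevation of 55 degrees.
h = building_height(28.0, 55.0)
print(f"Estimated height: {h:.1f} m")          # ~40 m, a high-rise candidate
if h > 24.0:                                    # illustrative high-rise threshold
    print("Flag object as high-rise building")
```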
Procedia PDF Downloads 155
4536 A Research and Application of Feature Selection Based on IWO and Tabu Search
Authors: Laicheng Cao, Xiangqian Su, Youxiao Wu
Abstract:
Feature selection is one of the important problems in network security, pattern recognition, data mining and other fields. In order to remove redundant features and effectively improve the detection speed of an intrusion detection system, this paper proposes a new feature selection method based on the invasive weed optimization (IWO) algorithm and the tabu search (TS) algorithm. IWO is used as a global search and the tabu search algorithm as a local search to improve the results of the IWO algorithm. The experimental results show that the feature selection method can effectively remove redundant features from network data, reduce computation time and, while guaranteeing an accurate detection rate, effectively improve the speed of the detection system.
Keywords: intrusion detection, feature selection, IWO, tabu search
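The abstract gives no pseudo-code; as a rough illustration of the local-search half only, the sketch below runs a tabu-style refinement of a binary feature mask with a k-NN wrapper fitness on a stand-in dataset. The IWO global stage is omitted, and the dataset, tabu tenure and iteration count are invented.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(42)

def fitness(mask):
    """Wrapper fitness: cross-validated accuracy using only the selected features."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(5), X[:, mask.astype(bool)], y, cv=3).mean()

# Start from a random subset (in the paper this would come from the IWO global search).
current = rng.integers(0, 2, size=X.shape[1])
best, best_fit = current.copy(), fitness(current)
tabu, TABU_LEN = [], 5

for _ in range(20):                      # short tabu-search refinement loop
    moves = [i for i in range(X.shape[1]) if i not in tabu]
    # Evaluate single-bit flips and take the best non-tabu neighbour.
    scored = [(fitness(np.where(np.arange(X.shape[1]) == i, 1 - current, current)), i)
              for i in moves]
    f, i = max(scored)
    current[i] = 1 - current[i]
    tabu.append(i)
    if len(tabu) > TABU_LEN:
        tabu.pop(0)
    if f > best_fit:
        best, best_fit = current.copy(), f

print(f"Selected {best.sum()} of {X.shape[1]} features, CV accuracy = {best_fit:.3f}")
```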
Procedia PDF Downloads 530
4535 Improvement of Ground Truth Data for Eye Location on Infrared Driver Recordings
Authors: Sorin Valcan, Mihail Gaianu
Abstract:
Labeling is a very costly and time-consuming process which aims to generate datasets for training neural networks in several functionalities and projects. For driver monitoring system projects, the need for labeled images has a significant impact on the budget and the distribution of effort. This paper presents the modifications made to an algorithm used for the generation of ground truth data for 2D eye location on infrared images of drivers, in order to improve the quality of the data and the performance of the trained neural networks. The algorithm's restrictions become tougher, which makes it more accurate but also less consistent. The resulting dataset becomes smaller and shall not be altered by any kind of manual label adjustment before being used in the neural network training process. These changes resulted in much better performance of the trained neural networks.
Keywords: labeling automation, infrared camera, driver monitoring, eye detection, convolutional neural networks
Procedia PDF Downloads 117
4534 User-Awareness from Eye Line Tracing During Specification Writing to Improve Specification Quality
Authors: Yoshinori Wakatake
Abstract:
Many defects discovered after the release of software packages are caused by the omission of sufficient test items in test specifications. Poor test specifications are detected by manual review, which imposes a high human load. The prevention of omissions depends on the end-user awareness of test specification writers. If test specifications were written while envisioning the behavior of end-users, the number of omissions in test items would be greatly reduced. The paper pays attention to the point that writers who can achieve this differ from those who cannot, not only in the richness of their descriptions but also in their gaze information. It proposes a method to estimate the degree of user-awareness of writers through the analysis of their gaze information when writing test specifications. We conducted an experiment to obtain the gaze information of writers of test specifications. Test specifications are automatically classified using gaze information. In this method, a Random Forest model is constructed for the classification. The classification is highly accurate. By looking at the explanatory variables that turn out to be important, we identify the behavioral features that distinguish test specifications of high quality from others. These are confirmed to be pupil diameter and the number and duration of blinks. The paper also investigates the test specifications automatically classified with gaze information to discuss features of their writing style at each quality level. The proposed method enables us to automatically classify test specifications. It also prevents test item omissions, because it reveals writing features that test specifications of high quality should satisfy.
Keywords: blink, eye tracking, gaze information, pupil diameter, quality improvement, specification document, user-awareness
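A minimal sketch of this classify-and-inspect step with scikit-learn is shown below, using invented gaze features and labels; the feature names simply mirror the variables the abstract reports as important (pupil diameter, blink count and duration) and are not the authors' actual measurements.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical gaze features per writing session; column names are illustrative.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "mean_pupil_diameter_mm": rng.normal(3.5, 0.5, 120),
    "blink_count_per_min": rng.normal(15, 4, 120),
    "mean_blink_duration_ms": rng.normal(200, 40, 120),
    "fixation_count_per_min": rng.normal(180, 30, 120),
})
quality = rng.integers(0, 2, 120)        # 1 = high-quality specification, 0 = other

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, df, quality, cv=5).mean().round(3))

# Feature importances indicate which gaze behaviors separate the quality levels.
clf.fit(df, quality)
for name, imp in sorted(zip(df.columns, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:28s} {imp:.3f}")
```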
Procedia PDF Downloads 64
4533 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid the mislabeling of timber, which results in a loss of income for the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor that mimics experts' knowledge of wood texture to extract the properties of the pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method.
Keywords: classification, feature extraction, fuzzy, inspection system, image analysis, macroscopic images
Procedia PDF Downloads 425
4532 Identification of Deep Landslide on Erzurum-Turkey Highway by Geotechnical and Geophysical Methods and its Prevention
Authors: Neşe Işık, Şenol Altıok, Galip Devrim Eryılmaz, Aydın Durukan, Hasan Özgür Daş
Abstract:
In this study, an active landslide zone affecting the road alignment on the Tortum-Uzundere (Erzurum/Turkey) highway was investigated. Due to the landslide movement, problems have occurred in the existing road pavement, which have caused both safety problems and reduced driving comfort in the operation of the road. In order to model the landslide, drilling, geophysical and inclinometer studies were carried out in the field within the scope of the ground investigation. Laboratory tests were carried out on soil and rock samples obtained from the borings. When the drilling and geophysical studies were evaluated together, it was determined that the study area has a complex geological structure. In addition, the direction and speed of movement of the landslide mass were observed from the inclinometer results. In order to create an idealized geological profile, all field and laboratory studies were evaluated together, and then the sliding surface of the landslide was determined by the back-analysis method. According to the findings obtained, it was determined that the landslide mass was very large and that the movement had a deep sliding surface. As a result of the numerical analyses, it was concluded that slope angle reduction is the most economical and environmentally friendly method for the control of the landslide mass.
Keywords: landslide, geotechnical methods, geophysics, monitoring, highway
Procedia PDF Downloads 68
4531 Attention-Based Spatio-Temporal Approach for Fire and Smoke Detection
Authors: Alireza Mirrashid, Mohammad Khoshbin, Ali Atghaei, Hassan Shahbazi
Abstract:
In various industries, smoke and fire are two of the most important threats in the workplace. One of the common methods for detecting smoke and fire is the use of infrared thermal and smoke sensors, which cannot be used in outdoor applications. Therefore, the use of vision-based methods seems necessary. The problem of smoke and fire detection is spatiotemporal and requires spatiotemporal solutions. This paper presents a method that uses spatial features along with temporal features to detect smoke and fire in the scene. It consists of three main parts; the task of each part is to reduce the error of the previous part so that the final model has robust performance. The method also uses transformer modules to increase the accuracy of the model. The results of our model show the proper performance of the proposed approach in solving the problem of smoke and fire detection, and it can be used to increase workplace safety.
Keywords: attention, fire detection, smoke detection, spatio-temporal
Procedia PDF Downloads 203
4530 Design and Analysis of Proximity Fed Single Band Microstrip Patch Antenna with Parasitic Lines
Authors: Inderpreet Kaur, Sukhjit Kaur, Balwinder Singh Sohi
Abstract:
The design proposed in this paper mainly focuses on the implementation of a single-feed compact rectangular microstrip patch antenna (MSA) for single-band applications. The antenna presented here also works in dual band, but its best performance has been obtained when optimised to work in single-band mode. In this paper, a new feeding structure is applied in the patch antenna design to overcome undesirable features of the earlier multilayer feeding structures while maintaining their interesting features. To make the proposed antenna more efficient, the antenna design parameters were optimised using HFSS's Optimetrics. For the proposed antenna, one resonant frequency has been obtained at 6.03 GHz, with a bandwidth of 167 MHz and a return loss of -33.82 dB. The characteristics of the designed structure are investigated using an FEM-based electromagnetic solver.
Keywords: bandwidth, return loss, parasitic lines, microstrip antenna
Procedia PDF Downloads 463
4529 MarginDistillation: Distillation for Face Recognition Neural Networks with Margin-Based Softmax
Authors: Svitov David, Alyamkin Sergey
Abstract:
The usage of convolutional neural networks (CNNs) in conjunction with the margin-based softmax approach demonstrates state-of-the-art performance for the face recognition problem. Recently, lightweight neural network models trained with the margin-based softmax have been introduced for the face identification task on edge devices. In this paper, we propose a distillation method for lightweight neural network architectures that outperforms other known methods for the face recognition task on the LFW, AgeDB-30 and MegaFace datasets. The idea of the proposed method is to use the class centers from the teacher network for the student network. The student network is then trained to reproduce the angles between the class centers and the face embeddings predicted by the teacher network.
Keywords: ArcFace, distillation, face recognition, margin-based softmax
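A rough TensorFlow sketch of the central ingredient, an ArcFace-style margin-based softmax evaluated against frozen teacher class centers, is given below; the embedding size, number of identities, scale and margin are generic ArcFace-style values, not the paper's settings, and the full distillation loss is not reproduced.

```python
import numpy as np
import tensorflow as tf

# Hypothetical dimensions: 512-D embeddings, 1000 identities.
EMB_DIM, NUM_CLASSES = 512, 1000
SCALE, MARGIN = 64.0, 0.5

# Class centers taken from the teacher's margin-based softmax layer and frozen.
teacher_centers = tf.math.l2_normalize(
    tf.constant(np.random.randn(NUM_CLASSES, EMB_DIM), tf.float32), axis=1)

def arcface_logits(embeddings, labels):
    """ArcFace-style logits computed against the (frozen) teacher class centers."""
    emb = tf.math.l2_normalize(embeddings, axis=1)
    cos = tf.matmul(emb, teacher_centers, transpose_b=True)      # cosine to every center
    theta = tf.acos(tf.clip_by_value(cos, -1.0 + 1e-7, 1.0 - 1e-7))
    target = tf.one_hot(labels, NUM_CLASSES)
    # Add the angular margin only for the ground-truth class, then rescale.
    return SCALE * tf.cos(theta + MARGIN * target)

# Example: a batch of student embeddings scored against the teacher's centers.
student_emb = tf.random.normal((8, EMB_DIM))
labels = tf.constant([1, 5, 42, 7, 0, 3, 9, 11])
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=arcface_logits(student_emb, labels)))
print("batch loss:", float(loss))
```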
Procedia PDF Downloads 146
4528 A Comparative Study for Various Techniques Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBC) are the most common type of blood cell and are the most intensively studied in cell biology. The lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as “anemia”. Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying red blood cells as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape. The second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We have provided an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: red blood cells, classification, radial basis function neural networks, support vector machine, k-nearest neighbors algorithm
Procedia PDF Downloads 480
4527 Epileptic Seizure Prediction Focusing on Relative Change in Consecutive Segments of EEG Signal
Authors: Mohammad Zavid Parvez, Manoranjan Paul
Abstract:
Epilepsy is a common neurological disorder characterized by sudden recurrent seizures. The electroencephalogram (EEG) is widely used to diagnose possible epileptic seizures. Many research works have been devoted to predicting epileptic seizures by analyzing EEG signals. Seizure prediction by analyzing EEG signals is a challenging task due to variations in the brain signals of different patients. In this paper, we propose a new approach for feature extraction based on phase correlation in EEG signals. In phase correlation, we calculate the relative change between two consecutive segments of an EEG signal and then combine the changes with neighboring signals to extract features. These features are then used to classify preictal/ictal and interictal EEG signals for seizure prediction. Experimental results show that the proposed method achieves a good prediction rate with greater consistency across different brain locations on the benchmark data set, compared to the existing state-of-the-art methods.
Keywords: EEG, epilepsy, phase correlation, seizure
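The abstract does not define the measure precisely; in the usual signal-processing sense, phase correlation normalizes the cross-power spectrum of two segments and reads the relative shift and peak value from its inverse transform. The NumPy sketch below applies that textbook definition to consecutive one-second segments of a synthetic trace, so the signal, sampling rate and feature choice are all assumptions.

```python
import numpy as np

def phase_correlation(seg_a: np.ndarray, seg_b: np.ndarray):
    """Return the shift and peak value of the phase correlation between two segments."""
    A, B = np.fft.rfft(seg_a), np.fft.rfft(seg_b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.irfft(cross, n=len(seg_a))
    shift = int(np.argmax(corr))
    return shift, float(corr[shift])

# Hypothetical single-channel EEG trace, 256 Hz, split into 1-second segments.
fs, rng = 256, np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * np.arange(fs * 10) / fs) + 0.3 * rng.normal(size=fs * 10)
segments = eeg.reshape(-1, fs)

# The relative change between consecutive segments becomes one feature per pair.
features = [phase_correlation(segments[i], segments[i + 1])
            for i in range(len(segments) - 1)]
for i, (shift, peak) in enumerate(features):
    print(f"segments {i}->{i+1}: shift={shift:3d} samples, peak={peak:.3f}")
```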
Procedia PDF Downloads 308
4526 Addressing Challenging Behaviours of Individuals with Positive Behaviour Support
Authors: Divi Sharma
Abstract:
The emergence of positive behaviour support (PBS) is directly linked to applied behaviour analysis, which incorporates evidence-based approaches to addressing ethical challenges and improving the autonomy, participation, and overall quality of life of people living and learning in complex social environments. Its features include lifestyle improvement, collaboration with general caregivers, tracking progress with sound steps, comprehensive performance-based interventions, striving for contextual equality, and ensuring entry and implementation. This document aims to summarize these features with the support of case examples, such as involving caregivers in playing an active role in behavioural interventions and creating effective interventions within natural practices. Additionally, it deals with lifestyle changes, as well as a wide variety of behavioural changes, and develops strong strategies that reduce professional dependence.
Keywords: positive behaviour support, quality of life, performance-based interventions, behavioural changes, participation
Procedia PDF Downloads 170
4525 Comparison of Early Silicon Oil Removal and Late Silicon Oil Removal in Patients With Rhegmatogenous Retinal Detachment
Authors: Hamidreza Torabi, Mohsen Moghtaderi
Abstract:
Introduction: Currently, deep vitrectomy with silicone oil tamponade is the standard treatment method for patients with rhegmatogenous retinal detachment (RRD). After retinal repair, it is necessary to remove the silicone oil from the eye, but the appropriate time to remove the oil and the complications related to that timing have been less studied. The aim of this study was to compare the results of early removal of silicone oil with delayed removal of silicone oil in patients with RRD. Method & material: Patients who were referred to the Ophthalmology Clinic of Baqiyatallah Hospital, Tehran, Iran, due to RRD with a detached macula in 2021 and 2022 were evaluated. These patients were treated with deep vitrectomy and silicone oil tamponade. Patients whose retinas were attached after the passage of time were candidates for silicone oil removal (SOR) surgery. For patients in the early SOR group, SOR surgery was performed 3-6 months after the initial vitrectomy surgery, and for the late SOR group, SOR was performed more than 6 months after the initial vitrectomy surgery. Results: In this study, 60 patients with RRD were evaluated. 23 (38.3%) patients were in the early group, and 37 (61.7%) patients were in the late group. Based on our findings, the mean visual acuity of patients on the Snellen chart in the early group (0.48 ± 0.23 decimal) was better than in the late group (0.33 ± 0.18 decimal) (P-value = 0.009). Retinal re-detachment occurred in only one patient, who had early SOR. Conclusion: Early removal of silicone oil (less than 6 months) from the eyes of patients undergoing RRD surgery has been associated with better visual results compared to late removal.
Keywords: retinal detachment, vitrectomy, silicone oil, silicone oil removal, visual acuity
Procedia PDF Downloads 77
4524 Effects of Surface Roughness on a Unimorph Piezoelectric Micro-Electro-Mechanical Systems Vibrational Energy Harvester Using Finite Element Method Modeling
Authors: Jean Marriz M. Manzano, Marc D. Rosales, Magdaleno R. Vasquez Jr., Maria Theresa G. De Leon
Abstract:
This paper discusses the effects of surface roughness on a cantilever beam vibrational energy harvester. A silicon sample was fabricated using MEMS fabrication processes. When etching silicon using deep reactive ion etching (DRIE) at large etch depths, rougher surfaces are observed as a result of increased process pressure, higher coil power and increased helium backside cooling readings. To account for the effects of surface roughness on the characteristics of the cantilever beam, finite element method (FEM) modeling was performed using actual roughness data from the fabricated samples. It was found that when etching about 550 µm of silicon, the root mean square roughness parameter, Sq, varies by 1 to 3 µm (at 100 µm thickness) across a 6-inch wafer. Given this Sq variation, FEM simulations predict an 8 to 148 Hz shift in the resonant frequency while having no significant effect on the output power. The significant shift in the resonant frequency implies that careful consideration must be given to the surface roughness resulting from fabrication processes when designing energy harvesters.
Keywords: deep reactive ion etching, finite element method, microelectromechanical systems, multiphysics analysis, surface roughness, vibrational energy harvester
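For reference, the Sq parameter mentioned here is the root-mean-square height of the surface. A short sketch of how it could be computed from a measured height map is below, with the grid size and height values invented.

```python
import numpy as np

def sq_roughness(height_map_um: np.ndarray) -> float:
    """Root mean square areal roughness: Sq = sqrt(mean((z - mean(z))**2))."""
    z = height_map_um - height_map_um.mean()
    return float(np.sqrt(np.mean(z ** 2)))

# Hypothetical DRIE-etched surface sampled on a 512 x 512 grid,
# with roughness on the order of a couple of microns.
rng = np.random.default_rng(7)
surface = rng.normal(loc=0.0, scale=2.0, size=(512, 512))   # heights in µm

print(f"Sq = {sq_roughness(surface):.2f} µm")
```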
Procedia PDF Downloads 121
4523 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction
Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota
Abstract:
Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regressions have been used to find the association of traffic environmental factors with accident occurrence; recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data, the minority group is often the group of interest; e.g., in road traffic accident data, the accident events are the group of interest. This study proposes a combination of k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; however, on an evenly distributed dataset, the proposed method performs almost like a standard neural network.
Keywords: accident risks estimation, artificial neural network, deep learning, k-means, road safety
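The abstract does not specify how the two algorithms are combined; one common arrangement, sketched below as an assumption rather than the authors' exact procedure, is to compress the majority (non-accident) class into k-means centroids so that the classifier sees balanced classes. The data, network size and cluster count are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical unbalanced dataset: 5000 non-accident records, 150 accident records.
rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(5000, 8))
X_minor = rng.normal(1.5, 1.0, size=(150, 8))
X = np.vstack([X_major, X_minor])
y = np.hstack([np.zeros(5000), np.ones(150)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Compress the majority class into k-means centroids so the classes are balanced.
majority = X_tr[y_tr == 0]
k = int((y_tr == 1).sum())                       # one centroid per minority sample
centroids = KMeans(n_clusters=k, n_init=10, random_state=0).fit(majority).cluster_centers_

X_bal = np.vstack([centroids, X_tr[y_tr == 1]])
y_bal = np.hstack([np.zeros(k), np.ones(k)])

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```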
Procedia PDF Downloads 163
4522 Physics-Informed Machine Learning for Displacement Estimation in Solid Mechanics Problem
Authors: Feng Yang
Abstract:
Machine learning (ML), especially deep learning (DL), has been applied extensively to many applications in recent years and has achieved great success in solving different problems, including scientific problems. However, conventional ML/DL methodologies are purely data-driven and have limitations, such as the need for an ample amount of labelled training data, a lack of consistency with physical principles, and a lack of generalizability to new problems/domains. Recently, a growing consensus has emerged that ML models need to take further advantage of prior knowledge to deal with these limitations. Physics-informed machine learning, which aims at integrating physics/domain knowledge into ML, has been recognized as an emerging area of research, especially in the last 2 to 3 years. In this work, physics-informed ML, specifically a physics-informed neural network (NN), is employed and implemented to estimate the displacements in the x, y, and z directions in a solid mechanics problem that is governed by equilibrium equations with boundary conditions. By incorporating the physics (i.e., the equilibrium equations) into the learning process of the NN, it is shown that the NN can be trained very efficiently with a small set of labelled training data. Experiments with different settings of the NN model and different amounts of labelled training data were conducted, and the results show that very high accuracy can be achieved in fulfilling the equilibrium equations as well as in predicting the displacements; e.g., with an overall displacement of 0.1, a root mean square error (RMSE) of 2.09 × 10⁻⁴ was achieved.
Keywords: deep learning, neural network, physics-informed machine learning, solid mechanics
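To make the idea concrete, the TensorFlow sketch below trains a small network on a deliberately simplified 1D analogue of such a problem: a bar in equilibrium, EAu''(x) + b = 0, with prescribed end displacements, trained by penalizing the equilibrium residual at collocation points together with the boundary conditions. The 1D setting, network size and boundary values are assumptions; the paper addresses the full 3D displacement field.

```python
import tensorflow as tf

# Simplified 1D analogue (an assumption, not the paper's 3D problem):
# equilibrium E*A*u''(x) + b(x) = 0 on [0, 1] with u(0) = 0 and u(1) = 0.1.
E, A, b = 1.0, 1.0, 0.0

net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(1e-3)
x_col = tf.random.uniform((64, 1))                 # collocation points inside the bar
x_bc = tf.constant([[0.0], [1.0]])
u_bc = tf.constant([[0.0], [0.1]])                 # boundary displacements

@tf.function
def train_step():
    with tf.GradientTape() as outer:
        with tf.GradientTape() as t2:
            t2.watch(x_col)
            with tf.GradientTape() as t1:
                t1.watch(x_col)
                u = net(x_col)
            du = t1.gradient(u, x_col)              # u'(x)
        d2u = t2.gradient(du, x_col)                # u''(x)
        residual = E * A * d2u + b                  # equilibrium residual
        loss = tf.reduce_mean(residual**2) + tf.reduce_mean((net(x_bc) - u_bc)**2)
    grads = outer.gradient(loss, net.trainable_variables)
    opt.apply_gradients(zip(grads, net.trainable_variables))
    return loss

for step in range(2000):
    loss = train_step()
print("final loss:", float(loss), " u(0.5) ≈", float(net(tf.constant([[0.5]]))))
```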
Procedia PDF Downloads 150
4521 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, called Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and ~55% had squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with the SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with the one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and for the automatic classification of tumor stage and subtype.
Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis
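Laws' texture energy measures are built from small 1D kernels (level, edge, spot, ripple) whose outer products form 2D masks; filtering the image with each mask and averaging the local absolute response gives one energy feature per mask. The Python/NumPy sketch below illustrates this on an invented region of interest; the paper's implementation is in MATLAB, and its exact kernel set, window size and feature summary may differ.

```python
import numpy as np
from scipy.ndimage import convolve

# Laws' 1-D kernels: Level, Edge, Spot, Ripple.
L5 = np.array([1, 4, 6, 4, 1], float)
E5 = np.array([-1, -2, 0, 2, 1], float)
S5 = np.array([-1, 0, 2, 0, -1], float)
R5 = np.array([1, -4, 6, -4, 1], float)
kernels = {"L5": L5, "E5": E5, "S5": S5, "R5": R5}

def laws_energy_features(image: np.ndarray, window: int = 15) -> dict:
    """Texture energy per 2-D Laws mask (outer products of the 1-D kernels)."""
    img = image - image.mean()                      # remove illumination offset
    box = np.ones((window, window)) / window**2
    feats = {}
    for na, ka in kernels.items():
        for nb, kb in kernels.items():
            mask = np.outer(ka, kb)
            filtered = convolve(img, mask, mode="reflect")
            energy = convolve(np.abs(filtered), box, mode="reflect")
            feats[f"{na}{nb}"] = float(energy.mean())  # summarize each energy map
    return feats

# Hypothetical tumor ROI extracted from an FDG-PET slice.
roi = np.random.default_rng(0).random((64, 64))
features = laws_energy_features(roi)
print({k: round(v, 4) for k, v in list(features.items())[:4]})
```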
Procedia PDF Downloads 326