Search results for: high relative accuracy
Paper Count: 24268

23788 A Distinct Approach Towards Relativity and Time Dilation

Authors: Vipin Choudhary

Abstract:

Time dilation is the difference in elapsed time measured by two clocks in relative motion. Many studies have explored time dilation using various approaches; however, scientific and mathematical explanations of the time dilation of moving objects and of light-pulse clocks remain limited. This article therefore examines relativity using scientific and mathematical approaches, analyzing the behavior of moving objects and the ticking of light-pulse clocks. The study reveals that the time elapsed for the same process differs between observers and shows that time can be expressed in the form of a wave. In addition, as the relative distance between the observer and the observed subject changes, time flows differently for the observer relative to the observed subject.
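
For reference, the standard Lorentz time-dilation relation behind such clock comparisons (the abstract's own wave formulation is not given, so this is the textbook result only) is:

```latex
\Delta t = \gamma\,\Delta t_0, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

For example, at v = 0.6c, γ = 1/√(1 − 0.36) = 1.25, so a process lasting Δt₀ = 1 s on the moving clock is measured as Δt = 1.25 s by the relatively stationary observer.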

Keywords: Einstein’s special theory of relativity, reference frame, time dilation, length contraction, Lorentz transformation

Procedia PDF Downloads 36
23787 The Effect of Explicit Focus on Form on Second Language Learning Writing Performance

Authors: Keivan Seyyedi, Leila Esmaeilpour, Seyed Jamal Sadeghi

Abstract:

This study investigated the effectiveness of explicit focus on form on the written performance of EFL learners. To provide empirical support, sixty male English learners were selected and randomly assigned to two groups: explicit focus on form and meaning-focused instruction. Narrative writing was employed for data collection: to measure writing performance, participants were required to narrate a story, were given 20 minutes to finish the task, and were asked to write at least 150 words. The participants’ output was coded and then analyzed using independent t-tests for the grammatical accuracy and fluency of the learners’ performance. Results indicated that learners in the explicit focus-on-form group benefited from error correction and rule explanation, two pedagogical techniques of explicit focus on form, with respect to accuracy; regarding fluency, however, they did not differ significantly from the participants of the meaning-focused group.
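
The abstract does not include the analysis itself; a minimal sketch of the independent t-test it describes, using SciPy (the group means and spreads below are hypothetical stand-ins, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-learner accuracy scores (% of error-free clauses), 30 per group
focus_on_form = rng.normal(loc=72, scale=8, size=30)
meaning_focused = rng.normal(loc=64, scale=8, size=30)

# Independent-samples t-test on grammatical accuracy
t, p = stats.ttest_ind(focus_on_form, meaning_focused)
print(f"t = {t:.2f}, p = {p:.4f}")
```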

Keywords: explicit focus on form, rule explanation, accuracy, fluency

Procedia PDF Downloads 514
23786 A Developmental Survey of Local Stereo Matching Algorithms

Authors: André Smith, Amr Abdel-Dayem

Abstract:

This paper presents an overview of the history and development of stereo matching algorithms. Details from their inception up to relatively recent techniques are described, noting the challenges that have been surmounted across the past decades. Different components of these algorithms are explored, though the focus is directed towards local matching techniques. While global approaches have existed for some time and have demonstrated greater accuracy than their local counterparts, they are generally quite slow. Many strides have been made more recently, allowing local methods to catch up in terms of accuracy without sacrificing overall performance.
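
As a concrete illustration of the local matching techniques the survey focuses on, here is a minimal sum-of-absolute-differences (SAD) block matcher for a rectified stereo pair; the window size and disparity range are arbitrary illustrative choices, and the loop-based form is written for clarity rather than speed:

```python
import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    """Naive local stereo matching on grayscale images: for each pixel in the
    left image, pick the disparity whose window in the right image has the
    lowest sum of absolute differences."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```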

Keywords: developmental survey, local stereo matching, rectification, stereo correspondence

Procedia PDF Downloads 293
23785 A Machine Learning Based Method to Detect System Failure in Resource Constrained Environment

Authors: Payel Datta, Abhishek Das, Abhishek Roychoudhury, Dhiman Chattopadhyay, Tanushyam Chattopadhyay

Abstract:

Machine learning (ML) and deep learning (DL) are predominantly used in image/video processing, natural language processing (NLP), and audio and speech recognition, but much less in system performance evaluation. In this paper, the authors describe the architecture of an abstraction layer constructed using ML/DL to detect system failure. The proposed system detects failure by evaluating the performance metrics of an IoT service deployed in a resource-constrained infrastructure environment. The system has been tested on a manually annotated data set containing different system metrics, such as number of threads, throughput, average response time, CPU usage, memory usage, and network input/output, captured in different hardware environments such as edge (Atom-based gateway) and cloud (AWS EC2). The main challenge in developing such a system is that the classification accuracy should be 100%, as an error in the system degrades service performance and consequently affects the reliability and high availability that are mandatory for an IoT system. The proposed ML/DL classifiers work with 100% accuracy on the data set of nearly 4,000 samples captured within the organization.
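
The paper's architecture and data are not reproduced in the abstract; the sketch below only shows the general shape of such a failure classifier in scikit-learn, with synthetic stand-ins for the annotated metrics (the feature layout, labeling rule, and random-forest choice are all assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Columns: threads, throughput, avg response time, CPU %, memory %, net I/O
rng = np.random.default_rng(42)
X = rng.random((4000, 6))                  # stand-in for the annotated metrics
y = (X[:, 2] + X[:, 3] > 1.2).astype(int)  # stand-in failure label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```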

Keywords: machine learning, system performance, performance metrics, IoT, edge

Procedia PDF Downloads 195
23784 Relevance of the Variation in the Angulation of Palatal Throat Form to the Orientation of the Occlusal Plane: A Cephalometric Study

Authors: Sanath Kumar Shetty, Sanya Sinha, K. Kamalakanth Shenoy

Abstract:

The posterior reference for the ala-tragal line is a cause of confusion, with different authors suggesting the superior, middle, or inferior part of the tragus as its location. This study was conducted on 200 subjects to evaluate whether any correlation exists between the variation in the angulation of the palatal throat form and the relative parallelism of the occlusal plane to the ala-tragal line at different tragal levels. A custom-made occlusal plane analyzer was used to check the parallelism between the ala-tragal line and the occlusal plane, and a lateral cephalogram was taken for each subject to measure the angulation of the palatal throat form. Fisher’s exact test was used to evaluate the correlation between the angulation of the palatal throat form and the relative parallelism of the occlusal plane to the ala-tragal line. A classification of the palatal throat form was also formulated, based on confidence intervals. From the results of the study, the inferior, middle, and superior parts of the tragus were the reference points in 49.5%, 32%, and 18.5% of the subjects, respectively. Class I palatal throat form (41-50 degrees), Class II (below 41 degrees), and Class III (above 50 degrees) were seen in 42%, 43%, and 15% of the subjects, respectively. It was also concluded that there is no significant correlation between the variation in the angulation of the palatal throat form and the relative parallelism of the occlusal plane to the ala-tragal line.
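
SciPy's implementation of Fisher's exact test handles the 2×2 case; a minimal sketch on a collapsed, entirely hypothetical contingency table is shown below (the study's full 3×3 cross-tabulation would need a generalization such as the Freeman-Halton extension):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency: tragal reference (inferior vs. other)
# against palatal throat form (Class I vs. other)
table = [[45, 54],   # inferior tragus: Class I / other
         [39, 62]]   # middle or superior tragus: Class I / other
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
```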

Keywords: Ala-Tragal line, occlusal plane, palatal throat form, cephalometry

Procedia PDF Downloads 311
23783 Effect of Climate Change on Runoff in the Upper Mun River Basin, Thailand

Authors: Preeyaphorn Kosa, Thanutch Sukwimolseree

Abstract:

Climate change is a main parameter affecting the elements of the hydrological cycle, especially runoff. The purpose of this study is therefore to determine the impact of climate change on surface runoff, using a 2008 land use map and daily weather data from January 1, 1979 to September 30, 2010 as inputs to the SWAT model. SWAT is a continuous-time model that operates on a daily time step at the basin scale. The results show that the effect of temperature change on runoff cannot be clearly identified, while rainfall, relative humidity, and evaporation are the parameters to consider for runoff change: if rainfall and relative humidity increase, runoff also increases; on the other hand, if evaporation increases, runoff decreases.

Keywords: climate, runoff, SWAT, upper Mun River basin

Procedia PDF Downloads 396
23782 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor

Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh

Abstract:

Brain tumors are not the cancer with the highest incidence rate, but their high mortality rate and poor prognosis still make them a major concern. On clinical examination, the grading of brain tumors depends on pathological features. However, histopathological analysis has weak points that can cause misgrading: interpretations can vary in the absence of a well-established definition, and the heterogeneity of malignant tumors makes it challenging to extract meaningful tissue under surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To further improve diagnostic accuracy, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features, such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and moment features, were extracted from the tumors for classification. In the results, two of five morphological features and three of four image moment features achieved p-values of <0.001, and the remaining moment feature had a p-value of <0.05. Using the combination of all features, the CAD system achieved an accuracy of 83.18% in classifying the gliomas into LGG and GBM, with a sensitivity of 70.59% and a specificity of 89.04%. The proposed system could serve as a second reader for radiologists in clinical examinations.
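
The abstract names the morphological features but not their formulas; the sketch below computes two of them under common definitions — compactness as perimeter²/(4π·area) and centroid-based normalized radial length statistics — both definitional assumptions on our part:

```python
import numpy as np

def shape_features(contour, area, perimeter):
    """contour: (N, 2) array of boundary points of the segmented tumor."""
    # Compactness: 1.0 for a perfect circle, larger for irregular shapes
    compactness = perimeter**2 / (4.0 * np.pi * area)
    # Normalized radial length: centroid-to-boundary distances, scaled by max
    centroid = contour.mean(axis=0)
    radial = np.linalg.norm(contour - centroid, axis=1)
    nrl = radial / radial.max()
    return compactness, nrl.mean(), nrl.std()
```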

Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging

Procedia PDF Downloads 263
23781 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections

Authors: Anthony D. Rhodes, Manan Goel

Abstract:

We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high-grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically-principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves on state-of-the-art segmentation fidelity with high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
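
The boundary loss is not defined in the abstract; one common formulation for this purpose — the distance-map-weighted boundary loss of Kervadec et al., used here purely as an assumed stand-in, not the authors' function — penalizes predicted foreground mass according to its signed distance from the ground-truth boundary:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_loss(pred, gt):
    """pred: (H, W) predicted foreground probabilities in [0, 1];
    gt: (H, W) binary ground-truth mask (0/1 integers)."""
    # Signed distance map: positive outside the object, negative inside,
    # zero on the boundary; weighting pred by it rewards mass near/inside gt.
    phi = distance_transform_edt(1 - gt) - distance_transform_edt(gt)
    return float((phi * pred).mean())
```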

Keywords: computer vision, object segmentation, interactive segmentation, model compression

Procedia PDF Downloads 120
23780 A Study of Permission-Based Malware Detection Using Machine Learning

Authors: Ratun Rahman, Rafid Islam, Akin Ahmed, Kamrul Hasan, Hasan Mahmud

Abstract:

Malware is becoming more prevalent, and several threat categories have risen dramatically in recent years. This paper provides a bird's-eye view of the world of malware analysis. It investigates the efficiency of five different machine learning methods (Naive Bayes, K-Nearest Neighbor, Decision Tree, Random Forest, and TensorFlow Decision Forest), combined with features picked from the retrieval of Android permissions, in categorizing applications as harmful or benign. The dataset consists of 1,168 samples (602 malware and 566 benign Android applications), each described by 948 features (permissions). Using this permission-based dataset, the machine learning algorithms produce accuracy rates above 80%, except for the Naive Bayes algorithm at 65% accuracy. Of the considered algorithms, TensorFlow Decision Forest performed best, with an accuracy of 90%.

Keywords: android malware detection, machine learning, malware, malware analysis

Procedia PDF Downloads 170
23779 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle, developed to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). It sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector’s accuracy, as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera, and the average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species, and all data-generation methods were processed without manual interaction. As media-based remote monitoring increasingly dominates methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
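
The abstract does not name the CNN backbone; the sketch below shows the general transfer-learning recipe in Keras — freeze an ImageNet-pretrained base and train a new species-classification head — where MobileNetV2 and the layer sizes are placeholder choices, not the authors' architecture:

```python
import tensorflow as tf

NUM_SPECIES = 45  # the Shark Detector's reported species count

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # transfer learning: keep pretrained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```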

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 122
23778 Random Forest Classification for Population Segmentation

Authors: Regina Chua

Abstract:

To reduce the costs of re-fielding a large survey, a Random Forest classifier was applied to measure the accuracy of classifying individuals into their assigned segments with the fewest possible questions. Given a long survey, one needed to determine the ten or fewer most predictive questions that would accurately assign new individuals to custom segments. Furthermore, the solution needed to classify quickly and be usable in non-Python environments. In this paper, a supervised Random Forest classifier was modeled on a dataset with 7,000 individuals, 60 questions, and 254 features. The Random Forest consisted of an ensemble of individual decision trees whose combined prediction of the segment achieves robust precision and recall scores compared to a single tree. A random 70-30 stratified sampling was used for training the algorithm, and accuracy trade-offs at different depths for each segment were identified. Ultimately, the Random Forest classifier performed at 87% accuracy at a depth of 10, with 20 instead of 254 features and 10 instead of 60 questions. With an acceptable accuracy achieved by prioritizing feature selection, new tools were developed for non-Python environments: a worksheet with a formulaic version of the algorithm and an embedded function to predict the segment of an individual in real time. Random Forest was determined to be an optimal classification model by its feature selection, performance, processing speed, and flexible application in other environments.
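
A sketch of the workflow described — stratified 70-30 split, depth-capped Random Forest, then re-fitting on the top 20 features by importance — using synthetic stand-in data, since the survey itself is not public:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: (7000, 254) feature matrix, y: assigned segment labels (stand-in data)
rng = np.random.default_rng(1)
X, y = rng.random((7000, 254)), rng.integers(0, 5, 7000)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)  # 70-30 stratified

rf = RandomForestClassifier(max_depth=10, random_state=0).fit(X_tr, y_tr)

# Keep only the 20 most important features and re-fit
top20 = np.argsort(rf.feature_importances_)[-20:]
rf20 = RandomForestClassifier(max_depth=10, random_state=0)
rf20.fit(X_tr[:, top20], y_tr)
print("accuracy with 20 features:", rf20.score(X_te[:, top20], y_te))
```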

Keywords: machine learning, supervised learning, data science, random forest, classification, prediction, predictive modeling

Procedia PDF Downloads 95
23777 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
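
A minimal simulation in the spirit of these experiments — flip 40% of the training labels, train a linear SVM, and compare test accuracy on clean labels against the training labels' own accuracy; the dataset and parameters are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
noise_rate = 0.40                          # 40% of training labels flipped
flip = rng.random(len(y_tr)) < noise_rate
y_noisy = np.where(flip, 1 - y_tr, y_tr)

clf = LinearSVC(max_iter=5000).fit(X_tr, y_noisy)
print("label accuracy of training data:", 1 - noise_rate)
print("model accuracy on clean test set:", clf.score(X_te, y_te))
# With enough linearly separable structure, the model often beats its labels.
```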

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 200
23776 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods

Authors: Kizito Ugochukwu Nwajeri

Abstract:

This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in various fields such as science, engineering, and applied mathematics. Traditional numerical approaches, such as finite difference and shooting methods, often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods, showing significant improvements in accuracy and computational efficiency compared to conventional methods and indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications in real-world problems, offering a promising alternative to existing numerical techniques.
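
The hybrid block formulas themselves are not given in the abstract; as a reference point for the same problem class, here is a standard collocation-based BVP solve with SciPy's solve_bvp on a simple linear test problem with a known solution:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Test BVP: y'' = -y with y(0) = 0, y(pi/2) = 1 (exact solution: y = sin x)
def ode(x, y):
    return np.vstack([y[1], -y[0]])   # first-order system [y, y']

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0, np.pi / 2, 11)
y0 = np.zeros((2, x.size))            # initial guess for the solver
sol = solve_bvp(ode, bc, x, y0)
print("max error:", np.abs(sol.sol(x)[0] - np.sin(x)).max())
```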

Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods

Procedia PDF Downloads 36
23775 Constitutive Modeling of Different Types of Concrete under Uniaxial Compression

Authors: Mostafa Jafarian Abyaneh, Khashayar Jafari, Vahab Toufigh

Abstract:

The cost of experiments on different types of concrete has raised the demand for predicting their behavior with numerical analysis. In this research, an advanced numerical model is presented to predict the complete elastic-plastic behavior of polymer concrete (PC), high-strength concrete (HSC), and high-performance concrete (HPC) with different steel fiber contents under uniaxial compression. The accuracy of the numerical response was satisfactory compared to conventional simple models such as Mohr-Coulomb and Drucker-Prager. In order to predict the complete elastic-plastic behavior of the specimens, including softening behavior, the disturbed state concept (DSC) was implemented through nonlinear finite element analysis (NFEA) with the hierarchical single surface (HISS) failure criterion, a failure surface without any singularity.

Keywords: disturbed state concept (DSC), hierarchical single surface (HISS) failure criterion, high performance concrete (HPC), high-strength concrete (HSC), nonlinear finite element analysis (NFEA), polymer concrete (PC), steel fibers, uniaxial compression test

Procedia PDF Downloads 312
23774 Influence of Agroforestry Trees Leafy Biomass and Nitrogen Fertilizer on Crop Growth Rate and Relative Growth Rate of Maize

Authors: A. B. Alarape, O. D. Aba

Abstract:

The use of legume tree prunings as mulch in agroforestry systems is a common practice to maintain soil organic matter and improve soil fertility in the tropics. This study was conducted to determine the influence of agroforestry tree leafy biomass and nitrogen fertilizer on the crop growth rate (CGR) and relative growth rate (RGR) of maize. The experiments were laid out as a 3 x 4 x 2 factorial in a split-split-plot design with three replicates: the control and the biomass species (Parkia biglobosa and Albizia lebbeck) as main plots, nitrogen rates (0, 40, 80, 120 kg N ha⁻¹) as sub-plots, and maize varieties (DMR-ESR-7 and 2009 EVAT) as sub-sub-plots. Data were analyzed using descriptive and inferential statistics (ANOVA) at α = 0.05. Incorporation of leafy biomass had a significant effect on RGR in 2015, while nitrogen application had a significant effect on CGR. 2009 EVAT had a higher CGR in 2015 at 4-6 and 6-8 WAP. Incorporation of Albizia leaves enhanced the growth of maize more than Parkia leaves. Farmers are therefore encouraged to use Albizia leaves as mulch to enrich their soil for maize production, most especially where inorganic fertilizers are not readily available; production of maize with biomass together with the application of 120 kg N ha⁻¹ will, however, bring better maize growth.
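
The abstract does not state the growth-index formulas; under their usual definitions (an assumption here), with W₁ and W₂ the crop dry weights at times t₁ and t₂ and A the ground area:

```latex
\mathrm{CGR} = \frac{W_2 - W_1}{A\,(t_2 - t_1)}, \qquad
\mathrm{RGR} = \frac{\ln W_2 - \ln W_1}{t_2 - t_1}
```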

Keywords: agroforestry trees, fertilizer, growth, incorporation, leafy biomass

Procedia PDF Downloads 192
23773 Optical Flow Technique for Supersonic Jet Measurements

Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi

Abstract:

This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique is to use high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As implementations of optical flow techniques for supersonic flows remain rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further refined for robustness and accuracy. Details of the methodology employed and the challenges faced will be elaborated in the final conference paper should the abstract be accepted. Despite these challenges, this novel supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
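
The paper's adapted algorithm is not given in the abstract; for reference, a textbook Horn-Schunck iteration between two grayscale frames looks like this (α and the iteration count are tuning choices):

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames."""
    Iy, Ix = np.gradient(im1.astype(float))        # spatial gradients
    It = im2.astype(float) - im1.astype(float)     # temporal gradient
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0  # neighbour mean
    u = np.zeros_like(Ix)
    v = np.zeros_like(Ix)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```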

Keywords: Schlieren, optical flow, supersonic jets, shock shear layer

Procedia PDF Downloads 312
23772 High Order Block Implicit Multi-Step (HOBIM) Methods for the Solution of Stiff Ordinary Differential Equations

Authors: J. P. Chollom, G. M. Kumleng, S. Longwap

Abstract:

The search for higher-order A-stable linear multi-step methods has long interested numerical analysts and has been pursued either through higher derivatives of the solution or by inserting additional off-step points, super-future points, and the like. Such methods are suitable for the solution of stiff differential equations, which exhibit characteristics that place a severe restriction on the choice of step size, so that only methods with large regions of absolute stability remain suitable for such equations. In this paper, high-order block implicit multi-step methods of the hybrid form, up to order twelve, have been constructed using the multi-step collocation approach by inserting one or more off-step points into the multi-step method. The accuracy and stability properties of the new methods are investigated and shown to yield A-stable methods, a property desirable in methods suitable for the solution of stiff ODEs. The new high-order block implicit multistep methods, used as block integrators, are tested on stiff differential systems, and the results reveal that the new methods are efficient and compete favourably with the state-of-the-art Matlab ode23 code.
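
To see why A-stability matters here, a quick SciPy comparison of an explicit Runge-Kutta solver against an implicit BDF multistep solver on a stiff test equation — a standard illustration, not the paper's HOBIM integrator; the step counts expose the explicit method's stability-limited step size:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stiff test problem: y' = -1000 (y - cos t), y(0) = 0
f = lambda t, y: -1000.0 * (y - np.cos(t))

explicit = solve_ivp(f, (0, 10), [0.0], method="RK45")  # stability-limited steps
implicit = solve_ivp(f, (0, 10), [0.0], method="BDF")   # implicit multistep
print("RK45 steps:", explicit.t.size, "| BDF steps:", implicit.t.size)
```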

Keywords: block linear multistep methods, high order, implicit, stiff differential equations

Procedia PDF Downloads 358
23771 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention from both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed through conversation, because an erroneous form may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication, leading the interlocutor to attempt to remedy the gap through talk. This study was therefore an attempt to investigate whether there was any significant difference between the effect of reasoning gap tasks and that of information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners, on the other. After a pilot study to check the practicality of the treatments, the Preliminary English Test was administered at the outset of the main study to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. Speaking sections of the PET were used as pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was estimated as the percentage of clauses with no grammatical errors out of the total clauses produced. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced. Different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were then employed in the experimental groups during the treatment sessions, and the students were required to practice conversational strategies when doing the speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of the statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had a more significant impact on encouraging negotiation of meaning and increased the frequency of conversational strategies in every session. The findings also indicated that both task types helped learners significantly improve their speaking accuracy, with the reasoning gap tasks again more effective than the information gap tasks.

Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks

Procedia PDF Downloads 309
23770 Transfer of Constraints or Constraints on Transfer? Syntactic Islands in Danish L2 English

Authors: Anne Mette Nyvad, Ken Ramshøj Christensen

Abstract:

In the syntax literature, it has standardly been assumed that relative clauses and complement wh-clauses are islands for extraction in English, and that constraints on extraction from syntactic islands are universal. However, the Mainland Scandinavian languages have been known to provide counterexamples. Previous research on Danish has shown that neither relative clauses nor embedded questions are strong islands in Danish. Instead, extraction from this type of syntactic environment is degraded due to structural complexity, and it interacts with non-structural factors such as the frequency of occurrence of the matrix verb, the possibility of temporary misanalysis leading to semantic incongruity, and exposure over time. We argue that these facts can be accounted for by parametric variation in the availability of CP-recursion, resulting in the patterns observed, as Danish would then “suspend” the ban on movement out of relative clauses and embedded questions. Given that Danish does not seem to adhere to allegedly universal syntactic constraints, such as the Complex NP Constraint and the Wh-Island Constraint, what happens in L2 English? We present results from a study investigating how native Danish speakers judge extractions from island structures in L2 English. Our findings suggest that Danes transfer their native-language parameter setting when asked to judge island constructions in English. This is compatible with the Full Transfer Full Access Hypothesis, which predicts that Danes would have difficulty resetting their [+/- CP-recursion] parameter in English because they are not exposed to negative evidence.

Keywords: syntax, islands, second language acquisition, Danish

Procedia PDF Downloads 127
23769 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. However, the unacceptably low accuracy (below 50%) of a traditional single-classifier DL application undermines its use for SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier-fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification against a single SNR threshold, with two labels: less than the threshold, or greater than or equal to it. Together, the multiple CNNs are combined to effectively classify over the range −20 dB ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
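
The fusion rule itself is not spelled out in the abstract; one natural realization of this divide-and-conquer scheme (the thresholds and 4 dB spacing below are assumptions) simply counts the binary "greater than or equal" votes to locate the SNR bin:

```python
import numpy as np

# One threshold per binary CNN, e.g. every 4 dB across -20..32 dB
thresholds = np.arange(-20, 33, 4)

def fuse_binary_decisions(decisions):
    """decisions[k] = 1 if CNN k predicts SNR >= thresholds[k], else 0.
    Assuming monotone decisions, counting the positives gives the SNR bin."""
    k = int(np.sum(decisions))
    if k == 0:
        return f"SNR < {thresholds[0]} dB"
    return f"SNR >= {thresholds[k - 1]} dB"

# Example: the first 8 classifiers fire, the remaining 6 do not
print(fuse_binary_decisions([1] * 8 + [0] * 6))   # -> "SNR >= 8 dB"
```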

Keywords: classification, CNN, deep learning, prediction, SNR

Procedia PDF Downloads 134
23768 A Real Time Ultra-Wideband Location System for Smart Healthcare

Authors: Mingyang Sun, Guozheng Yan, Dasheng Liu, Lei Yang

Abstract:

Driven by the demand for intelligent monitoring in rehabilitation centers and hospitals, a high-accuracy real-time location system based on UWB (ultra-wideband) technology is proposed. The system measures the precise location of a specific person, traces his movement, and visualizes his trajectory on screen for doctors or administrators, so that doctors can view the position of a patient at any time and find them immediately and exactly when an emergency occurs. In the design process, different algorithms were discussed and their errors analyzed. In addition, we discuss a simple but effective way of correcting the antenna delay error. By choosing the best algorithm and correcting errors with the corresponding methods, the system attained good accuracy: experiments indicated that the ranging error of the system is lower than 7 cm, the locating error is lower than 20 cm, and the refresh rate exceeds 5 times per second. In future work, by embedding the system in wearable IoT (Internet of Things) devices, it could provide not only physical parameters but also the activity status of the patient, which would greatly help doctors in delivering healthcare.
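
The paper's locating algorithm is not identified in the abstract; a standard linearized least-squares trilateration from anchor ranges, of the kind such UWB systems commonly use, is sketched below (the anchor layout and noise level are made up):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from UWB ranges to fixed anchors.
    anchors: (N, 2) known coordinates; ranges: (N,) measured distances.
    Subtracting the first range equation from the others linearizes the
    system: 2 (a_i - a_0)^T x = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0], [6.0, 6.0]])
true = np.array([2.0, 3.0])
ranges = np.linalg.norm(anchors - true, axis=1) + 0.05  # ~5 cm ranging error
print(trilaterate(anchors, ranges))  # close to (2, 3)
```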

Keywords: intelligent monitoring, ultra-wideband technology, real-time location, IoT devices, smart healthcare

Procedia PDF Downloads 141
23767 High Titer Cellulosic Ethanol Production Achieved by Fed-Batch Prehydrolysis Simultaneous Enzymatic Saccharification and Fermentation of Sulfite Pretreated Softwood

Authors: Chengyu Dong, Shao-Yuan Leu

Abstract:

Cellulosic ethanol production from lignocellulosic biomass can reduce our reliance on fossil fuel, mitigate climate change, and stimulate rural economic development. The relatively low ethanol titer (60 g/L) limits the economic viability of lignocellulose-based biorefineries. The ethanol titer can be increased to 80 g/L by removing nearly all the non-cellulosic materials, but the capital cost of the pretreatment process then increases significantly. In this study, a fed-batch prehydrolysis simultaneous saccharification and fermentation (PSSF) process was designed to convert sulfite-pretreated softwood (~30% residual lignin) into high concentrations of ethanol (80 g/L). The liquefaction time of the hydrolysis process was shortened to 24 h by employing the fed-batch strategy. Washing out the spent liquor with water could eliminate the inhibition from the pretreatment spent liquor; however, the ethanol yield from lignocellulose was reduced, as fermentable sugars were also lost during the process. Fed-batch prehydrolysis of the whole-slurry (i.e., liquid plus solid fractions) pretreated softwood for 24 h, followed by simultaneous saccharification and fermentation at 28 °C, generated an ethanol titer of 80 g/L. The fed-batch strategy is very effective at eliminating the “solids effect” in high-gravity saccharification, so concentrating the cellulose to nearly 90% in the pretreatment process is not a necessary step to achieve high ethanol production. Detoxification of the pretreatment spent liquor caused sugar loss and consequently reduced the ethanol yield. The tolerance of yeast to inhibitors was better at 28 °C; therefore, reducing the temperature of the fermentation process is a simple and valid method to obtain high ethanol production.

Keywords: cellulosic ethanol, sulfite pretreatment, fed-batch PSSF, temperature

Procedia PDF Downloads 367
23766 Subsidiary Strategy and Importance of Standards: Re-Interpreting the Integration-Responsiveness Framework

Authors: Jo-Ann Müller

Abstract:

The integration-responsiveness (IR) framework presents four distinct internationalization strategies, which differ depending on the extent of the pressure a company faces for local responsiveness and global integration. This study applies the framework to standards by examining differences in the relative importance of three types of standards depending on the role the subsidiary plays within the corporate group. Hypotheses are tested empirically in a two-stage procedure: first, the subsidiaries are grouped by cluster analysis; second, the relationship between cluster affiliation and subsidiary strategy is tested using multinomial probit estimation. While the level of local responsiveness of a firm relates to the relative importance of national and international formal standards, the degree of vertical integration is associated with the application of internal company standards.

Keywords: FDI, firm-level data, standards, subsidiary strategy

Procedia PDF Downloads 287
23765 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 157
23764 Computer-Aided Diagnosis of Eyelid Skin Tumors Using Machine Learning

Authors: Ofira Zloto, Ofir Fogel, Eyal Klang

Abstract:

Purpose: The aim is to develop an automated framework based on machine learning to diagnose malignant eyelid skin tumors. Methods: This study utilized eyelid lesion images from Sheba Medical Center, a large tertiary center in Israel. Before training on this data, we pre-trained our models on the ISIC 2019 dataset, consisting of 25,332 images. The proprietary eyelid dataset was then used for fine-tuning. The dataset contained multiple images per patient, with the aim of classifying malignant lesions against their benign counterparts. Results: The analyzed dataset consisted of images representing both benign and malignant eyelid lesions: 373 images were sourced for the benign category and 186 for the malignant category. Based on the accuracy values, the model trained for 3 epochs with a learning rate of 0.0001 exhibited the best performance, achieving an accuracy of 0.748 with a standard deviation of 0.034. At a sensitivity of 69%, the model has a corresponding specificity of 82%. To further understand the decision-making process of our model, we employed heatmap visualization techniques, specifically Gradient-weighted Class Activation Mapping. Discussion: This study introduces a dependable model-aided diagnostic technology for assessing eyelid skin lesions. The model demonstrated accuracy comparable to human evaluation, effectively determining whether a lesion raises a high suspicion of malignancy or is benign. Such a model has the potential to alleviate the burden on the healthcare system, particularly benefiting rural areas and enhancing the efficiency of clinicians and overall healthcare.

Keywords: machine learning, eyelid skin tumors, decision-making process, heatmap visualization techniques

Procedia PDF Downloads 4
23763 Impact of a Virtual Reality-Training on Real-World Hockey Skill: An Intervention Trial

Authors: Matthew Buns

Abstract:

Training specificity is imperative for the successful performance of the elite athlete. Virtual reality (VR) has been successfully applied to a broad range of training domains. However, to date there is little research investigating the use of VR for sport training. The purpose of this study was to address the question of whether virtual reality (VR) training can improve real-world hockey shooting performance. Twenty-four volunteers were recruited and randomly assigned either to complete the virtual training intervention or to enter a control group with no training. Four primary types of data were collected: 1) participants’ experience with video games and hockey, 2) participants’ motivation toward video game use, 3) participants’ technical performance in real-world hockey, and 4) participants’ technical performance in virtual hockey. A one-way analysis of variance (ANOVA) indicated that the intervention group demonstrated significantly greater real-world hockey accuracy [F(1,24) = 15.43, p < .01, E.S. = 0.56] while shooting on goal than their control group counterparts [intervention M accuracy = 54.17%, SD = 12.38; control M accuracy = 46.76%, SD = 13.45]. A repeated-measures multivariate analysis of variance (MANOVA) indicated significantly higher outcome scores on real-world accuracy (35.42% versus 54.17%; ES = 1.52) and velocity (51.10 mph versus 65.50 mph; ES = 0.86) of hockey shooting on goal. This research supports the idea that virtual training is an effective tool for increasing real-world hockey skill.

Keywords: virtual training, hockey skills, video game, esports

Procedia PDF Downloads 147
23762 Weed Classification Using a Two-Dimensional Deep Convolutional Neural Network

Authors: Muhammad Ali Sarwar, Muhammad Farooq, Nayab Hassan, Hammad Hassan

Abstract:

Pakistan is highly recognized for its agriculture and is well known for producing substantial amounts of wheat, cotton, and sugarcane. However, some factors contribute to a decline in crop quality and a reduction in overall output, and one of the main factors is the presence of weeds and their late detection. Detection is currently a manual process that demands detailed inspection by the farmer; with timely detection of weeds, the farmer can save costs and increase overall production. The focus of this research is to identify and classify the four main types of weeds (Small-Flowered Cranesbill, Chickweed, Prickly Acacia, and Black-Grass) that are prevalent in our region’s major crops. In this work, we implemented three different deep learning techniques, YOLO-v5, Inception-v3, and a deep CNN, on the same dataset, and concluded that the deep convolutional neural network performed best for this classification, with an accuracy of 97.45%. Relative to the state of the art, our proposed approach yields 2% better results. We devised the architecture in an efficient way so that it can be used in real time.

Keywords: deep convolution networks, Yolo, machine learning, agriculture

Procedia PDF Downloads 119
23761 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases affecting humankind, exposing millions of people to serious health risks every year, and its diagnosis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling, implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first propose a Type-I fuzzy system, which achieved an accuracy of approximately 90.9%. In this system, the process of diagnosis faces vagueness and uncertainty in the final decision; the imprecise knowledge was therefore managed by using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic can diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate demonstrates that the Type-II fuzzy system performs better than Type-I and indicates the higher capability of the Type-II fuzzy system for modeling uncertainty.

Keywords: hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection

Procedia PDF Downloads 307
23760 Chromosomes Are Present in a Fixed Region on the Equatorial Plate Within the Interphase of Cell Division

Authors: Chunxiao Wu, Dongyun Jiang, Tao Jiang, Luxia Xu, Qian Xu, Meng Zhao, Qin Zhu, Zhigang Guo, Jinlan Pan, Suning Chen

Abstract:

The stability and evolution of human genetics depend on chromosomes and chromosome-chromosome interactions. We wish to understand the spatial location of chromosomes in dividing cells in order to understand chromosome-chromosome interactions and to further investigate the role of chromosomes and their impact on cell biological behavior. In this study, we explored the relative spatial positions of the chromosomes involved in t(9;22) and t(15;17) in B-ALL cells using three-dimensional DNA fluorescence in situ hybridization (3D-FISH). The results showed that these chromosomes maintained relatively stable spatial relationships. The relative stability of the spatial location of chromosomes in dividing cells may be relevant to disease.

Keywords: chromosome, human genetics, chromosome territory, 3D-FISH

Procedia PDF Downloads 49
23759 Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies

Authors: Reza Mohammadi, Mahmod R. Sahebi, Mehrnoosh Omati, Milad Vahidi

Abstract:

Classification of high-resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers within and outside the field of remote sensing. In this paper, a BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image acquired by RADARSAT-2 in C-band. We used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. The 90.95% overall classification accuracy of the proposed algorithm shows that it is comparable with state-of-the-art methods. In addition to the increase in classification accuracy, the proposed method decreases the undesirable speckle effect of SAR images.
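
The paper's exact feature set is not given in the abstract; the core BOVW mechanics it builds on — cluster low-level descriptors into a codebook, encode each image segment as a visual-word histogram, then classify the histograms — can be sketched as follows (all data and dimensions below are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bovw_histograms(descriptor_sets, n_words=64, codebook=None):
    """descriptor_sets: list of (n_i, d) low-level feature arrays, one per
    image segment. Returns (normalized word histograms, fitted codebook)."""
    if codebook is None:
        codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0)
        codebook.fit(np.vstack(descriptor_sets))
    hists = np.array([
        np.bincount(codebook.predict(d), minlength=n_words) / len(d)
        for d in descriptor_sets])
    return hists, codebook

# Hypothetical pixel-level PolSAR features per segment, with class labels y
rng = np.random.default_rng(0)
segments = [rng.random((200, 9)) for _ in range(60)]  # e.g. 9 polarimetric features
y = rng.integers(0, 4, 60)                            # land-cover classes

H, cb = bovw_histograms(segments)
clf = SVC(kernel="rbf").fit(H, y)                     # classify word histograms
```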

Keywords: Bag of Visual Words (BOVW), classification, feature extraction, land cover management, Polarimetric Synthetic Aperture Radar (PolSAR)

Procedia PDF Downloads 213