Search results for: automatic target recognition
4141 Performances of the Double-Crystal Setup at CERN SPS Accelerator for Physics beyond Colliders Experiments
Authors: Andrii Natochii
Abstract:
We present recent results from the CERN accelerator facilities obtained in the framework of the UA9 Collaboration. The UA9 experiment investigates how a tiny bent silicon crystal (a few millimeters long) can be used for various high-energy physics applications. Due to the huge electrostatic field (tens of GV/cm) between crystalline planes, charged particles impinging on the crystal have a certain probability of being trapped in the channeling regime. This makes it possible to steer a high-intensity, high-momentum beam by bending the crystal: channeled particles follow the crystal curvature and are deflected by a well-defined angle (from tens of microradians at LHC energies to a few milliradians at SPS energies). The measurements at the SPS, performed in 2017 and 2018, confirmed that protons deflected by the first crystal, inserted in the primary beam halo, can be caught and channeled by the second crystal. In this configuration, we measure the single-pass deflection efficiency of the second crystal and demonstrate the feasibility of a fixed-target experiment at the SPS accelerator (and at the LHC in the future).
Keywords: channeling, double-crystal setup, fixed target experiment, Timepix detector
Procedia PDF Downloads 151
4140 A Deep Learning Approach To Calculate Cardiothoracic Ratio From Chest Radiographs
Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare
Abstract:
The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of CTR. A CTR of 0.55 was used as a cut-off to categorize the condition as cardiomegaly present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR. The model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of CTR therefore has significant potential for use in clinical workflows.
Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio
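A minimal sketch of the ratio described above: once a segmentation network has produced binary heart and thoracic masks, the CTR is the widest transverse cardiac extent divided by the widest internal thoracic extent, with the stated 0.55 cut-off. This is not the authors' exact pipeline; the mask shapes and dummy values are illustrative assumptions.

```python
# Illustrative sketch (not the authors' exact pipeline): computing the
# cardiothoracic ratio (CTR) from binary segmentation masks such as those
# an Attention U-Net might output. Mask contents below are dummy data.
import numpy as np

def widest_horizontal_extent(mask: np.ndarray) -> int:
    """Widest left-to-right extent (pixels) of the foreground in a binary mask."""
    cols = np.where(mask.any(axis=0))[0]  # columns containing any foreground pixel
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """CTR = maximal transverse cardiac diameter / maximal internal thoracic diameter."""
    heart_width = widest_horizontal_extent(heart_mask)
    thorax_width = widest_horizontal_extent(thorax_mask)
    return heart_width / thorax_width if thorax_width else float("nan")

# Example with dummy masks; a CTR > 0.55 would be flagged as possible cardiomegaly.
heart = np.zeros((256, 256), dtype=bool); heart[100:180, 90:170] = True
thorax = np.zeros((256, 256), dtype=bool); thorax[40:230, 30:230] = True
ctr = cardiothoracic_ratio(heart, thorax)
print(f"CTR = {ctr:.2f}", "-> cardiomegaly" if ctr > 0.55 else "-> normal")
```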
Procedia PDF Downloads 98
4139 Electrochemical Modification of Boron Doped Carbon Nanowall Electrodes for Biosensing Purposes
Authors: M. Kowalski, M. Brodowski, K. Dziabowska, E. Czaczyk, W. Bialobrzeska, N. Malinowska, S. Zoledowska, R. Bogdanowicz, D. Nidzworski
Abstract:
Boron-doped carbon nanowall (BCNW) electrodes have recently attracted much interest among scientists. BCNWs are good candidates for biosensing purposes, as they possess interesting electrochemical characteristics such as a wide potential window and a low separation between redox peaks. Moreover, in terms of technical parameters, they are mechanically resistant and very tough. The microwave plasma-enhanced chemical vapor deposition (MPECVD) production process allows boron to build into the structure of the diamond being formed. The result is the formation of flat, long structures with sharp ends. The potential of these electrodes was evaluated in the biosensing field. A procedure for modifying simple carbon electrodes with antibodies was adapted to BCNW for specific antigen recognition. Surface protein D, derived from the pathogenic bacterium H. influenzae, was chosen as the target analyte. The electrode was first modified with an aminobenzoic acid diazonium salt by electrografting (electrochemical reduction), then anti-protein D antibodies were linked via 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride/N-hydroxysuccinimide (EDC/NHS) chemistry, and free sites were blocked with BSA. Cyclic voltammetry measurements confirmed the proper electrode modification. Electrochemical impedance spectroscopy records indicated protein detection. The sensor was shown to detect protein D at femtogram levels. This work was supported by the National Centre for Research and Development (NCBR) TECHMATSTRATEG 1/347324/12/NCBR/2017.
Keywords: anti-protein D antibodies, boron-doped carbon nanowall, impedance spectroscopy, Haemophilus influenzae
Procedia PDF Downloads 173
4138 Next Generation Sequencing Analysis of Circulating MiRNAs in Rheumatoid Arthritis and Osteoarthritis
Authors: Khalda Amr, Noha Eltaweel, Sherif Ismail, Hala Raslan
Abstract:
Introduction: Osteoarthritis is the most common form of arthritis and involves the wearing away of the cartilage that caps the bones in the joints, whereas rheumatoid arthritis is an autoimmune disease in which the immune system attacks the joints, beginning with the lining of the joints. In this study, we aimed to identify the top deregulated miRNAs that might contribute to the pathogenesis of both diseases. Methods: Eight cases were recruited in this study: 4 rheumatoid arthritis (RA) patients, 2 osteoarthritis (OA) patients, and 2 healthy controls. Total RNA was isolated from plasma and subjected to miRNA profiling by NGS. Sequencing libraries were constructed and generated using the NEBNext Ultra small RNA Sample Prep Kit for Illumina (NEB, USA), according to the manufacturer's instructions. The quality of the samples was checked using FastQC and MultiQC. Results were compared for RA vs. controls and OA vs. controls. Target gene prediction and functional annotation of the deregulated miRNAs were performed using MIENTURNET. The top deregulated miRNAs in each disease were selected for further validation using qRT-PCR. Results: The average number of sequencing reads per sample exceeded 2.2 million, of which approximately 57% were mapped to the human reference genome. The top DEMs in RA vs. controls were miR-6724-5p, miR-1469, and miR-194-3p (up), and miR-1468-5p and miR-486-3p (down). In comparison, the top DEMs in OA vs. controls were miR-1908-3p, miR-122b-3p, and miR-3960 (up), and miR-1468-5p and miR-15b-3p (down). The functional enrichment of the selected top deregulated miRNAs revealed the highly enriched KEGG pathways and GO terms. Six of the deregulated miRNAs (miR-15b, -128, -194, -328, -542 and -3180) had multiple target genes in the RA pathway, so they are more likely to affect RA pathogenesis. Conclusion: Six of the studied deregulated miRNAs (miR-15b, -128, -194, -328, -542 and -3180) might be highly involved in disease pathogenesis. Further functional studies are crucial to assess their functions and actual target genes.
Keywords: next generation sequencing, miRNAs, rheumatoid arthritis, osteoarthritis
Procedia PDF Downloads 97
4137 An Improved K-Means Algorithm for Gene Expression Data Clustering
Authors: Billel Kenidra, Mohamed Benmohammed
Abstract:
Data mining techniques used in the field of clustering are a subject of active research and assist in biological pattern recognition and the extraction of new knowledge from raw data. Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice of centers may lead to a local optimum that is quite inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show that its efficiency is significantly improved.
Keywords: microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization
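Since the abstract does not detail the authors' initialization strategy, the following is only a sketch of one common way to seed K-Means centers deliberately rather than at random: a farthest-point (k-means++-style) seeding passed to scikit-learn's KMeans. The data and parameter choices are illustrative assumptions.

```python
# Sketch of a deliberate centre-initialisation for K-Means (NOT necessarily the
# authors' strategy): farthest-point seeding, then standard K-Means refinement.
import numpy as np
from sklearn.cluster import KMeans

def farthest_point_init(X: np.ndarray, k: int, seed=0) -> np.ndarray:
    """Pick k initial centres: one at random, then repeatedly the point farthest from those chosen."""
    rng = np.random.default_rng(seed)
    centres = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # squared distance of every point to its nearest already-chosen centre
        d2 = np.min(((X[:, None, :] - np.array(centres)[None, :, :]) ** 2).sum(-1), axis=1)
        centres.append(X[np.argmax(d2)])
    return np.array(centres)

# Toy "expression profiles" (rows = genes, columns = conditions) as placeholder data.
X = np.random.default_rng(0).normal(size=(300, 20))
init = farthest_point_init(X, k=5)
model = KMeans(n_clusters=5, init=init, n_init=1).fit(X)
print("inertia with seeded centres:", round(model.inertia_, 2))
```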
Procedia PDF Downloads 190
4136 Effectiveness of a Peer-Mediated Intervention on Writing Skills in Students with Autism Spectrum Disorder in the Inclusive Classroom
Authors: Siddiq Ahmed
Abstract:
The current study aimed to investigate the effectiveness of a Peer-Mediated Intervention (PMI) on the writing skills of a student with autism spectrum disorder in an inclusive classroom. The participants in this study were two students, one acting as a tutor and the other as a tutee who had been diagnosed with autism spectrum disorder (ASD). The target participant struggled with writing skills and was paired with a student with high academic outcomes. The tutor was willing to act as a tutor for his peer and was trained in how to assist his peer and how to identify and correct his peer's writing mistakes. A multiple-baseline design across behaviors was implemented to monitor the student's progress in writing skills. The results of the present study showed that PMI yielded significant improvements in academic achievement for the target student. This study suggests that further studies should replicate the current study with an intensive focus on other academic skills, such as reading comprehension, writing social stories, and math.
Keywords: peer tutoring, writing skills, autism, inclusion
Procedia PDF Downloads 108
4135 Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm. In this context, two algorithms are commonly used: the k-means for clustering numeric datasets and the k-modes for categorical datasets. A frequently encountered problem in data mining applications is the clustering of categorical datasets, which are highly relevant in practice. One main issue in clustering categorical values is how to transform the categorical attributes into numeric measures so that the k-means algorithm can be applied directly instead of the k-modes. In this paper, we propose to experiment with an approach that addresses this issue by transforming the categorical values into numeric ones using the relative frequency of each modality in the attributes. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The obtained results show that our proposed method outperforms the binary method in all cases.
Keywords: clustering, unsupervised learning, pattern recognition, categorical datasets, knowledge discovery, k-means
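A small sketch of the encoding the abstract describes: each categorical value is replaced by the relative frequency of that modality within its attribute, after which standard k-means can be applied. The toy data, column names, and cluster count are illustrative assumptions, not the paper's datasets.

```python
# Sketch: relative-frequency encoding of categorical attributes, then k-means.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "colour": ["red", "blue", "red", "green", "blue", "red"],
    "shape":  ["round", "round", "square", "square", "round", "round"],
})

# Replace every modality by its relative frequency within its own column.
numeric = df.apply(lambda col: col.map(col.value_counts(normalize=True)))
print(numeric)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(numeric)
print("cluster labels:", labels)
```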
Procedia PDF Downloads 259
4134 Investigation of Interlayer Shear Effects in Asphalt Overlay on Existing Rigid Airfield Pavement Using Digital Image Correlation
Authors: Yuechao Lei, Lei Zhang
Abstract:
Interface shear between an asphalt overlay and an existing rigid airport pavement occurs due to differences in the mechanical properties of the materials subjected to aircraft loading. Interlayer contact directly influences the mechanical characteristics of the asphalt overlay. However, accurately obtaining the effective interlayer relative displacement with the displacement sensors of existing loading apparatus remains challenging. This study aims to utilize digital image correlation technology to enhance the accuracy of interfacial contact parameters by obtaining effective interlayer relative displacements. Composite structure specimens were prepared, and fixtures for interlayer shear tests were designed and fabricated. Subsequently, a digital image recognition scheme for the required markers was designed and optimized. Effective interlayer relative displacement values were obtained by image recognition and calculation of surface markers on the specimens. Finite element simulations validated the mechanical response of the composite specimens under interlayer shearing. Results indicated that an optimized marking approach, using a wall mending agent for surface application together with color coding, enhanced the image recognition quality of the marking points on the specimen surface. Further image extraction provided effective interlayer relative displacement values during interlayer shear, thereby improving the accuracy of the interface contact parameters. For composite structure specimens using Styrene-Butadiene-Styrene (SBS) modified asphalt as the tack coat, the corresponding maximum interlayer shear strength was 0.6 MPa, and the fracture energy was 2917 J/m². This research provides valuable insights for investigating the impact of interlayer contact in composite pavement structures on the mechanical characteristics of asphalt overlays.
Keywords: interlayer contact, effective relative displacement, digital image correlation technology, composite pavement structure, asphalt overlay
Procedia PDF Downloads 48
4133 Design and Implementation of a Bluetooth-Based Misplaced Object Finder Using DFRobot Arduino Interfaced with Led and Buzzer
Authors: Bright Emeni
Abstract:
The project is a system that allows users to locate their misplaced or lost devices using Bluetooth technology. It utilizes the DFRobot Beetle BLE Arduino microcontroller as its main component for communication and control. By interfacing it with an LED and a buzzer, the system provides visual and auditory signals to assist in locating the target device. The search process can be initiated through an Android application, by which the system creates a Bluetooth connection between the microcontroller and the target device, permitting the exchange of signals for tracking purposes. When the device is within range, the LED indicator illuminates and the buzzer produces audible alerts, guiding the user to the device's location. The application also provides an estimated distance to the object based on Bluetooth signal strength. The project's goal is to offer a practical and efficient solution for finding misplaced devices, leveraging the capabilities of Bluetooth technology and microcontroller-based control systems.
Keywords: Bluetooth finder, object finder, Bluetooth tracking, tracker
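As a rough illustration of the distance-from-signal-strength step mentioned above, the log-distance path-loss model is often used to turn an RSSI reading into an approximate range. The constants below (measured power at 1 m, path-loss exponent) are assumptions, not values from this project.

```python
# Sketch of RSSI-to-distance estimation with the log-distance path-loss model.
def estimate_distance(rssi_dbm: float, measured_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Rough distance in metres: d = 10 ** ((P_1m - RSSI) / (10 * n))."""
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10.0 * n))

for rssi in (-55, -65, -75, -85):
    print(f"RSSI {rssi} dBm -> ~{estimate_distance(rssi):.1f} m")
```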
Procedia PDF Downloads 65
4132 Geographic Information System and Dynamic Segmentation of Very High Resolution Images for the Semi-Automatic Extraction of Sandy Accumulation
Authors: A. Bensaid, T. Mostephaoui, R. Nedjai
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mecheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of ancient dune cordons based on the numerical processing of LANDSAT images (5, 7, and 8) of three scenes, 197/37, 198/36 and 198/37, for the year 2020. As a second step, we explore the use of geospatial techniques to monitor the progression of sand dunes on developed (urban) land as well as the formation of sandy accumulations (dunes, dune fields, nebkhas, barchans, etc.). For this purpose, the study made use of a semi-automatic processing method for the dynamic segmentation of images with very high spatial resolution (SENTINEL-2 and Google Earth). The study demonstrates that, under current conditions, urban land is located in sand transit zones mobilized by winds from the northwest and southwest directions.
Keywords: land development, GIS, segmentation, remote sensing
Procedia PDF Downloads 155
4131 Decoding the Construction of Identity and Struggle for Self-Assertion in Toni Morrison and Selected Indian Authors
Authors: Madhuri Goswami
Abstract:
The matrix of power establishes the hegemonic dominance and supremacy of one group by exercising repression and relegation upon the other. However, the injustice done to any race, ethnicity, or caste has instigated protest and resistance through various modes: social campaigns, political movements, literary expression, and so on. Consequently, the search for identity, the means of claiming it, and the striving for recognition have evolved as persistent phenomena throughout the world. In the discourse of protest and minority literature, these two discourses, African American and Indian Dalit, surprisingly share wrath and anger, hope and aspiration, and the quest for identity and struggle for self-assertion. African Americans and Indian Dalits are two geographically and culturally distant communities that nevertheless stand together on a single platform. This paper seeks to comprehend the form and investigate the formation of identity in general, and in the literary work of Toni Morrison and Indian Dalit writing in particular, that is, Black identity and Dalit identity. The study considers two types of identity, namely individual or self identity and social or collective identity, in the literary province of these marginalized literatures. Morrison's work shows that self-identity is not merely a reflection of an inner essence; it is constructed through social circumstances and relations. Likewise, Dalit writings have a fair record of the discovery of selfhood and the formation of identity, which connects to the realization of self-assertion and of the worth of their culture among Dalit writers. Bama, Pawar, Limbale, Pawde, and Kamble investigate their true selves concealed amid societal alienation. The study finds that the struggle for recognition is, in fact, the striving to become the definer instead of just being defined, and this striving eventually leads to introspection. To conclude, Morrison as well as the Indian marginalized authors, despite being set quite distant from each other, communicate the relation between individual and community in the context of self-consciousness, self-identification, and (self-)introspection. This research opens scope for further research to identify similar phenomena and trace analogies in other world literatures.
Keywords: identity, introspection, self-access, struggle for recognition
Procedia PDF Downloads 154
4130 Back Extraction and Isolation of Alkaloids from Ionic Liquid-Based Extracts
Authors: Rozalina Keremedchieva, Ivan Svinyarov, Milen G. Bogdanov
Abstract:
In continuation of a research project on the application of ionic liquids (ILs) as an alternative to the conventional organic solvents used in the recovery of value-added chemicals of industrial interest [1-3], we developed a procedure for the back extraction and isolation in pure form of the biologically active alkaloid glaucine from IL-based aqueous solutions. One of the approaches applied was the formation of two-phase systems (IL-ATPS) by the addition of kosmotropic salts to the plant extract. The ability of the salts (Na2CO3, MgSO4, (NH4)2SO4, NaH2PO4) to induce the formation of two-phase systems and the influence of the pH value on the partition coefficients of glaucine were comprehensively studied. As a result, it was found that the target alkaloid preferentially partitions into the IL-rich phase regardless of the pH value of the medium, which shows the inapplicability of this approach for isolating the target compound from the ionic liquid. However, the results obtained can be used as a platform for the development of an analytical method for the quantitative determination of low concentrations of glaucine in biological samples. We further examined the ability of a series of organic solvents, such as diethyl ether, tert-butyl methyl ether, ethyl acetate, butyl acetate, toluene, chloroform, and dichloromethane, to recover glaucine from the raw IL-based aqueous extracts. Optimal conditions for the quantitative extraction of glaucine into chloroform were found, from which, after removal of the solvent and subsequent recrystallization from ethanol, the target compound was isolated in high purity as a hydrobromide salt, the form in which it enters various medicines as an active ingredient.
Keywords: natural products, ionic liquids, solid-liquid extraction, liquid-liquid extraction
Procedia PDF Downloads 477
4129 Development of an Erodible Matrix Drug Delivery Platform for Controlled Delivery of Non-Steroidal Anti-Inflammatory Drugs Using Melt Granulation Process
Authors: A. Hilsana, Vinay U. Rao, M. Sudhakar
Abstract:
Even though a number of non-steroidal anti-inflammatory drugs (NSAIDs) with different chemistries are available, they share a common solubility characteristic: they are relatively more soluble in an alkaline environment and practically insoluble in an acidic environment. This work deals with developing a wax matrix drug delivery platform for the controlled delivery of three model NSAIDs, diclofenac sodium (DNa), mefenamic acid (MA) and naproxen (NPX), using the melt granulation technique. The aim of developing the platform was to gain a general understanding of how an erodible matrix system modulates the rate and extent of drug delivery and how it can be optimized to give a delivery system that releases the drug as per a common target product profile (TPP). Commonly used waxes, such as cetostearyl alcohol and stearic acid, were used singly and in combination to achieve a TPP of 15 to 35% release in 1 hour and not less than 80% (Q) in 24 hours. A full factorial design of experiments was followed for optimization of the formulation.
Keywords: NSAIDs, controlled delivery, target product profile, melt granulation
Procedia PDF Downloads 334
4128 Analysis and Detection of Facial Expressions in Autism Spectrum Disorder People Using Machine Learning
Authors: Muhammad Maisam Abbas, Salman Tariq, Usama Riaz, Muhammad Tanveer, Humaira Abdul Ghafoor
Abstract:
Autism Spectrum Disorder (ASD) refers to a developmental disorder that impairs an individual's communication and interaction abilities. Such individuals find it difficult to read facial expressions while communicating or interacting. Facial Expression Recognition (FER) is a method of classifying basic human expressions, i.e., happiness, fear, surprise, sadness, disgust, neutral, and anger, from static and dynamic sources. This paper conducts a comprehensive comparison and proposes an optimal method for a continuing research project: a system that can assist people who have Autism Spectrum Disorder (ASD) in recognizing facial expressions. The comparison was conducted on three supervised learning algorithms: EigenFace, FisherFace, and LBPH. The JAFFE, CK+, and TFEID (I&II) datasets were used to train and test the algorithms. The results were then evaluated based on variance, standard deviation, and accuracy. The experiments showed that FisherFace has the highest accuracy for all datasets and is considered the best algorithm to be implemented in our system.
Keywords: autism spectrum disorder, ASD, EigenFace, facial expression recognition, FisherFace, local binary pattern histogram, LBPH
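A minimal sketch of one of the compared classifiers (LBPH), using the face module from opencv-contrib-python. The dummy random images and two-class labels below are placeholders; in the study, JAFFE, CK+, or TFEID face crops would be used instead, and the LBPH parameters shown are default-like assumptions.

```python
# Sketch: training and querying an LBPH recognizer on dummy grayscale "faces".
import cv2
import numpy as np

rng = np.random.default_rng(0)
# 20 dummy 64x64 grayscale images alternating between expression classes 0 and 1.
train_imgs = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
train_labels = np.array([i % 2 for i in range(20)], dtype=np.int32)

recognizer = cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8, grid_x=8, grid_y=8)
recognizer.train(train_imgs, train_labels)

test_img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
label, confidence = recognizer.predict(test_img)  # lower confidence value = closer match
print("predicted expression class:", label, "confidence:", round(confidence, 1))
```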
Procedia PDF Downloads 174
4127 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method
Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter
Abstract:
This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Some previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants: 21 young adults (Mage: 19.73), 20 young-old adults (Mage: 67.22), and 17 old-old adults (Mage: 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with different spatial configurations and orientations (horizontal, vertical, and oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the most familiar pair in 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old adult group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase for all groups. The main difference was that older participants focused more often on empty cells than younger participants, likely due to a decline in the ability to ignore irrelevant information, resulting in a decrease in statistical learning performance.
Keywords: aging, eye tracking, implicit learning, visual statistical learning
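A sketch of the behavioural analysis described above: a one-sample t-test of two-alternative forced-choice accuracy against the 0.5 chance level. The scores are fabricated for illustration only, not the study's data, and the one-sided test assumes a recent SciPy version.

```python
# Sketch: testing whether group accuracy on the 2AFC task exceeds chance (0.5).
import numpy as np
from scipy import stats

def above_chance(scores, chance=0.5):
    # one-sided alternative requires scipy >= 1.6
    t, p = stats.ttest_1samp(scores, popmean=chance, alternative="greater")
    return t, p

young = np.array([0.71, 0.65, 0.58, 0.77, 0.62, 0.69])    # proportion correct (illustrative)
old_old = np.array([0.52, 0.48, 0.55, 0.50, 0.47, 0.51])

for name, grp in [("young adults", young), ("old-old adults", old_old)]:
    t, p = above_chance(grp)
    verdict = "(above chance)" if p < 0.05 else "(not above chance)"
    print(f"{name}: t = {t:.2f}, p = {p:.3f} {verdict}")
```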
Procedia PDF Downloads 77
4126 Visual Detection of Escherichia coli (E. coli) through Formation of Beads Aggregation in Capillary Tube by Rolling Circle Amplification
Authors: Bo Ram Choi, Ji Su Kim, Juyeon Cho, Hyukjin Lee
Abstract:
Food contaminated by bacteria such as E. coli causes food poisoning, which affects many patients worldwide annually. We introduce an application of rolling circle amplification (RCA) as a versatile biosensor and have developed a diagnostic platform composed of a capillary tube and microbeads for the rapid and easy detection of Escherichia coli (E. coli). When specific mRNA of E. coli is extracted from the cell lysate, rolling circle amplification (RCA) of the DNA template can be achieved and visualized by bead aggregation in the capillary tube. In contrast, if there is no bacterial pathogen in the sample, no bead aggregation can be seen. This assay makes it possible to detect the target gene visually without specialized equipment. It may lead to the development of a genetic kit for point-of-care testing (POCT) that can detect target genes using microbeads.
Keywords: rolling circle amplification (RCA), Escherichia coli (E. coli), point of care testing (POCT), beads aggregation, capillary tube
Procedia PDF Downloads 365
4125 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target’s Personal Life Attributes
Authors: David S. Byrne
Abstract:
The purpose of this paper is to understand how phone record analysis can enable the identification of subjects in communication with the target of a terrorist plot. This study also sought to understand the advantages of implementing simulations to develop the skills of future intelligence analysts and so enhance national security. Through the examination of phone reports, which in essence consist of the call traffic of incoming and outgoing numbers (and not of listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and the activities planned. Through temporal and frequency analysis, conclusions were drawn that offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in the accurate identification of the users of the phones in contact with the target. Often investigators rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. Thus, this paper poses two research questions: how effective are freely available web sources of information at determining the actual identity of callers? Secondly, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology for this research consisted of the analysis of the call detail records of the author's personal phone activity spanning the period of a year, combined with the hypothetical premise that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and understand how his personal attributes can further paint a picture of the target's intentions. The results of the study were interesting: nearly 80% of the calls were identified, with over a 75% accuracy rating, via data mining of open sources. The suspected terrorist's inner circle was recognized, including relatives and potential collaborators, as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research shows the benefits of cellphone analysis without more intrusive and time-consuming methodologies, though it may be instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building the skills of future intelligence analysts through phone record analysis via simulations; hands-on learning in this case study emphasizes the development of the competencies necessary to improve investigations overall.
Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations
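As a rough illustration of the frequency and temporal analysis described above, a call detail record (CDR) table can be grouped by contact and by time of day. The column names and values below are invented for illustration; real records would come from a carrier export.

```python
# Sketch: frequency and temporal analysis of a toy call detail record (CDR) table.
import pandas as pd

cdr = pd.DataFrame({
    "other_number": ["555-0101", "555-0102", "555-0101", "555-0103", "555-0101", "555-0102"],
    "direction":    ["out", "in", "out", "out", "in", "out"],
    "timestamp":    pd.to_datetime(["2023-01-02 08:10", "2023-01-02 21:40", "2023-01-05 08:05",
                                    "2023-01-07 13:00", "2023-01-09 08:20", "2023-01-10 22:15"]),
    "duration_s":   [120, 500, 90, 30, 240, 610],
})

# Frequency analysis: who is contacted most often, and for how long in total?
by_contact = cdr.groupby("other_number").agg(
    calls=("other_number", "size"),
    total_minutes=("duration_s", lambda s: s.sum() / 60),
)
print(by_contact.sort_values("calls", ascending=False))

# Temporal analysis: at what hours and on which weekdays do contacts occur?
cdr["hour"] = cdr["timestamp"].dt.hour
cdr["weekday"] = cdr["timestamp"].dt.day_name()
print(cdr.groupby("hour").size())
```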
Procedia PDF Downloads 14
4124 Meaningful Habit for EFL Learners
Authors: Ana Maghfiroh
Abstract:
Learning a foreign language requires great effort from learners to make their language ability grow better day by day. They also need support from everyone around them, including teachers and friends, as well as activities that encourage them to speak the language. When those activities are developed into habits that are practiced regularly, they help improve the students' language competence. This was a qualitative study which aimed to find out and describe the activities implemented in Pesantren Al Mawaddah, Ponorogo, in order to teach the students a foreign language. In collecting the data, the researcher used interviews, questionnaires, and documentation. The study found that Pesantren Al Mawaddah had successfully built the language habit in its students to speak the target language. For more than 15 hours a day, students were required to speak a foreign language, Arabic or English, in turn. The aim was to habituate the students to keeping in touch with the target language. The habit was developed through daily language activities, such as dawn vocabulary giving, dictionary handling, daily language use, speech training and language intensive courses, daily language input, and night vocabulary memorizing. That habit then developed the students' awareness of the language learned as well as promoted their language mastery.
Keywords: habit, communicative competence, daily language activities, Pesantren
Procedia PDF Downloads 539
4123 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI
Authors: Ananya Ananya, Karthik Rao
Abstract:
Accurate segmentation of knee cartilage in 3-D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone and tibial bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Score Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, the reference being the manual segmentation.
Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net
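A minimal sketch of the evaluation metric reported above: the volumetric Dice Score Coefficient, DSC = 2|A ∩ B| / (|A| + |B|), computed per label between the automatic and manual label volumes. The dummy volumes and label numbering are illustrative assumptions.

```python
# Sketch: per-label volumetric Dice overlap between two 3-D label volumes.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, label: int) -> float:
    """Dice overlap for one label (e.g. femoral cartilage)."""
    a, b = (pred == label), (ref == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Dummy 3-D volumes with labels 0..4 (background, cartilages, bones) standing in for real data.
rng = np.random.default_rng(0)
pred = rng.integers(0, 5, size=(16, 64, 64))
ref = pred.copy()
ref[rng.random(ref.shape) < 0.05] = 0   # perturb 5% of voxels to mimic disagreement
print("DSC for label 1 (e.g. femoral cartilage):", round(dice(pred, ref, label=1), 3))
```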
Procedia PDF Downloads 261
4122 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study
Authors: Cecile Laval, Harriet Lowe
Abstract:
Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model, which establishes the initial innate strategies used by second language learners to connect the form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learners' cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants' default strategies when processing French linguistic input and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to instructional treatment. One group received the full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (French past tense imperfective aspect). The second group received the Processing Instruction treatment without the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (French past tense imperfective aspect, French Subjunctive used for the expression of doubt, and the French causative construction with Faire) were administered with the eye-tracker. The eye-tracking data showed a positive change in learners' processing of the French target features after instruction, with improvement in the interpretation of the three linguistic features under investigation. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants made an improvement in the secondary target item (French Subjunctive used for the expression of doubt), and 37.5% of participants made an improvement in the cumulative target feature (French causative construction with Faire). Statistically, there was no significant difference between the pre-test and post-test scores for the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogeneously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity to study the unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrate changes in participants' processing strategies. Gaze plots from pre- and post-tests display participants' fixation points changing from focusing on content words to focusing on the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences for both the primary and secondary target features. This paper will present the research methodology, design, and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction. It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for the teaching of grammar in a second language.
Keywords: eye-tracking, language teaching, processing instruction, second language acquisition
Procedia PDF Downloads 280
4121 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect's intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect's behavior. The network of cameras is represented by a directed graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans "Performance Measurement System" (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of 'soft interventions', inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect's movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect's current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect's intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect's intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect's intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach based on reinforcement learning, in order to relax some of the current limiting assumptions.
Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights
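A minimal sketch of the Bayesian update described above: a posterior over a fixed list of candidate targets is renormalised after each camera detection of the suspect. The target names, detection nodes, and likelihood table are fabricated for illustration; in the real system the likelihoods would come from travel times on the PeMS-based road graph.

```python
# Sketch: sequential Bayesian update of target probabilities from detections.
import numpy as np

targets = ["stadium", "airport", "port"]
posterior = np.full(len(targets), 1.0 / len(targets))  # uniform prior

def likelihood(detection_node: str, target: str) -> float:
    """Placeholder likelihood: how probable a detection at this node is under each target hypothesis."""
    table = {("I-110 N", "stadium"): 0.7, ("I-110 N", "airport"): 0.2, ("I-110 N", "port"): 0.1,
             ("I-105 W", "stadium"): 0.2, ("I-105 W", "airport"): 0.7, ("I-105 W", "port"): 0.1}
    return table.get((detection_node, target), 1.0 / len(targets))

for node in ["I-110 N", "I-105 W", "I-110 N"]:          # stream of camera detections
    posterior *= [likelihood(node, t) for t in targets]
    posterior /= posterior.sum()                        # Bayes rule renormalisation
    print(node, {t: round(p, 2) for t, p in zip(targets, posterior)})
```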
Procedia PDF Downloads 115
4120 Identifying the Structural Components of Old Buildings from Floor Plans
Authors: Shi-Yu Xu
Abstract:
The top three risk factors that have contributed to building collapses during past earthquake events in Taiwan are: "irregular floor plans or elevations," "insufficient columns in single-bay buildings," and the "weak-story problem." Fortunately, these unsound structural characteristics can be directly identified from the floor plans. However, due to the vast number of old buildings, conducting manual inspections to identify these compromised structural features in all existing structures would be time-consuming and prone to human errors. This study aims to develop an algorithm that utilizes artificial intelligence techniques to automatically pinpoint the structural components within a building's floor plans. The obtained spatial information will be utilized to construct a digital structural model of the building. This information, particularly regarding the distribution of columns in the floor plan, can then be used to conduct preliminary seismic assessments of the building. The study employs various image processing and pattern recognition techniques to enhance detection efficiency and accuracy. The study enables a large-scale evaluation of structural vulnerability for numerous old buildings, providing ample time to arrange for structural retrofitting in those buildings that are at risk of significant damage or collapse during earthquakes.
Keywords: structural vulnerability detection, object recognition, seismic capacity assessment, old buildings, artificial intelligence
Procedia PDF Downloads 89
4119 Integrated Target Tracking and Control for Automated Car-Following of Truck Platforms
Authors: Fadwa Alaskar, Fang-Chieh Chou, Carlos Flores, Xiao-Yun Lu, Alexandre M. Bayen
Abstract:
This article proposes a perception model for enhancing the accuracy and stability of car-following control of a longitudinally automated truck. We applied a fusion-based tracking algorithm to measurements of the single preceding vehicle needed for car-following control. This algorithm fuses two types of data, radar and LiDAR data, to obtain a more accurate and robust longitudinal perception of the subject vehicle in various weather conditions. The filter's resulting signals are fed to the gap control algorithm at every tracking loop, composed of a high-level gap controller and a lower-level acceleration tracking system. Several highway tests have been performed with two trucks. The tests show accurate and fast tracking of the target, which positively impacts the gap control loop. The experiments also show the fulfilment of control design requirements, such as tracking of fast speed variations and robust time-gap following.
Keywords: object tracking, perception, sensor fusion, adaptive cruise control, cooperative adaptive cruise control
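A minimal sketch of the kind of measurement fusion and gap control described above: radar and LiDAR range measurements of the preceding truck are combined by inverse-variance weighting and passed to a simple proportional gap controller. The noise variances, time gap, and gain are assumptions for illustration, not the paper's tuned parameters or exact filter.

```python
# Sketch: inverse-variance fusion of two range measurements, then a proportional gap controller.
def fuse(radar_range_m: float, lidar_range_m: float, var_radar: float = 0.25, var_lidar: float = 0.04) -> float:
    """Fuse radar and LiDAR ranges (metres), weighting each by its inverse noise variance."""
    w_r, w_l = 1.0 / var_radar, 1.0 / var_lidar
    return (w_r * radar_range_m + w_l * lidar_range_m) / (w_r + w_l)

def gap_control(fused_range_m: float, ego_speed_mps: float,
                time_gap_s: float = 1.2, standstill_m: float = 5.0, kp: float = 0.4) -> float:
    """High-level gap controller: desired acceleration from the gap error (constant time-gap policy)."""
    desired_gap = standstill_m + time_gap_s * ego_speed_mps
    return kp * (fused_range_m - desired_gap)

fused = fuse(radar_range_m=32.1, lidar_range_m=31.6)
accel_cmd = gap_control(fused, ego_speed_mps=20.0)
print(f"fused range: {fused:.2f} m, acceleration command: {accel_cmd:.2f} m/s^2")
```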
Procedia PDF Downloads 229
4118 Co-Design of Accessible Speech Recognition for Users with Dysarthric Speech
Authors: Elizabeth Howarth, Dawn Green, Sean Connolly, Geena Vabulas, Sara Smolley
Abstract:
Through the EU Horizon 2020 Nuvoic Project, the project team recruited 70 individuals in the UK and Ireland to test the Voiceitt speech recognition app and provide user feedback to developers. The app is designed for people with dysarthric speech, to support communication with unfamiliar people and access to speech-driven technologies such as smart home equipment and smart assistants. Participants with atypical speech, due to a range of conditions such as cerebral palsy, acquired brain injury, Down syndrome, stroke and hearing impairment, were recruited, primarily through organisations supporting disabled people. Most had physical or learning disabilities in addition to dysarthric speech. The project team worked with individuals, their families and local support teams, to provide access to the app, including through additional assistive technologies where needed. Testing was user-led, with participants asked to identify and test use cases most relevant to their daily lives over a period of three months or more. Ongoing technical support and training were provided remotely and in-person throughout the testing period. Structured interviews were used to collect feedback on users' experiences, with delivery adapted to individuals' needs and preferences. Informal feedback was collected through ongoing contact between participants, their families and support teams and the project team. Focus groups were held to collect feedback on specific design proposals. User feedback shared with developers has led to improvements to the user interface and functionality, including faster voice training, simplified navigation, the introduction of gamification elements and of switch access as an alternative to touchscreen access, with other feature requests from users still in development. This work offers a case study in successful and inclusive co-design with the disabled community.
Keywords: co-design, assistive technology, dysarthria, inclusive speech recognition
Procedia PDF Downloads 110
4117 Student's Reluctance in Oral Participation
Authors: Soumia Hebbri
Abstract:
The English language has become a major medium of communication across borders. Nowadays, it is seen as a communicative medium not only for business but also for academic purposes. Some scholars describe English as a language that enjoys an admired position in many countries. It is neither a national nor an official language in North Africa; it is, however, the most widely taught foreign language in the educational system. In order to achieve mastery of a foreign language, learners must develop the four principal language skills: reading, writing, listening, and speaking. However, being able to interact orally with others, using the target language effectively, is nowadays very important. People who cannot speak a foreign language cannot be considered effective language users, even if they can read and understand it. The teacher's role in promoting foreign language acquisition is very important, as teachers are responsible for providing students with appropriate contexts to foster communicative situations that allow students to express themselves and interact in the target language. We should therefore understand students' reasons for their reluctance to participate orally when dealing with oral communicative tasks, in order to gain insights into the possible motivating factors that may improve their involvement and participation in the classroom.
Keywords: EL, EFL, ET, TEFL, communication
Procedia PDF Downloads 503
4116 Valorization Cascade Approach of Fish By-Products towards a Zero-Waste Future: A Review
Authors: Joana Carvalho, Margarida Soares, André Ribeiro, Lucas Nascimento, Nádia Valério, Zlatina Genisheva
Abstract:
Following the exponential growth of the human population, a remarkable increase in the amount of fish waste has been produced worldwide. The fish processing industry generates a considerable amount of by-products, which represents a considerable environmental problem. Accordingly, the reuse and valorisation of these by-products is a key process for marine resource preservation. The significant volume of fish waste produced worldwide, along with its environmental impact, underscores the urgent need for the adoption of sustainable practices. The transformative potential of utilizing fish processing waste to create industrial value is gaining recognition. The substantial amounts of waste generated by the fish processing industry present both environmental challenges and economic inefficiencies. Different added-value products can be recovered by valorisation industries, while fishing companies can save the costs associated with the management of those wastes, with advantages not only in terms of economic income but also in terms of environmental impact. Fish processing by-products have numerous applications; the target portfolio of products includes fish oil, fish protein hydrolysates, bacteriocins, pigments, vitamins, collagen, and calcium-rich powder, targeting food products, additives, supplements, and nutraceuticals. This literature review focuses on the main valorisation routes for fish wastes and on the different compounds with high commercial value obtained from fish by-products and their possible applications in different fields. Highlighting their potential in sustainable resource management strategies can play an important role in reshaping the fish processing industry, driving it towards a circular economy and, consequently, a more sustainable future.
Keywords: fish process industry, fish wastes, by-products, circular economy, sustainability
Procedia PDF Downloads 17
4115 Monocular 3D Person Tracking AIA Demographic Classification and Projective Image Processing
Authors: McClain Thiel
Abstract:
Object detection and localization has historically required two or more sensors due to the loss of information in the projection from 3D to 2D space; however, most surveillance systems currently in use in the real world have only one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or, of more recent relevance, contact tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object-detection convolutional nets, facial landmark detection, and projective geometry. The approach involves classifying the target into a demographic category, making assumptions about the relative locations of facial landmarks from the demographic information, and from there using simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although limited, suggests reasonable success in 3D tracking under ideal conditions.
Keywords: monocular distancing, computer vision, facial analysis, 3D localization
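A minimal sketch of the projective-geometry step described above: once a demographic class suggests an expected real-world interpupillary distance, depth can be recovered from its apparent pixel size with the pinhole relation Z = f * W / w. The focal length and the demographic averages below are assumed values for illustration, not constants from the paper.

```python
# Sketch: monocular depth from an assumed real-world landmark separation (pinhole model).
ASSUMED_IPD_M = {"adult_male": 0.064, "adult_female": 0.062, "child": 0.051}  # assumed averages

def estimate_depth_m(pixel_ipd: float, focal_length_px: float, demographic: str) -> float:
    """Distance from the camera (metres) given the interpupillary distance in pixels."""
    return focal_length_px * ASSUMED_IPD_M[demographic] / pixel_ipd

# Example: eye landmarks 40 px apart seen by a camera with a 900 px focal length.
print(round(estimate_depth_m(pixel_ipd=40.0, focal_length_px=900.0, demographic="adult_male"), 2), "m")
```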
Procedia PDF Downloads 139
4114 Detection and Classification of Myocardial Infarction Using New Extracted Features from Standard 12-Lead ECG Signals
Authors: Naser Safdarian, Nader Jafarnia Dabanloo
Abstract:
In this paper, we used four features, i.e., the Q-wave integral, QRS complex integral, T-wave integral, and total integral, extracted from normal and patient ECG signals, to detect and localize myocardial infarction (MI) in the left ventricle of the heart. Our research focused on the detection and localization of MI in the standard ECG. We use the Q-wave integral and T-wave integral because these features are important indicators for the detection of MI. We used pattern recognition methods such as Artificial Neural Networks (ANN) to detect and localize MI, because these methods have good accuracy for the classification of normal and abnormal signals. We used one type of Radial Basis Function (RBF) network called the Probabilistic Neural Network (PNN) because of its nonlinearity, together with other classifiers such as k-Nearest Neighbors (KNN), the Multilayer Perceptron (MLP), and Naive Bayes classification. We used the PhysioNet database as our training and test data. We reached over 80% accuracy on test data for localization and over 95% for detection of MI. The main advantages of our method are its simplicity and good accuracy. Classification accuracy can also be improved by adding more features to this method. A simple method based on using only four features extracted from the standard ECG is presented, with good accuracy in MI localization.
Keywords: ECG signal processing, myocardial infarction, features extraction, pattern recognition
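A minimal sketch of the feature extraction and classification described above: the four integral features are computed per beat with the trapezoidal rule and fed to one of the mentioned classifiers (KNN here). The sampling rate, waveform boundary indices, and dummy beats are assumptions; in practice the boundaries come from ECG delineation of PhysioNet records.

```python
# Sketch: integral features from an ECG beat, then KNN classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 500  # assumed sampling frequency (Hz)

def beat_features(beat: np.ndarray, q=(90, 110), qrs=(90, 150), t=(200, 300)) -> np.ndarray:
    """Return [Q integral, QRS integral, T integral, total integral] for one beat."""
    integral = lambda a, b: np.trapz(beat[a:b], dx=1.0 / FS)
    return np.array([integral(*q), integral(*qrs), integral(*t), np.trapz(beat, dx=1.0 / FS)])

# Dummy beats standing in for PhysioNet records (label 1 = MI, 0 = normal).
rng = np.random.default_rng(0)
X = np.array([beat_features(rng.normal(size=400)) for _ in range(60)])
y = rng.integers(0, 2, size=60)

clf = KNeighborsClassifier(n_neighbors=5).fit(X[:40], y[:40])
print("toy test accuracy:", round(clf.score(X[40:], y[40:]), 2))
```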
Procedia PDF Downloads 456
4113 The Recognition of Exclusive Choice of Court Agreements: United Arab Emirates Perspective and the 2005 Hague Convention on Choice of Court Agreements
Authors: Hasan Alrashid
Abstract:
The 2005 Hague Convention seeks to ensure legal certainty and predictability between parties in international business transactions. It harmonizes exclusive choice of court agreements at the international level between parties to commercial transactions and governs the recognition and enforcement of judgments resulting from proceedings based on such agreements, in order to promote international trade and investment. Although choice of court agreements are significant in international business transactions, the United Arab Emirates refuses to recognise them under Article 24 of Federal Law No. 11 of 1992 of the Civil Procedure Code. A review of judicial judgments in the United Arab Emirates up to the present day has revealed several cases before the courts of different emirates of the United Arab Emirates regarding the recognition of exclusive choice of court agreements. In all of these cases, the courts regarded exclusive choice of court agreements as a direct assault on state authority and sovereignty and refused categorically to recognize them, declining to stay proceedings in favor of the foreign chosen court. This has created uncertainty and unpredictability in international business transactions in the United Arab Emirates. In June 2011, the first Gulf Judicial Seminar on Cross-Frontier Legal Cooperation in Civil and Commercial Matters was held in Doha, Qatar. The Permanent Bureau of the Hague Conference attended the conference and invited the states of the Gulf Cooperation Council (GCC), namely the United Arab Emirates, Bahrain, Saudi Arabia, Oman, Qatar and Kuwait, to adopt some of the Hague Conventions, one of which was the Hague Convention on Choice of Court Agreements. One of the recommendations of the conference was that the GCC states should research 'the benefits of predictability and legal certainty provided by the 2005 Convention on Choice of Court Agreements and its resulting advantages for cross-border trade and investment' with a view to possible adoption of the Hague Convention. To date, no further step has been taken by any of the GCC states to adopt the Hague Convention, nor have they conducted research on the benefits of predictability and legal certainty in international business transactions. This paper will argue that the approach to the recognition of choice of court agreements in the United Arab Emirates can be improved in order to help parties in international business transactions avoid parallel litigation and to ensure legal certainty and predictability. The focus will be the uncertainty and gaps regarding choice of court agreements in the United Arab Emirates. The Hague Convention on Choice of Court Agreements and the importance of harmonisation of the rules on choice of court agreements at the international level will also be discussed. Finally, the feasibility and desirability of recognizing choice of court agreements in the United Arab Emirates legal system by becoming a party to the Hague Convention will be evaluated.
Keywords: choice of court agreements, party autonomy, public authority, sovereignty
Procedia PDF Downloads 246
4112 Novel Marketing Strategy To Increase Sales Revenue For SMEs Through Social Media
Authors: Kruti Dave
Abstract:
Social media marketing is an essential component of 21st-century business. Social media platforms enable small and medium-sized businesses to enhance brand recognition and generate leads and sales. However, research on social media marketing is still fragmented and focuses on specific topics, such as effective communication techniques. Given the various ways in which social media impacts individuals and companies alike, the authors of this article focus on the origin, impacts, and current state of social media, emphasizing its significance as an agent of customer empowerment. The article illustrates its potential and current role as part of corporate business strategy and also suggests several methods to engage it as a marketing tool. The focus of social media marketing ranges from defenders to explorers, its culture encompasses the poles of conservatism and modernity, social media marketing frameworks lie between hierarchies and networks, and its management goes from autocracy to anarchy. This research proposes an integrative framework for small and medium-sized businesses to use social media, and the influence of this framework will be measured. This strategy will help industry experts to understand this new era. We propose an axiom: social media is always a function of marketing as a revenue generator.
Keywords: social media, marketing strategy, media marketing, brand awareness, customer engagement, revenue generator, brand recognition
Procedia PDF Downloads 197