Search results for: classifiers accuracy
955 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures
Authors: Milad Abbasi
Abstract:
Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly utilized in the automotive industry due to their light weight and specific energy absorption capabilities. Because composite mechanical properties cannot be predicted directly using theoretical methods, a variety of studies have been conducted in the literature on the accurate simulation of the energy-absorbing behavior of CFRP structures. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing high-cost experimental efforts. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Notably, the proposed method considerably reduced the numerical and experimental effort needed to simulate and calibrate CFRP composite tubes subjected to compressive loading.
Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network
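A minimal sketch of the kind of neural-network surrogate described above, using scikit-learn; the input features, synthetic training data, and network size are illustrative assumptions, not the authors' setup:

```python
# Neural-network surrogate for CFRP tube crush response (sketch).
# Feature names and data are assumptions standing in for the measured inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# X: [wall thickness (mm), tube diameter (mm), fiber volume fraction]
X = rng.uniform([1.0, 40.0, 0.40], [4.0, 80.0, 0.65], size=(200, 3))
# y: [peak load (kN), specific energy absorption (kJ/kg)] -- synthetic here
y = np.column_stack([30 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 2, 200),
                     20 * X[:, 2] + 5 * X[:, 0] + rng.normal(0, 1, 200)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out tubes:", model.score(X_te, y_te))
```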
Procedia PDF Downloads 154
954 Body-Worn Camera Use in the Emergency Department: Patient and Provider Satisfaction
Authors: Jeffrey Ho, Scott Joing, Paul Nystrom, William Heegaard, Danielle Hart, David Plummer, James Miner
Abstract:
Body-Worn Cameras (BWCs) are used in public safety to record encounters. They are shown to enhance the accuracy of documentation in virtually every situation. They are not widely used in medical encounters, in part because of concern for patient acceptance. The goal of this pilot study was to determine if BWC use is acceptable to the patient. This was a prospective, observational study of the AXON Flex BWC (TASER International, Scottsdale, AZ) conducted at an urban, Level 1 Trauma Center Emergency Department (ED). The BWC was worn by Emergency Physicians (EPs) on their shifts during a 30-day period. The BWC was worn at eye level, mounted on a pair of clear safety glasses. Patients seen by the EP were enrolled in the study by a trained research associate. Patients who were <18 years old, were with other people in the exam room, did not speak English, were critically ill, had chief complaints involving genitalia or sexual assault, were considered to be vulnerable adults, or had an altered mental status were excluded. Consented patients were given a survey after the encounter to determine their perception of the BWC. The questions asked involved the patients' perceptions of a BWC being present during their interaction with their EP. Data were analyzed with descriptive statistics. There were 417 patients enrolled in the study. 3/417 (0.7%) patients were intimidated by the BWC, 1/417 (0.2%) was nervous because of the BWC, 0/417 (0%) were inhibited from telling the EP certain things because of the BWC, 57/417 (13.7%) patients did not notice the device, and 305/417 (73.1%) patients had a favorable perception of the BWC being used during their encounter. The use of BWCs appears feasible in the ED, with largely favorable perceptions and acceptance of the device by the patients. Further study is needed to determine the best use and practices of BWCs during ED patient encounters.
Keywords: body-worn camera, documentation, patient satisfaction, video
Procedia PDF Downloads 377
953 Modified Clusterwise Regression for Pavement Management
Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella
Abstract:
Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. Prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters by including both characteristics and a performance measure. This grouping is not always possible due to limited information. It is impractical to include all the potential significant factors because some of them are potentially unobserved or difficult to measure. Historical performance of pavement segments can be used as a proxy to incorporate the effect of the missing significant factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously. CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. Results illustrate the advantage of using CLR. Previous studies have used CLR along with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
Keywords: clusterwise regression, pavement management system, performance model, optimization
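To make the clusterwise idea concrete, here is a toy alternating-assignment version of clusterwise linear regression; the paper solves a mathematical program, so this heuristic is only a sketch of the concept, with synthetic data standing in for the Nevada pavement records:

```python
# Toy clusterwise linear regression: alternately assign each pavement segment
# to the cluster whose regression model fits it best, then refit the models.
# A simplified heuristic, not the mathematical program formulated in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

def clusterwise_regression(X, y, k=3, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(y))
    models = [LinearRegression() for _ in range(k)]
    for _ in range(n_iter):
        for j in range(k):
            mask = labels == j
            if mask.sum() > X.shape[1]:   # enough segments to fit this cluster
                models[j].fit(X[mask], y[mask])
            else:                          # degenerate cluster: fall back to all data
                models[j].fit(X, y)
        # squared residual of every segment under every cluster's model
        resid = np.column_stack([(y - m.predict(X)) ** 2 for m in models])
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, models

# X could hold traffic, climate, and structural variables; y the IRI.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))
true = (X[:, 0] > 0.5).astype(int)
y = np.where(true == 0, 3 * X[:, 1] + 1, -2 * X[:, 1] + 4) + rng.normal(0, 0.1, 300)
labels, models = clusterwise_regression(X, y, k=2)
```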
Procedia PDF Downloads 252
952 Lung HRCT Pattern Classification for Cystic Fibrosis Using a Convolutional Neural Network
Authors: Parisa Mansour
Abstract:
Cystic fibrosis (CF) is one of the most common autosomal recessive diseases among whites. It mostly affects the lungs, causing infections and inflammation that account for 90% of deaths in CF patients. Because of the high variability in clinical presentation and organ involvement, investigating treatment responses and evaluating lung changes over time is critical to preventing CF progression. High-resolution computed tomography (HRCT) greatly facilitates the assessment of lung disease progression in CF patients. Recently, artificial intelligence has been used to analyze chest CT scans of CF patients. In this paper, we propose a convolutional neural network (CNN) approach to classify CF lung patterns in HRCT images. The proposed network consists of two convolutional layers with 3 × 3 kernels, each followed by max pooling, and then two dense layers with 1024 and 10 neurons, respectively. A softmax layer produces the predicted output probability distribution over classes; it has three outputs corresponding to the categories normal (healthy), bronchitis, and inflammation. To train and evaluate the network, we constructed a patch-based dataset extracted from more than 1100 lung HRCT slices obtained from 45 CF patients. Comparative evaluation showed the effectiveness of the proposed CNN compared to its close peers. Classification accuracy, average sensitivity, and specificity of 93.64%, 93.47%, and 96.61% were achieved, indicating the potential of CNNs in analyzing lung CF patterns and monitoring lung health. In addition, the visual features extracted by our proposed method can be useful for automatic measurement and, ultimately, evaluation of the severity of CF patterns in lung HRCT images.
Keywords: HRCT, CF, cystic fibrosis, chest CT, artificial intelligence
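The abstract specifies the architecture closely enough to sketch it in Keras; the 32 × 32 grayscale patch size and the filter counts are assumptions, while the layer sequence follows the description above:

```python
# Keras sketch of the described CNN: two 3x3 conv layers with max pooling,
# dense layers of 1024 and 10 units, and a 3-class softmax output.
# Patch size (32x32x1) and filter counts (32, 64) are assumed, not stated.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),            # grayscale HRCT patch
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(3, activation="softmax"),      # normal / bronchitis / inflammation
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```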
Procedia PDF Downloads 67
951 Development of a Telemedical Network Supporting an Automated Flow Cytometric Analysis for the Clinical Follow-up of Leukaemia
Authors: Claude Takenga, Rolf-Dietrich Berndt, Erling Si, Markus Diem, Guohui Qiao, Melanie Gau, Michael Brandstoetter, Martin Kampel, Michael Dworzak
Abstract:
In patients with acute lymphoblastic leukaemia (ALL), treatment response is increasingly evaluated with minimal residual disease (MRD) analyses. Flow Cytometry (FCM) is a fast and sensitive method to detect MRD. However, the interpretation of these multi-parametric data requires intensive operator training and experience. This paper presents a pipeline-software, as a ready-to-use FCM-based MRD-assessment tool for daily clinical practice for patients with ALL. The new tool increases accuracy in the assessment of FCM-MRD in samples which are difficult to analyse by conventional operator-based gating, since computer-aided analysis potentially has a superior resolution due to utilization of the whole multi-parametric FCM data space at once instead of step-wise, two-dimensional plot-based visualization. The system, developed as a telemedical network, reduces the workload and laboratory costs, as well as the staff time needed for training, continuous quality control, and operator-based data interpretation. It allows dissemination of automated FCM-MRD analysis to medical centres which have no established expertise, for the benefit of an even larger community of diseased children worldwide. We established a telemedical network system for analysis, clinical follow-up, and treatment monitoring of leukaemia. The system is scalable and adapted to link several centres and laboratories worldwide.
Keywords: data security, flow cytometry, leukaemia, telematics platform, telemedicine
Procedia PDF Downloads 987
950 Research on Level Adjusting Mechanism System of Large Space Environment Simulator
Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng
Abstract:
Space environment simulators are devices for spacecraft testing. The KM8 large space environment simulator built in Tianjin Space City is the largest and most advanced space environment simulator in China. A large deviation in spacecraft level will lead to abnormal operation of the thermal control devices in the spacecraft during the thermal vacuum test. In order to avoid thermal vacuum test failure, a level adjusting mechanism system was developed in the KM8 large space environment simulator as one of its most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model is established. By collecting data from level instruments and displacement sensors, PLC-controlled stepping motors drive the four supporting legs simultaneously. In addition, a PID algorithm is used to control the temperature of the supporting legs and level instruments, which operate for long periods in the cold, dark vacuum environment of the KM8 large space environment simulator during thermal vacuum tests. Based on the above methods, data acquisition and processing, analysis and calculation, real-time adjustment, and fault alarming of the level adjusting mechanism system are implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Commissioning showed that the level adjusting mechanism system of the KM8 large space environment simulator can meet the thermal vacuum test requirements of the new generation of spacecraft. The performance and technical indicators of the system, which provides important support for spacecraft development in China, are ahead of similar equipment elsewhere in the world.
Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism
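A minimal discrete PID loop of the kind described for holding the supporting legs and level instruments at temperature; the gains, setpoint, and plant model below are illustrative, not the KM8 values:

```python
# Discrete PID temperature control (sketch). Gains and the crude plant model
# are assumptions for illustration only.
class PID:
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. hold a supporting leg at 20 C with a 1 s control period
pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=20.0, dt=1.0)
temperature = 5.0                       # cold vacuum environment
for _ in range(10):
    heater_power = pid.update(temperature)
    temperature += 0.05 * heater_power  # toy first-order plant response
```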
Procedia PDF Downloads 248
949 Investigating Teachers’ Approaches in Teaching English and Students’ Communicative Ability in a Tertiary College
Authors: Adel Ben Mohamed
Abstract:
The widespread use of the English language around the world has pushed many countries to treat it as a top priority in their educational systems. One of these countries is the Sultanate of Oman. In this frame, the Omani government has allocated huge budgets and resources to implement the English language in its education system. The importance of English is prevalent in Oman and is clearly noticeable. For instance, most official documents in Oman are in both Arabic (the mother tongue) and English. In addition, English language institutes have proliferated all over the country: as of 2020, there were over fourteen English language institutes and centers in Oman (ESL Base, 2020). Moreover, many Omani parents now send their children to tuition to learn the English language. Hence, it is apparent that the Sultanate of Oman places great value on English in attaining various goals. However, in the world of work, what matters more today is fluency rather than accuracy; therefore, many people opt for communicative English rather than technical English. For example, a job advertisement for a sales assistant published in the Oman Daily Observer on 23 November 2020 stated that speaking English very well is a must to be hired for the position (Oman Observer, 2020). In line with this, and because of the great importance of the English language in Oman, the Ministry of Higher Education has placed much emphasis on this official foreign language. Therefore, in the Omani educational system, all post-secondary students must spend one year in a higher education institution on a General Foundation Programme (GFP) prior to moving to their respective majors at diploma level. Accordingly, the implementation of any teaching approach is determined by different factors: some are directly linked to teachers, while others relate to organizational variables.
Keywords: teaching approaches, communicative, ability, investigating
Procedia PDF Downloads 93
948 The Development of Digital Commerce in Community Enterprise Products to Promote the Distribution of Samut Songkhram Province
Authors: Natcha Wattanaprapa, Alongkorn Taengtong, Phachaya Chaiwchan
Abstract:
This study investigates and promotes the distribution of community enterprise products of Samut Songkhram province by using e-commerce web technology to help distribute the products. The study also aims to develop an information system able to operate on multiple platforms and to promote easy usability on smartphones, in order to increase efficiency and promote the distribution of community enterprise products of Samut Songkhram province in three areas: the Baan Saraphi learning center, the Bang Noi Floating Market learning center, and the Bang Nang Li learning center. The main structure consists of spreading knowledge about tourist attractions in the community enterprise area, an e-commerce system for community enterprise products, and a chatbot. The researcher developed the system into an application using a software package to create and manage content on the internet. The content management system (CMS) WordPress was used for managing web pages, a WordPress add-on was used to build the chatbot system, and phpMyAdmin was used as the database management tool. Evaluation by experts and users in five aspects (system efficiency, accuracy of system operation, convenience and ease of use, design, and promotion of product distribution in Samut Songkhram province) using questionnaires revealed that promotion of product distribution in Samut Songkhram province scored highest, with a mean of 4.20. When evaluating the efficiency of the developed system, system efficiency was also rated at the highest level, with a mean of 4.10.
Keywords: community enterprise, digital commerce, promotion of product distribution, Samut Songkhram province
Procedia PDF Downloads 149
947 Development of an Implicit Physical Influence Upwind Scheme for Cell-Centered Finite Volume Method
Authors: Shidvash Vakilipour, Masoud Mohammadi, Rouzbeh Riazi, Scott Ormiston, Kimia Amiri, Sahar Barati
Abstract:
An essential component of a finite volume method (FVM) is the advection scheme that estimates values on the cell faces based on the calculated values on the nodes or cell centers. The most widely used advection schemes are upwind schemes. These schemes have been developed in FVM on different kinds of structured and unstructured grids. In this research, the physical influence scheme (PIS) is developed for a cell-centered FVM that uses an implicit coupled solver. Results are compared with the exponential differencing scheme (EDS) and the skew upwind differencing scheme (SUDS). The accuracy of these schemes is evaluated for a lid-driven cavity flow at Re = 1000, 3200, and 5000 and a backward-facing step flow at Re = 800. Simulations show considerable differences between the results of the EDS scheme and the benchmarks, especially for the lid-driven cavity flow at high Reynolds numbers. These differences occur due to false diffusion. Comparing the SUDS and PIS schemes shows relatively close results for the backward-facing step flow and different results for the lid-driven cavity flow. The poor results of SUDS in the lid-driven cavity flow can be related to its lack of sensitivity to the pressure difference between the cell face and upwind points, which is critical for the prediction of such vortex-dominant flows.
Keywords: cell-centered finite volume method, coupled solver, exponential differencing scheme (EDS), physical influence scheme (PIS), pressure weighted interpolation method (PWIM), skew upwind differencing scheme (SUDS)
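For readers unfamiliar with upwinding, the one-dimensional first-order scheme below illustrates the basic face-value idea and the false diffusion mentioned above; the paper's PIS, SUDS, and EDS are far more elaborate multidimensional schemes, so this is only a conceptual sketch:

```python
# First-order upwind advection in 1D: each face value is taken from the
# upwind cell. The transported step profile smears out -- the numerical
# (false) diffusion discussed above, which higher-order schemes reduce.
import numpy as np

nx, L, u, cfl = 200, 1.0, 1.0, 0.5
dx = L / nx
dt = cfl * dx / abs(u)
phi = np.where(np.linspace(0, L, nx) < 0.2, 1.0, 0.0)  # step profile

for _ in range(100):
    flux = u * phi
    if u > 0:    # upwind cell is on the left of each face
        phi[1:] -= dt / dx * (flux[1:] - flux[:-1])
    else:        # upwind cell is on the right
        phi[:-1] -= dt / dx * (flux[1:] - flux[:-1])
```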
Procedia PDF Downloads 284
946 Data-Driven Simulation Tools for DER and Battery-Rich Power Grids
Authors: Ali Moradiamani, Samaneh Sadat Sajjadi, Mahdi Jalili
Abstract:
Power system analysis has been a major research topic in the generation and distribution sectors, in both industry and academia, for a long time. Several load flow and fault analysis scenarios are normally performed to study the performance of different parts of the grid in the context of, for example, voltage and frequency control. Software tools, such as PSCAD, PSSE, and PowerFactory DIgSILENT, have been developed to perform these analyses accurately. The distribution grid has traditionally been the passive part of the grid, known as the grid of consumers. However, a significant paradigm shift has happened with the emergence of Distributed Energy Resources (DERs) at the distribution level. This means that the concept of power system analysis needs to be extended to the distribution grid, especially considering self-sufficient technologies such as microgrids. Compared to the generation and transmission levels, the distribution level includes significantly more generation/consumption nodes thanks to PV rooftop solar generation and battery energy storage systems. In addition, different consumption profiles are expected from household residents, resulting in a diverse set of scenarios. The emergence of electric vehicles will further complicate the environment, considering their charging (and possibly discharging) requirements. These complexities, as well as the large size of distribution grids, create challenges for the available power system analysis software. In this paper, we study the requirements of simulation tools in the distribution grid and how data-driven algorithms are required to increase the accuracy of the simulation results.
Keywords: smart grids, distributed energy resources, electric vehicles, battery storage systems, simulation tools
Procedia PDF Downloads 105
945 Experimental Monitoring of the Parameters of the Ionosphere in the Local Area Using the Results of Multifrequency GNSS-Measurements
Authors: Andrey Kupriyanov
Abstract:
In recent years, much attention has been paid to the problems of ionospheric disturbances and their influence on the signals of global navigation satellite systems (GNSS) around the world. This is due to the increase in solar activity, the expansion of the scope of GNSS, the emergence of new satellite systems, the introduction of new frequencies, and many other factors. The influence of the Earth's ionosphere on the propagation of radio signals is an important factor in many applied fields of science and technology. The paper considers the application of the method of transionospheric sounding, using measurements of signals from Global Navigation Satellite Systems, to determine the TEC distribution and the scintillations of the ionospheric layers. To calculate these parameters, the International Reference Ionosphere (IRI) model, refined for the local area, is used. The organization of operational monitoring of ionospheric parameters is analyzed using several NovAtel GPStation6 base stations. The setup performs primary processing of GNSS measurement data, calculates TEC and detects scintillation events, models the ionosphere using the obtained data, stores the data, and applies ionospheric corrections to measurements. As a result of the study, it was shown that the transionospheric sounding method can reconstruct the altitude distribution of electron concentration over different altitude ranges and provide operational information about the ionosphere, which is necessary for solving a number of practical problems in many applications. Furthermore, the use of multi-frequency, multi-system GNSS equipment and special software makes it possible to achieve the specified accuracy and volume of measurements.
Keywords: global navigation satellite systems (GNSS), GPStation6, international reference ionosphere (IRI), ionosphere, scintillations, total electron content (TEC)
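The core TEC computation in transionospheric sounding is the standard dual-frequency geometry-free combination, sketched below for GPS L1/L2 pseudoranges; differential code biases are ignored here for brevity, although real processing must estimate and remove them:

```python
# Slant TEC from dual-frequency pseudoranges (geometry-free combination).
# Receiver/satellite differential code biases are omitted in this sketch.
F1, F2 = 1575.42e6, 1227.60e6          # GPS L1/L2 carrier frequencies, Hz
K = 40.3                               # ionospheric constant, m^3/s^2

def slant_tec(p1, p2):
    """p1, p2: pseudoranges on L1/L2 in meters; returns slant TEC in TECU."""
    stec = (F1**2 * F2**2) / (K * (F1**2 - F2**2)) * (p2 - p1)
    return stec / 1e16                 # 1 TECU = 1e16 electrons/m^2

print(slant_tec(22_000_000.0, 22_000_004.5))  # ~4.5 m ionospheric delay difference
```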
Procedia PDF Downloads 181
944 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms
Authors: Bliss Singhal
Abstract:
Machine learning (ML) is a branch of Artificial Intelligence (AI) in which computers analyze data and find patterns in the data. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body; it is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying whether tumors are benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of being aware of human error and other inefficiencies. ML is a good candidate to improve the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, requiring fewer resources and less time. So far, the deep learning methodology of AI has been used in research to detect cancer. This study is a novel approach to determining the potential of combining preprocessing algorithms with classification algorithms in detecting metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and the genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy, 71.14%, was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression
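A sketch of the described pipeline (PCA, then a genetic algorithm selecting among components, then k-NN); the GA operators and the stand-in dataset are assumptions, since the paper's pathology-scan data and exact GA configuration are not reproduced here:

```python
# PCA -> GA component selection -> k-NN (sketch). The breast-cancer dataset
# stands in for the pathology scans; the GA is deliberately minimal.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
Z = PCA(n_components=10).fit_transform(X)
rng = np.random.default_rng(0)

def fitness(mask):
    """CV accuracy of k-NN on the selected PCA components."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, Z[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, Z.shape[1])).astype(bool)
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the fittest half
    children = parents[rng.integers(0, 10, 10)].copy()
    flips = rng.random(children.shape) < 0.1         # bit-flip mutation
    children[flips] = ~children[flips]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected components:", np.flatnonzero(best), "accuracy:", fitness(best))
```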
Procedia PDF Downloads 83
943 Photomicrograph-Based Neuropathology Consultation in Tanzania: The Utility of Static-Image Neurotelepathology in Low- and Middle-Income Countries
Authors: Francis Zerd, Brian E. Moore, Atuganile E. Malango, Patrick W. Hosokawa, Kevin O. Lillehei, Laurence Lemery Mchome, D. Ryan Ormond
Abstract:
Introduction: Since neuropathologic diagnosis in the developing world is hampered by limitations in technical infrastructure, trained laboratory personnel, and subspecialty-trained pathologists, the use of telepathology for diagnostic support, second-opinion consultations, and ongoing training holds promise as a means of addressing these challenges. This research aims to assess the utility of static teleneuropathology in improving neuropathologic diagnoses in low- and middle-income countries. Methods: Consecutive neurosurgical biopsy and resection specimens obtained at Muhimbili National Hospital in Tanzania between July 1, 2018, and June 30, 2019, were selected for retrospective, blinded static-image neuropathologic review, followed by on-site review by an expert neuropathologist. Results: A total of 75 neuropathologic cases were reviewed. The agreement between static-image and on-site glass diagnoses was 71% with strict criteria and 88% with less stringent criteria. This represents an overall improvement in diagnostic accuracy from 36% by general pathologists to 71% by a neuropathologist using static telepathology (or from 76% to 88% with less stringent criteria). Conclusions: Telepathology offers a suitable means of providing diagnostic support, second-opinion consultations, and ongoing training to pathologists practicing in resource-limited countries. Moreover, static digital teleneuropathology is an uncomplicated, cost-effective, and reliable way to achieve these goals.
Keywords: neuropathology, resource-limited settings, static image, Tanzania, teleneuropathology
Procedia PDF Downloads 102
942 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms
Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann
Abstract:
Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders. The connection is critical with respect to the safety of the whole system. Therefore, fault detection systems are investigated which could detect early warning signs of a possible failure in the connection elements. Previously, a model-based method using the Extended Kalman Filter was developed to detect the reduction of rope stiffness. This method detected several types of faults reliably, but some types of faults were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise: when the wave height is low, a long time is needed to detect a fault, and the accuracy is not always satisfactory. In this sense, it is necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Inspired by this work on the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and help users understand the relationship between the output and the contributions of each acceleration sensor.
Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI
Procedia PDF Downloads 182
941 Monitor Student Concentration Levels on Online Education Sessions
Authors: M. K. Wijayarathna, S. M. Buddika Harshanath
Abstract:
Monitoring student engagement has become a crucial part of the educational process and a reliable indicator of a student's capacity to retain information. As online learning classrooms are now common, students' attention levels have become increasingly important, and it is more difficult to check each student's concentration level in an online classroom setting. To profile student attention across various gradients of engagement, a study was planned using machine learning models. Using a convolutional neural network, findings and confidence scores are obtained from a high-accuracy model. In this research, convolutional neural networks are used to help discover essential emotions that are critical in defining various levels of participation. Students' attention levels were shown to be influenced by emotions such as calm, enjoyment, surprise, and fear. An improved virtual learning system was created as a result of these data, which allows teachers to focus their support and advice on those students who need it. Student participation has become a crucial component of the learning technique and a consistent predictor of a student's capacity to retain material in the classroom. The platform is planned to be implemented with convolutional neural networks. As a preliminary step, a video of the student is captured; frames are then extracted from the recordings and processed by a convolutional neural network built with the Keras toolkit. Two convolutional neural network methods are planned for determining the students' attention level. Finally, the predicted student attention level results are displayed on the graphical user interface of the system.
Keywords: HTML5, JavaScript, Python Flask framework, AI, graphical user interface
Procedia PDF Downloads 101
940 Numerical Simulation of Two-Phase Flows Using a Pressure-Based Solver
Authors: Lei Zhang, Jean-Michel Ghidaglia, Anela Kumbaro
Abstract:
This work focuses on numerical simulation of two-phase flows based on the bi-fluid six-equation model widely used in many industrial areas, such as nuclear power plant safety analysis. A pressure-based numerical method is adopted in our studies due to the fact that in two-phase flows, it is common to have a large range of Mach numbers because of the mixture of liquid and gas, and density-based solvers experience stiffness problems as well as a loss of accuracy when approaching the low Mach number limit. This work extends the semi-implicit pressure solver in the nuclear component CUPID code, where the governing equations are solved on unstructured grids with co-located variables to accommodate complicated geometries. A conservative version of the solver is developed in order to capture exactly the shock in one-phase flows, and is extended to two-phase situations. An interfacial pressure term is added to the bi-fluid model to make the system hyperbolic and to establish a well-posed mathematical problem that will allow us to obtain convergent solutions with refined meshes. The ability of the numerical method to treat phase appearance and disappearance as well as the behavior of the scheme at low Mach numbers will be demonstrated through several numerical results. Finally, interfacial mass and heat transfer models are included to deal with situations when mass and energy transfer between phases is important, and associated industrial numerical benchmarks with tabulated EOS (equations of state) for fluids are performed.
Keywords: two-phase flows, numerical simulation, bi-fluid model, unstructured grids, phase appearance and disappearance
Procedia PDF Downloads 394
939 Fight against Money Laundering with Optical Character Recognition
Authors: Saikiran Subbagari, Avinash Malladhi
Abstract:
Anti Money Laundering (AML) regulations are designed to prevent money laundering and terrorist financing activities worldwide. Financial institutions around the world are legally obligated to identify, assess, and mitigate the risks associated with money laundering and to report any suspicious transactions to governing authorities. With increasing volumes of data to analyze, financial institutions seek to automate their AML processes. With the rise of financial crimes, optical character recognition (OCR), in combination with machine learning (ML) algorithms, serves as a crucial tool for automating AML processes by extracting data from documents and identifying suspicious transactions. In this paper, we examine the utilization of OCR for AML and delve into various OCR techniques employed in AML processes. These techniques encompass template-based, feature-based, and neural network-based methods, natural language processing (NLP), hidden Markov models (HMMs), conditional random fields (CRFs), binarization, pattern matching, and the stroke width transform (SWT). We evaluate each technique, discussing its strengths and constraints. We also emphasize how OCR can improve the accuracy of customer identity verification by comparing the extracted text with the Office of Foreign Assets Control (OFAC) watchlist, and we discuss how OCR helps to overcome language barriers in AML compliance. Finally, we address the implementation challenges that OCR-based AML systems may face and offer recommendations for financial institutions based on data from previous research studies, which illustrate the effectiveness of OCR-based AML.
Keywords: anti-money laundering, compliance, financial crimes, fraud detection, machine learning, optical character recognition
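As a rough illustration of the OCR-plus-screening step, the sketch below extracts text with pytesseract (one common OCR choice, assumed here rather than taken from the paper) and fuzzy-matches it against a placeholder watchlist; the real OFAC list and production matching logic are far richer:

```python
# OCR a scanned document and fuzzy-match each line against a watchlist.
# WATCHLIST entries and the file name are placeholders, not real data.
import difflib
import pytesseract
from PIL import Image

WATCHLIST = ["JOHN DOE", "ACME TRADING LLC"]       # placeholder entries only

def screen_document(image_path, threshold=0.85):
    """Return (watchlist entry, matched line, score) tuples above threshold."""
    text = pytesseract.image_to_string(Image.open(image_path)).upper()
    hits = []
    for entry in WATCHLIST:
        for line in text.splitlines():
            score = difflib.SequenceMatcher(None, entry, line.strip()).ratio()
            if score >= threshold:
                hits.append((entry, line.strip(), round(score, 2)))
    return hits

# hits = screen_document("kyc_passport_scan.png")  # flagged pairs go to manual review
```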
Procedia PDF Downloads 146
938 Seed Quality Aspects of Nightshade (Solanum Nigrum) as Influenced by Gibberellins (GA3) on Seed
Authors: Muga Moses
Abstract:
Plant growth regulators are actively involved in the growth and yield of plants. However, limited information is available on the effect of gibberellic acid (GA3) on the growth attributes and yield of African nightshade. This experiment is designed to fill this gap by studying the performance of African nightshade under hormone application. Gibberellic acid is a plant growth hormone that promotes cell expansion and division. Greenhouse and laboratory experiments will be conducted at the University of Sussex biotechnology greenhouse and agriculture laboratory, using a growth chamber, to study the effect of GA3 on the growth and development attributes of African nightshade. The experiment consists of three replications and five treatments, laid out in a randomized complete block design with varying concentrations of GA3: 0 ppm, 50 ppm, 100 ppm, 150 ppm, and 200 ppm. Local farmer seed was grown in plastic pots in the greenhouse, six seeds per pot, then hardened off to leave four plants per pot to maintain germplasm purity, with proper management until the berries matured; the berries were then harvested and squeezed to extract the seeds, which were sun-dried on paper for seven days. In the laboratory, five Whatman filter papers were placed on glass petri dishes and subjected to different concentrations of the stock solution; fifty certified, clean, healthy seeds were counted, arranged on the moist filter paper, and labeled accordingly. The seeds were sprayed with the stock solution twice a day; protrusion of the radicle was counted as germination, and germinated seeds were removed to increase the accuracy of the counts. Data will be collected on the application of GA3 to compare its synergistic effects on the growth, yield, and nutrient content of African nightshade.
Keywords: African nightshade, growth, yield, shoot, gibberellins
Procedia PDF Downloads 91
937 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents a validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incident from the North and Northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow
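The three acceptance metrics named above have standard definitions, computed here from paired observed (Co) and predicted (Cp) concentrations; the acceptance ranges quoted in the comment are the commonly cited ones, not thresholds from this paper:

```python
# Standard dispersion-model validation metrics: FB, MG, NMSE.
import numpy as np

def validation_stats(co, cp, eps=1e-12):
    """FB, MG, NMSE for paired observed (co) and predicted (cp) concentrations."""
    co, cp = np.asarray(co, float), np.asarray(cp, float)
    fb = 2.0 * (co.mean() - cp.mean()) / (co.mean() + cp.mean())
    mg = np.exp(np.log(co + eps).mean() - np.log(cp + eps).mean())
    nmse = ((co - cp) ** 2).mean() / (co.mean() * cp.mean())
    return fb, mg, nmse

# Commonly cited acceptance ranges (not from this paper):
# |FB| < 0.3, 0.7 < MG < 1.3, NMSE < 1.5
fb, mg, nmse = validation_stats([1.0, 2.0, 3.0], [1.2, 1.8, 2.9])
```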
Procedia PDF Downloads 137
936 A Facile Nanocomposite of Graphene Oxide Reinforced Chitosan/Poly-Nitroaniline Polymer as a Highly Efficient Adsorbent for Extracting Polycyclic Aromatic Hydrocarbons from Tea Samples
Authors: Adel M. Al-Shutairi, Ahmed H. Al-Zahrani
Abstract:
Tea is a popular beverage drunk by millions of people throughout the globe. Tea has considerable health advantages, including antioxidant, antibacterial, antiviral, chemopreventive, and anticarcinogenic properties. As a result of environmental pollution (atmospheric deposition) and the production process, tea leaves may also include a variety of dangerous substances, such as polycyclic aromatic hydrocarbons (PAHs). In this study, a graphene oxide reinforced chitosan/poly-nitroaniline polymer was prepared to develop a sensitive and reliable solid-phase extraction (SPE) method for the extraction of PAH7 from tea samples, followed by high-performance liquid chromatography with fluorescence detection. The prepared adsorbent was validated in terms of linearity, limit of detection, limit of quantification, recovery (%), accuracy (%), and precision (%) for the determination of the PAH7 (benzo[a]pyrene, benzo[a]anthracene, benzo[b]fluoranthene, chrysene, benzo[k]fluoranthene, dibenzo[a,h]anthracene, and benzo[g,h,i]perylene) in tea samples. The concentration was determined in two types of tea commercially available in Saudi Arabia, black tea and green tea. The maximum mean of Σ7PAHs in black tea samples was 68.23 ± 0.02 ug kg-1 and 26.68 ± 0.01 ug kg-1 in green tea samples. The minimum mean of Σ7PAHs in black tea samples was 37.93 ± 0.01 ug kg-1 and 15.26 ± 0.01 ug kg-1 in green tea samples. The mean value of benzo[a]pyrene in black tea samples ranged from 6.85 to 12.17 ug kg-1, and two samples exceeded the standard level (10 ug kg-1) established by the European Union (EU), while in green tea it ranged from 1.78 to 2.81 ug kg-1. Low levels of Σ7PAHs were detected in green tea samples in comparison with black tea samples.
Keywords: polycyclic aromatic hydrocarbons, CS, PNA and GO, black/green tea, solid phase extraction, Saudi Arabia
Procedia PDF Downloads 96
935 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, K-nearest neighbour, and other clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
Keywords: data mining, classification algorithm, naïve Bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review
Procedia PDF Downloads 68
934 A Vehicle Detection and Speed Measurement Algorithm Based on Magnetic Sensors
Authors: Panagiotis Gkekas, Christos Sougles, Dionysios Kehagias, Dimitrios Tzovaras
Abstract:
Cooperative intelligent transport systems (C-ITS) can greatly improve safety and efficiency in road transport by enabling communication not only between vehicles themselves but also between vehicles and infrastructure. For that reason, traffic surveillance systems on the road are of great importance. This paper focuses on the development of an on-road unit comprising several magnetic sensors for real-time vehicle detection, movement direction, and speed measurement calculations. Magnetic sensors can sense and measure changes in the Earth's magnetic field. Vehicles are composed of many parts with ferromagnetic properties. Depending on the sensors' sensitivity, changes in the Earth's magnetic field caused by passing vehicles can be detected and analyzed in order to extract information on the properties of moving vehicles. In this paper, we present a prototype algorithm for real-time, high-accuracy vehicle detection and speed measurement, which can be implemented as a portable, low-cost solution that is non-invasive to existing infrastructure, with the potential to replace existing high-cost implementations. The paper describes the algorithm and presents results from its preliminary lab testing in a close-to-real-condition environment. Acknowledgments: Work presented in this paper was co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation (call RESEARCH–CREATE–INNOVATE) under contract no. Τ1EDK-03081 (project ODOS2020).
Keywords: magnetic sensors, vehicle detection, speed measurement, traffic surveillance system
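One standard way to obtain speed from two magnetometers a known distance apart is to cross-correlate their signatures and read off the time lag; the sketch below uses synthetic signals and is an assumption about the approach, since the paper's algorithm is not spelled out in the abstract:

```python
# Vehicle speed from two spaced magnetometers: the lag maximizing the
# cross-correlation of the two signatures gives the travel time between
# sensors. Signals, spacing, and noise levels here are synthetic.
import numpy as np

fs = 1000.0            # sampling rate, Hz
spacing = 1.5          # sensor spacing, m
t = np.arange(0, 2, 1 / fs)
signature = np.exp(-((t - 0.8) / 0.05) ** 2)        # magnetic disturbance pulse
lag_true = int(0.06 * fs)                           # 60 ms delay -> 25 m/s
s1 = signature + 0.02 * np.random.default_rng(0).normal(size=t.size)
s2 = np.roll(signature, lag_true) + 0.02 * np.random.default_rng(1).normal(size=t.size)

xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lag = xcorr.argmax() - (t.size - 1)                 # delay in samples
speed = spacing / (lag / fs)                        # m/s
print(f"estimated speed: {speed:.1f} m/s")
```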
Procedia PDF Downloads 123
933 Application of Electro-Optical Hybrid Cables in Horizontal Well Production Logging
Authors: Daofan Guo, Dong Yang
Abstract:
For decades, well logging with coiled tubing has relied solely on surface data such as pump pressure, wellhead pressure, depth counter, and weight indicator readings. While this data serves the oil industry well, modern smart logging utilizes real-time downhole information, which automatically increases operational efficiency and optimizes intervention quality. For example, downhole pressure, temperature, and depth measurement data can be transmitted through the electro-optical hybrid cable in the coiled tubing to surface operators in real time. This paper mainly introduces the unique structural features and various applications of electro-optical hybrid cables, which are deployed downhole with the help of coiled tubing technology. Fiber optic elements in the cable enable optical communications and distributed measurements, such as distributed temperature and acoustic sensing. The electrical elements provide continuous surface power for downhole tools, eliminating the limitations of traditional batteries, such as temperature, operating time, and safety concerns. The electrical elements also enable cable telemetry operation of cable tools. Both power supply and signal transmission are integrated into one electro-optical hybrid cable; downhole information is captured by downhole electrical sensors and distributed optical sensing technologies and then travels up through an optical fiber to the surface, which greatly improves the accuracy of measurement data transmission.
Keywords: electro-optical hybrid cable, underground photoelectric composite cable, seismic cable, coiled tubing, real-time monitoring
Procedia PDF Downloads 147
932 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search
Authors: Wenbo Wang, Yi-Fang Brook Wu
Abstract:
The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means of addressing this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, the state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; and 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance, and 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include 1) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and 2) exploring semantic relations in claims and references to further enhance fact-checking.
Keywords: fact checking, claim verification, deep learning, natural language processing
Procedia PDF Downloads 62
931 StockTwits Sentiment Analysis on Stock Price Prediction
Authors: Min Chen, Rubi Gupta
Abstract:
Understanding and predicting stock market movements is a challenging problem. It is believed stock markets are partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using public sentiment expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular among users sharing discussions and sentiment about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bag-of-words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify the tweets' sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time-series stock data to predict stock price movement. Experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a duration of nine months demonstrate the effectiveness of our study in improving prediction accuracy.
Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing
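A condensed sketch of the pipeline as described: TF-IDF features, a linear classifier for bullish/bearish labels, then Pearson correlation between aggregated daily sentiment and returns; the tweets and series below are toy stand-ins for the StockTwits data:

```python
# TF-IDF sentiment classifier + daily aggregation + Pearson correlation.
# All text and series here are toy data, not the StockTwits corpus.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["AAPL to the moon, strong buy", "selling everything, AAPL overpriced",
          "great earnings, bullish", "weak guidance, bearish on this stock"]
labels = [1, 0, 1, 0]                      # 1 = bullish, 0 = bearish

vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
clf = LogisticRegression().fit(vec.fit_transform(tweets), labels)

# score a new day's tweets and aggregate into a daily sentiment value
day_tweets = ["bullish momentum", "buy the dip", "bearish reversal coming"]
daily_sentiment = clf.predict_proba(vec.transform(day_tweets))[:, 1].mean()

# with arrays of daily sentiment and daily returns over the study window:
sentiment_series = np.array([0.60, 0.70, 0.40, 0.55, 0.65])
return_series = np.array([0.010, 0.020, -0.015, 0.005, 0.012])
r, p = pearsonr(sentiment_series, return_series)
```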
Procedia PDF Downloads 157
930 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated, and related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, improving computational efficiency. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
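The sketch below shows the quickest-change skeleton: KL divergences from an estimated current distribution to the pre- and post-change models drive a CUSUM statistic. Gaussian models are assumed for illustration, and the evidence-theory mass assignment and combination steps are omitted:

```python
# KL-divergence-driven CUSUM change detection (single-sensor sketch).
# Gaussian pre/post-change models are assumed; the Dempster-Shafer mass
# assignment and combination across sensors are not implemented here.
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """KL( N(mu0, s0^2) || N(mu1, s1^2) )."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

pre, post = (0.0, 1.0), (1.0, 1.0)     # (mean, std) before/after the change
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(1, 1, 300)])

cusum, h, window = 0.0, 5.0, 50
for n in range(window, data.size):
    est = (data[n - window:n].mean(), data[n - window:n].std(ddof=1))
    # closer to the post-change model -> positive drift in the statistic
    drift = kl_gauss(*est, *pre) - kl_gauss(*est, *post)
    cusum = max(0.0, cusum + drift)
    if cusum > h:
        print("change declared at sample", n)
        break
```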
Procedia PDF Downloads 335
929 Prediction of B-Cell Epitope for 24 Mite Allergens: An in Silico Approach towards Epitope-Based Immune Therapeutics
Authors: Narjes Ebrahimi, Soheila Alyasin, Navid Nezafat, Hossein Esmailzadeh, Younes Ghasemi, Seyed Hesamodin Nabavizadeh
Abstract:
Allergy vaccines are of great importance in allergen-specific immunotherapy. In recent years, B-cell epitope-based vaccines have attracted considerable attention, and the prediction of epitopes is crucial to the design of these types of allergy vaccines. B-cell epitopes may be linear or conformational. The prerequisite for the identification of conformational epitopes is information about the allergens' tertiary structures. Bioinformatics approaches have paved the way towards the design of epitope-based allergy vaccines through the prediction of tertiary structures and epitopes. Mite allergens are one of the major allergy contributors. Several mite allergens can elicit allergic reactions; however, their structures and epitopes are not well established. Therefore, B-cell epitopes of various groups of mite allergens (24 allergens in 6 allergen groups) were predicted in the present work. Tertiary structures of 17 allergens with unknown structure were predicted and refined with the RaptorX and GalaxyRefine servers, respectively. The predicted structures were further evaluated by the Rampage, ProSA-web, ERRAT, and Verify 3D servers. Linear and conformational B-cell epitopes were identified with the ElliPro, Bcepred, and DiscoTope 2 servers. To improve the accuracy level, consensus epitopes were selected. Fifty-four conformational and 133 linear consensus epitopes were predicted. Furthermore, overlapping epitopes in each allergen group were defined, following the sequence alignment of the allergens in each group. The predicted epitopes were also compared with the experimentally identified epitopes. The presented results provide valuable information for further studies on allergy vaccine design.
Keywords: B-cell epitope, immunotherapy, in silico prediction, mite allergens, tertiary structure
Procedia PDF Downloads 160
928 Non-Uniform Filter Banks-Based Minimum Distance to Riemannian Mean Classification in Motor Imagery Brain-Computer Interface
Authors: Ping Tan, Xiaomeng Su, Yi Shen
Abstract:
The motion intention in the motor imagery brain-computer interface is identified by classifying the event-related desynchronization (ERD) and event-related synchronization (ERS) characteristics of the sensorimotor rhythm (SMR) in EEG signals. When the subject imagines different limbs or different parts moving, the rhythm components and bandwidth will change, and this varies from person to person. How to find the effective sensorimotor frequency band of a subject is directly related to the classification accuracy of the brain-computer interface. To solve this problem, this paper proposes a Minimum Distance to Riemannian Mean classification method based on non-uniform filter banks. During the training phase, the EEG signals are first decomposed into multiple signals of different bandwidths using multiple band-pass filters; then the spatial covariance characteristics of each frequency-band signal are computed as feature vectors. These feature vectors are classified by the MDRM (Minimum Distance to Riemannian Mean) method, and cross-validation is employed to obtain the effective sensorimotor frequency bands. During the test phase, the test signals are filtered by the band-pass filters of the effective sensorimotor frequency bands, and the extracted spatial covariance feature vectors are classified using MDRM. Experiments on the BCI Competition IV 2a dataset show that the proposed method is superior to other classification methods.
Keywords: non-uniform filter banks, motor imagery, brain-computer interface, minimum distance to Riemannian mean
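MDRM itself is compact enough to sketch: the affine-invariant Riemannian distance between SPD covariance matrices and a Karcher mean per class; the filter-bank decomposition and band selection are omitted, and the random SPD matrices below merely stand in for band-passed EEG covariances:

```python
# Minimum Distance to Riemannian Mean (MDRM), bare-bones version.
import numpy as np
from scipy.linalg import fractional_matrix_power, logm, expm

def riemann_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    isqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(np.real(logm(isqrt @ B @ isqrt)), "fro")

def karcher_mean(covs, n_iter=10):
    """Fixed-point iteration for the Riemannian (Karcher) mean."""
    G = np.mean(covs, axis=0)                 # start at the arithmetic mean
    for _ in range(n_iter):
        sqrtG = fractional_matrix_power(G, 0.5)
        isqrt = fractional_matrix_power(G, -0.5)
        T = np.mean([np.real(logm(isqrt @ C @ isqrt)) for C in covs], axis=0)
        G = sqrtG @ expm(T) @ sqrtG
    return G

def mdrm_predict(trial_cov, class_means):
    """Assign the trial to the class whose Riemannian mean is nearest."""
    return min(class_means, key=lambda c: riemann_dist(trial_cov, class_means[c]))

# Stand-ins; in the method above each C comes from a band-passed EEG epoch X
# as C = X @ X.T / (n_samples - 1), with one classifier per frequency band.
rng = np.random.default_rng(0)
def random_spd(n=8):
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

class_means = {"left hand": karcher_mean([random_spd() for _ in range(6)]),
               "right hand": karcher_mean([random_spd() for _ in range(6)])}
print(mdrm_predict(random_spd(), class_means))
```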
Procedia PDF Downloads 126
927 Drilling Quantification and Bioactivity of Machinable Hydroxyapatite: Yttrium Phosphate Bioceramic Composite
Authors: Rupita Ghosh, Ritwik Sarkar, Sumit K. Pal, Soumitra Paul
Abstract:
The use of hydroxyapatite bioceramics as restorative implants is widely known. These materials can be manufactured by a pressing and sintering route to a particular shape. However, machining processes are still a basic requirement to give a near-net shape to these implants and to ensure dimensional and geometrical accuracy. In this context, optimising the machining parameters is an important factor in understanding the machinability of the materials and in reducing the production cost. In the present study, a method has been optimized to produce a true particulate drilled composite of hydroxyapatite and yttrium phosphate. The phosphates are used in varying ratios for a comparative study of the effects on flexural strength, hardness, machining (drilling) parameters, and bioactivity. The maximum flexural strength and hardness of the composite that could be attained are 46.07 MPa and 1.02 GPa, respectively. Drilling is done with a conventional radial drilling machine aided with a dynamometer, using high-speed steel (HSS) and solid carbide (SC) drills. The effects of variation in the drilling parameters (cutting speed and feed), cutting tool, and batch composition on torque, thrust force, and tool wear are studied. It is observed that the thrust force and torque vary greatly with increases in speed, feed, and yttrium phosphate content in the composite. Significant differences in thrust and torque are noticed due to the change of drills as well. The bioactivity study is done in simulated body fluid (SBF) for up to 28 days. The growth of bone-like apatite becomes denser with the increase in the number of days for all compositions of the composites, and it is comparable to that of pure hydroxyapatite.
Keywords: bioactivity, drilling, hydroxyapatite, yttrium phosphate
Procedia PDF Downloads 301
926 Blood Flow Simulations to Understand the Role of the Distal Vascular Branches of Carotid Artery in the Stroke Prediction
Authors: Muhsin Kizhisseri, Jorg Schluter, Saleh Gharie
Abstract:
Atherosclerosis is the main cause of stroke, which is one of the deadliest diseases in the world. The carotid artery supplying the brain is a prominent location for atherosclerotic progression, which hinders blood flow into the brain. The inclusion of computational fluid dynamics (CFD) in the diagnosis cycle, to understand the hemodynamics of the patient-specific carotid artery, can give insights into stroke prediction. Realistic outlet boundary conditions are an inevitable part of numerical simulations and are one of the major factors determining the accuracy of CFD results. Windkessel model-based outlet boundary conditions can capture more realistic characteristics of the distal vascular branches of the carotid artery, such as the resistance to blood flow and the compliance of the distal arterial walls. This study aims to find the most influential distal branches of the carotid artery by using the Windkessel model parameters in the outlet boundary conditions. A parametric study of the Windkessel model parameters can include the geometrical features of the distal branches, such as radius and length. Incorporating variations in the geometrical features of the major distal branches, such as the middle cerebral artery, anterior cerebral artery, and ophthalmic artery, through the Windkessel model can aid in identifying the most influential distal branch of the carotid artery. The results of this study can help physicians and stroke neurologists make a more detailed and accurate judgment of the patient's condition.
Keywords: stroke, carotid artery, computational fluid dynamics, patient-specific, Windkessel model, distal vascular branches
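A minimal Windkessel outlet sketch: a three-element model with proximal resistance, distal resistance, and compliance, driven by a prescribed inflow; the parameter values and waveform are illustrative, not patient-specific or taken from the study:

```python
# Three-element Windkessel outlet: compliance charges with inflow and
# discharges through the distal resistance; outlet pressure adds the
# proximal resistive drop. All parameters below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

R1, R2, C = 0.05, 1.0, 1.2   # proximal/distal resistance (mmHg*s/mL), compliance (mL/mmHg)
T = 0.8                      # cardiac period, s

def q_in(t):
    """Prescribed pulsatile inflow (mL/s): half-sine systole, zero diastole."""
    tau = t % T
    return 400 * np.sin(np.pi * tau / 0.3) if tau < 0.3 else 0.0

def wk3(t, p_dist):
    return (q_in(t) - p_dist / R2) / C

sol = solve_ivp(wk3, [0, 10 * T], [80.0], max_step=1e-3, dense_output=True)
t = np.linspace(9 * T, 10 * T, 200)                 # sample the last beat
p_outlet = np.array([q_in(ti) * R1 for ti in t]) + sol.sol(t)[0]
```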
Procedia PDF Downloads 216