Search results for: image processing techniques
10544 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure
Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer
Abstract:
The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data aimed at uncovering so-called “multimodal gestalts”, patterns of linguistic and embodied conduct that reoccur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive processes of manually transcribing each component from video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in a way that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search of many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (which can also extend to other fields using video materials). It will allow the automatic processing of large amounts of data and the implementation of quantitative analyses, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures).
Users will be able to automatically transcribe and annotate visual, spoken and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition
Procedia PDF Downloads 108
10543 Accelerating Side Channel Analysis with Distributed and Parallelized Processing
Authors: Kyunghee Oh, Dooho Choi
Abstract:
Even when there is no theoretical weakness in a cryptographic algorithm, Side Channel Analysis can recover some secret data from the physical implementation of a cryptosystem. The analysis is based on extra information such as timing information, power consumption, electromagnetic leaks or even sound, which can be exploited to break the system. Differential Power Analysis is one of the most popular analyses; it computes the statistical correlations between hypothesized secret keys and power consumption. It usually requires processing huge amounts of data and takes a long time; it may take several weeks for some devices with countermeasures. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems. Our methods include distributed computing and parallelized processing.
Keywords: DPA, distributed computing, parallelized processing, side channel analysis
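A minimal sketch of the correlation step at the heart of DPA, parallelized across key-byte guesses with Python's multiprocessing. The traces, the identity "S-box", and all sizes are hypothetical placeholders, not the authors' implementation; a real attack would use the AES S-box and measured power traces.

```python
import numpy as np
from multiprocessing import Pool

# Placeholder leakage model: Hamming weight of an S-box output.
SBOX = np.arange(256, dtype=np.uint8)              # identity placeholder; real DPA uses the AES S-box
HW = np.array([bin(x).count("1") for x in range(256)])

def correlate_guess(args):
    """Pearson correlation between measured traces and the leakage
    predicted for one key-byte guess (the core DPA computation)."""
    guess, plaintexts, traces = args
    predicted = HW[SBOX[plaintexts ^ guess]].astype(float)     # (n_traces,)
    p = predicted - predicted.mean()
    t = traces - traces.mean(axis=0)                           # (n_traces, n_samples)
    corr = (p @ t) / (np.linalg.norm(p) * np.linalg.norm(t, axis=0) + 1e-12)
    return guess, np.max(np.abs(corr))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    plaintexts = rng.integers(0, 256, 2000, dtype=np.uint8)    # one plaintext byte per trace
    traces = rng.normal(size=(2000, 500))                      # synthetic power traces

    # Distribute the 256 key-byte guesses over worker processes.
    with Pool() as pool:
        results = pool.map(correlate_guess,
                           [(g, plaintexts, traces) for g in range(256)])
    best_guess = max(results, key=lambda r: r[1])[0]
    print("most likely key byte:", best_guess)
```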
Procedia PDF Downloads 427
10542 Clay Palm Press: A Technique of Hand Building in Ceramics for Developing Conceptual Forms
Authors: Okewu E. Jonathan
Abstract:
There are several techniques of production in the field of ceramics. These different techniques have over time been categorised under three methods of production: casting, throwing and hand building. The hand building method of production is further broken down into other techniques, including coiling, slabbing and pinching. Ceramic artists find the different hand building techniques to be very interesting, practicable and rewarding. This has encouraged ceramic artists in their various studios at different levels to experiment with further hand building techniques that could be unique and unusual. The art of “Clay Palm Press” is a development from studio experiments in a quest for uniqueness in conceptual ceramic practice. Clay palm press is a technique that requires no formal tutelage but, at the same time, is not easily comprehensible when viewed. It is the practice of putting semi-solid clay in the palm and applying closed-fist pressure so as to take the imprint of the human palm. This clay form from the palm, when dried, fired and explored as an artwork, reveals the absolute awesomeness of what the palm imprint can produce.
Keywords: ceramics, clay palm press, conceptual forms, hand building, technique
Procedia PDF Downloads 280
10541 Influence of Processing Parameters on the Reliability of Sieving as a Particle Size Distribution Measurements
Authors: Eseldin Keleb
Abstract:
In the pharmaceutical industry, particle size distribution is an important parameter for the characterization of pharmaceutical powders. The powder flowability, reactivity and compatibility, which have a decisive impact on the final product, are determined by particle size and size distribution. Therefore, the aim of this study was to evaluate the influence of processing parameters on particle size distribution measurements. Different size fractions of α-lactose monohydrate and 5% polyvinylpyrrolidone were prepared by wet granulation and were used for the preparation of samples. The influence of sieve load (50, 100, 150, 200, 250, 300, and 350 g), processing time (5, 10, and 15 min), sample size ratios (high percentage of small and large particles), type of disturbance (vibration and shaking) and process reproducibility have been investigated. The results showed that a sieve load of 50 g produces the best separation; a further increase in sample weight resulted in incomplete separation even after extending the processing time to 15 min. Sieving using vibration was faster and more efficient than shaking. Meanwhile, between-day reproducibility showed that particle size distribution measurements are reproducible. However, for samples containing 70% fines or 70% large particles processed at the optimized parameters, incomplete separation was always observed. These results indicate that sieving reliability is highly influenced by the particle size distribution of the sample, and care must be taken for samples with skewed particle size distributions.
Keywords: sieving, reliability, particle size distribution, processing parameters
Procedia PDF Downloads 613
10540 Comprehensive Evaluation of COVID-19 Through Chest Images
Authors: Parisa Mansour
Abstract:
The coronavirus disease 2019 (COVID-19) was discovered at the end of 2019 and rapidly spread to various countries around the world. Computed tomography (CT) images have been used as an important alternative to the time-consuming RT-PCR test. However, manual segmentation of CT images alone is a major challenge as the number of suspected cases increases. Thus, accurate and automatic segmentation of COVID-19 infections is urgently needed. Because the imaging features of COVID-19 infection are diverse and often similar to the background, existing medical image segmentation methods cannot achieve satisfactory performance. In this work, we build a deep convolutional neural network adapted for the segmentation of chest CT images with COVID-19 infections. First, we maintain a large and novel chest CT image database containing 165,667 annotated chest CT images from 861 patients with confirmed COVID-19. Inspired by the observation that the boundary of an infected lung region can be improved by global intensity adjustment, we introduce a feature variable block into the proposed deep CNN, which adjusts the global properties of the features used to segment the COVID-19 infection. The proposed feature variable block can effectively and adaptively improve the performance of the features in different cases. We combine features of different scales by proposing a progressive atrous spatial pyramid fusion scheme to deal with advanced infection regions of various appearances and shapes. We conducted experiments on data collected in China and Germany and showed that the proposed deep CNN can produce impressive performance.
Keywords: chest, COVID-19, chest image, coronavirus, CT image, chest CT
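The abstract does not give the exact architecture of the "feature variable block", but the described global adjustment of features resembles a squeeze-and-excitation style modulation. A minimal PyTorch sketch of such a block is given below; the class name, channel counts and scale/shift design are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class FeatureVariationBlock(nn.Module):
    """Illustrative block: global average pooling summarizes the whole feature
    map, and a small 1x1-conv MLP predicts per-channel scale/shift terms that
    re-adjust the features globally (in the spirit of the block described above)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        stats = self.mlp(self.pool(x))          # (N, 2C, 1, 1) global statistics
        scale, shift = stats.chunk(2, dim=1)    # per-channel modulation terms
        return x * torch.sigmoid(scale) + shift

# Example usage inside a segmentation encoder stage (dummy CT feature maps)
block = FeatureVariationBlock(channels=64)
features = torch.randn(2, 64, 128, 128)
print(block(features).shape)                    # torch.Size([2, 64, 128, 128])
```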
Procedia PDF Downloads 57
10539 In vitro Method to Evaluate the Effect of Steam-Flaking on the Quality of Common Cereal Grains
Authors: Wanbao Chen, Qianqian Yao, Zhenming Zhou
Abstract:
Whole grains with an intact pericarp are largely resistant to digestion by ruminants because entire kernels are not conducive to bacterial attachment. Processing, however, makes the starch more accessible to microbes and increases the rate and extent of starch degradation in the rumen. To assess the feasibility of steam-flaking as a processing technique of grains for ruminants, cereal grains (maize, wheat, barley and sorghum) were processed by steam-flaking (steam temperature 105°C, heating time 45 min). Chemical analysis, in vitro gas production, volatile fatty acid concentrations, and energetic values were used to evaluate the effects of steam-flaking. In vitro cultivation was conducted for 48 h with rumen fluid collected from steers fed a total mixed ration consisting of 40% hay and 60% concentrates. The results showed that steam-flaking had a significant effect on the contents of neutral detergent fiber and acid detergent fiber (P < 0.01). The degree of starch gelatinization was also greatly improved in steam-flaked grains, as steam-flaking disintegrates the crystal structure of cereal starch, which may subsequently facilitate the absorption of moisture and swelling. Theoretical maximum gas production after steam-flaking showed no great difference. However, compared with intact grains, total gas production at 48 h and the rate of gas production were significantly (P < 0.01) increased in all types of grain. Furthermore, there was no effect of steam-flaking on total volatile fatty acids, but a decrease in the ratio between acetate and propionate was observed in the current in vitro fermentation. The present study also found that steam-flaking increased (P < 0.05) the organic matter digestibility and energy concentration of the grains. The collective findings suggest that steam-flaking of grains could improve their rumen fermentation and energy utilization by ruminants. In conclusion, the utilization of steam-flaking would be practical for improving the quality of common cereal grains.
Keywords: cereal grains, gas production, in vitro rumen fermentation, steam-flaking processing
Procedia PDF Downloads 270
10538 Enhancing Seawater Desalination Efficiency with Combined Reverse Osmosis and Vibratory Shear-Enhanced Processing for Higher Conversion Rates and Reduced Energy Consumption
Authors: Reda Askouri, Mohamed Moussetad, Rhma Adhiri
Abstract:
Reverse osmosis (RO) is one of the most widely used techniques for seawater desalination. However, the conversion rate of this method is generally limited to 35-45% by the pressure capacity of the membranes. Additionally, the specific energy consumption (SEC) for seawater desalination is high, necessitating energy recovery systems to minimise energy consumption. This study aims to enhance the performance of seawater desalination by combining RO with a vibratory shear-enhanced processing (VSEP) technique. The RO unit in this study comprises two stages, each powered by a hydraulic turbocharger that increases the pressure in both stages. The concentrate from the second stage is then directly processed by VSEP technology. The results demonstrate that the permeate water obtained exhibits high quality and that the conversion rate is significantly increased, reaching high percentages with a low SEC. Furthermore, the high concentration of total solids in the concentrate allows for potential exploitation within an environmental protection framework. By valorising the concentrated waste, it is possible to reduce the environmental impact while increasing the overall efficiency of the desalination process.
Keywords: specific energy consumption, vibratory shear enhanced process, environmental challenge, water recovery
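For orientation, the two headline figures discussed here, water recovery (conversion rate) and specific energy consumption, follow directly from flow and power readings, as in this small sketch. All numbers are illustrative assumptions, not measured values from the study.

```python
def recovery_rate(permeate_flow_m3h: float, feed_flow_m3h: float) -> float:
    """Conversion (recovery) rate = permeate flow / feed flow."""
    return permeate_flow_m3h / feed_flow_m3h

def specific_energy_consumption(power_kw: float, permeate_flow_m3h: float) -> float:
    """SEC in kWh per cubic metre of permeate produced."""
    return power_kw / permeate_flow_m3h

# Illustrative two-stage RO + VSEP balance (hypothetical numbers)
feed = 100.0              # m^3/h of seawater
ro_permeate = 42.0        # m^3/h after the two RO stages
vsep_permeate = 18.0      # m^3/h recovered from the RO concentrate by VSEP
total_permeate = ro_permeate + vsep_permeate

print(f"RO-only recovery:  {recovery_rate(ro_permeate, feed):.0%}")
print(f"Combined recovery: {recovery_rate(total_permeate, feed):.0%}")
print(f"SEC: {specific_energy_consumption(210.0, total_permeate):.2f} kWh/m^3")
```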
Procedia PDF Downloads 12
10537 A Review on Existing Challenges of Data Mining and Future Research Perspectives
Authors: Hema Bhardwaj, D. Srinivasa Rao
Abstract:
Technology for analysing, processing, and extracting meaningful data from enormous and complicated datasets can be termed "big data." The techniques of big data mining and big data analysis are extremely helpful for business activities such as making decisions, building organisational plans, researching the market efficiently, improving sales, etc., because typical management tools cannot handle such complicated datasets. Big data brings special computational and statistical issues, such as measurement errors, noise accumulation, spurious correlation, and storage and scalability limitations. These unique problems call for new computational and statistical paradigms. This research paper offers an overview of the literature on big data mining and its process, along with its problems and difficulties, with a focus on the unique characteristics of big data. Organizations face several difficulties when undertaking data mining, which has an impact on their decision-making. Every day, terabytes of data are produced, yet only around 1% of that data is actually analyzed. This study presents the ideas of data mining and analysis and the knowledge discovery techniques that have recently been created, together with practical application systems. The article's conclusion also includes a list of issues and difficulties for further research in the area. The report discusses the main big data and data mining challenges faced by management.
Keywords: big data, data mining, data analysis, knowledge discovery techniques, data mining challenges
Procedia PDF Downloads 110
10536 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients
Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho
Abstract:
Multiple Sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information details provided, is the gold standard exam for the diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for further analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation have been extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate the brain volume in MRI of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational methods for the analysis of images were carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was to perform brain extraction by skull stripping from the original image. In the skull stripper for MRI images of the brain, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (edge of the brain-skull border with dedicated expansion, curvature, and advection terms). In the second step, the brain volume quantification was performed by counting the voxels belonging to the segmentation mask and converting them to cubic centimetres (cc). We observed an average brain volume of 1469.5 cc. We concluded that the automatic method applied in this work can be used for the brain extraction process and brain volume quantification in MRI. The development and use of computer programs can contribute to assisting health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future works, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper
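The final quantification step described, counting the voxels of the brain mask and converting them to cubic centimetres, can be sketched in a few lines with nibabel and NumPy. The file name is a placeholder, and the sketch assumes the skull-stripping step has already produced a binary mask volume.

```python
import nibabel as nib
import numpy as np

def brain_volume_cc(mask_path: str) -> float:
    """Brain volume = number of mask voxels x volume of one voxel, in cc."""
    img = nib.load(mask_path)                      # binary brain mask from skull stripping
    mask = img.get_fdata() > 0
    dx, dy, dz = img.header.get_zooms()[:3]        # voxel size in mm
    voxel_volume_mm3 = dx * dy * dz
    return mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> cm^3 (cc)

# Hypothetical usage for one MS patient scan
print(f"Brain volume: {brain_volume_cc('patient01_brain_mask.nii.gz'):.1f} cc")
```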
Procedia PDF Downloads 146
10535 Urdu Text Extraction Method from Images
Authors: Samabia Tehsin, Sumaira Kausar
Abstract:
Due to the vast increase in multimedia data in recent years, efficient and robust retrieval techniques are needed to retrieve and index images and videos. Text embedded in images can serve as a strong retrieval tool for images. This is the reason that text extraction is an area of research receiving increasing attention. English text extraction has been the focus of many researchers, but much less work has been done on other languages like Urdu. This paper focuses on Urdu text extraction from video frames. It presents a text detection feature set which is able to deal with most of the problems connected with the text extraction process. To test the validity of the method, it is evaluated on an Urdu news dataset, which gives promising results.
Keywords: caption text, content-based image retrieval, document analysis, text extraction
Procedia PDF Downloads 516
10534 Scar Removal Strategy for Fingerprint Using Diffusion
Authors: Mohammad A. U. Khan, Tariq M. Khan, Yinan Kong
Abstract:
Fingerprint image enhancement is one of the most important steps in an automatic fingerprint identification system (AFIS) and directly affects the overall efficiency of the AFIS. Conventional fingerprint enhancement techniques such as Gabor and anisotropic filters do fill the gaps in ridge lines, but they fail to tackle scar lines. To deal with this problem, we propose a method for enhancing ridges and valleys affected by scars so that true minutiae points can be extracted with accuracy. Our results show an improved performance in terms of enhancement.
Keywords: fingerprint image enhancement, removing noise, coherence, enhanced diffusion
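The abstract names coherence-enhanced diffusion but gives no implementation details. As a minimal, generic illustration of edge-preserving (anisotropic) diffusion on a fingerprint image, a Perona-Malik sketch is shown below; it is not the authors' exact filter, and the iteration count and conduction parameters are assumptions.

```python
import numpy as np

def perona_malik(image: np.ndarray, n_iter: int = 30, kappa: float = 20.0,
                 gamma: float = 0.15) -> np.ndarray:
    """Edge-preserving diffusion: smooths within ridges and valleys while
    limiting diffusion across strong gradients such as ridge borders."""
    img = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients towards the four neighbours
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # Conduction coefficients: small across strong edges, large in flat areas
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img

# Hypothetical usage on a grayscale fingerprint array in [0, 255]
fingerprint = np.random.rand(256, 256) * 255
enhanced = perona_malik(fingerprint)
```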
Procedia PDF Downloads 516
10533 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation for osseous metastatic disease and provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model which can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this method. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and the Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involves balancing between the size and the similarity of the dataset. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models, in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures with PyTorch, will be compared. In the future, molecular correlations will be tracked alongside radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
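The evaluation metric quoted throughout, the Dice Similarity Coefficient between a predicted and a manually drawn lesion mask, is defined as 2|A ∩ B| / (|A| + |B|) and can be computed as in this short sketch; the masks below are placeholders, not study data.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical predicted and ground-truth vertebral lesion masks
pred = np.zeros((512, 512), dtype=bool); pred[100:150, 200:260] = True
truth = np.zeros((512, 512), dtype=bool); truth[110:155, 205:255] = True
print(f"DSC = {dice_coefficient(pred, truth):.2f}")
```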
Procedia PDF Downloads 96
10532 Analysis of Erosion Quantity on Application of Conservation Techniques in Ci Liwung Hulu Watershed
Authors: Zaenal Mutaqin
Abstract:
The erosion that occurs in an upstream watershed leads to limited infiltration, land degradation and the silting of rivers and estuaries. One of the watersheds degraded by land use is the upstream Ci Liwung watershed (DA Ci Liwung Hulu). The high degradation occurring in the upstream Ci Liwung watershed is indicated by the higher rate of erosion in the region, especially in agricultural areas. In this case, agricultural cultivation refers to agricultural land to which conservation techniques have been applied. This study determines the quantity of erosion by reviewing the Hydrologic Response Units (HRUs) of agricultural cultivation land contained in the upstream Ci Liwung watershed, using the Soil and Water Assessment Tool (SWAT). The conservation techniques applied are terracing, agroforestry and gulud terraces. It was concluded that the agroforestry conservation technique shows the best (lowest) erosion value compared with the other conservation techniques, with an erosion contribution of 25.22 tonnes/ha/year. The calibration of the modelled discharge against observations (R² = 0.9014 and NS = 0.79) indicates that the model is acceptable and feasible to apply to the Ci Liwung Hulu watershed.
Keywords: conservation, erosion, SWAT analysis, watershed
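The calibration statistics quoted, R² and the Nash-Sutcliffe efficiency (NS), compare simulated and observed discharge series. A minimal sketch of their computation follows; the discharge values are placeholders, not the study's data.

```python
import numpy as np

def nash_sutcliffe(observed: np.ndarray, simulated: np.ndarray) -> float:
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def r_squared(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Square of the Pearson correlation between observed and simulated flows."""
    return np.corrcoef(observed, simulated)[0, 1] ** 2

# Hypothetical monthly discharge series (m^3/s)
obs = np.array([12.1, 15.4, 30.2, 25.7, 18.3, 10.9])
sim = np.array([11.5, 16.0, 28.9, 24.1, 19.2, 11.3])
print(f"R^2 = {r_squared(obs, sim):.3f}, NS = {nash_sutcliffe(obs, sim):.3f}")
```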
Procedia PDF Downloads 292
10531 Effects of Different Thermal Processing Routes and Their Parameters on the Formation of Voids in PA6 Bonded Aluminum Joints
Authors: Muhammad Irfan, Guillermo Requena, Jan Haubrich
Abstract:
Adhesively bonded aluminum joints are common in the automotive and aircraft industries and are one of the enablers of lightweight construction to minimize carbon emissions during transportation for a sustainable life. This study is focused on the effects of two thermal processing routes, i.e., direct and induction heating, and their parameters on void formation in PA6 bonded aluminum EN-AW6082 joints. The joints were characterized microanalytically as well as by lap shear experiments. The aging resistance of the joints was studied by accelerated aging tests in 80°C hot water. It was found that processing single lap joints by direct heating in a convection oven causes the formation of a large number of voids in the bond line. The formation of voids in the convection oven was due to longer processing times and was independent of any surface pretreatment of the metal as well as of the processing temperature. However, when processing at low temperatures, a large number of small voids were observed under the optical microscope; at higher temperatures they were larger in size but fewer in number. An induction heating process was developed which not only successfully reduced or eliminated the voids in PA6 bonded joints but also reduced the processing times for joining significantly. Consistent with the trend in direct heating, longer processing times and higher temperatures in induction heating also led to increased formation of voids in the bond line. Subsequent single lap shear tests revealed that the increasing void content led to a 21% reduction in lap shear strength (i.e., from ~47 MPa for induction heating to ~37 MPa for direct heating). Also, there was a 17% reduction in lap shear strength when the consolidation temperature was raised from 220˚C to 300˚C during induction heating. However, below a certain threshold of void content, there was no observable effect on the lap shear strength or on the hydrothermal aging resistance of the joints consolidated by the induction heating process.
Keywords: adhesive, aluminium, convection oven, induction heating, mechanical properties, nylon6 (PA6), pretreatment, void
Procedia PDF Downloads 122
10530 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation
Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang
Abstract:
The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve a better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, uniform or pulsatile flow at the OG, etc. We have hypothesized that the optimal implantation of an LVAD is patient-specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of the cardiac output at the aortic arch together with other pertinent hemodynamic quantities for each patient under various implantation scenarios, aiming to obtain an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aorta from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity wave from an echo ultrasound image of the same patient, to the computational model to quantify 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes. Thus it has great potential for massive numerical simulation and analysis. The systematic evaluation for one patient includes three OG anastomoses (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, 5 random patient cases will be evaluated.
Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics
Procedia PDF Downloads 133
10529 E-Learning Platform for School Kids
Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.
Abstract:
E-learning is a crucial component of intelligent education. Even in the midst of a pandemic, e-learning is becoming increasingly important in the educational system. Several e-learning programs are accessible to students. Here, we decided to create an e-learning framework for children. We have found a few issues that teachers are having with their online classes. When there are numerous students in an online classroom, how does a teacher recognize a student's focus on academics and below-the-surface behaviors? Some kids are not paying attention in class, and others are napping. The teacher is unable to keep track of each and every student. A key challenge in e-learning is online exams: because students can cheat easily during online exams, exam proctoring is needed. Here we propose an automated online exam cheating detection method using a web camera. The purpose of this project is also to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The game will be accessible via a web browser. The imagery in the game is drawn in a cartoonish style. This will help students learn math through games. Everything in this day and age is moving towards automation; however, automatic answer evaluation is only available for MCQ-based questions. As a result, the checker has a difficult time evaluating theory solutions. The current system requires more manpower and takes a long time to evaluate responses. It is also possible for two identical responses to be marked differently and receive two different grades. As a result, this application employs machine learning techniques to provide an automatic evaluation of subjective responses, based on the keywords provided to the computer and the student's answer as input, resulting in a fair distribution of marks. In addition, it will save time and manpower. We used deep learning, machine learning, image processing and natural language technologies to develop these research components.
Keywords: math, education games, e-learning platform, artificial intelligence
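One simple way to realize the keyword-based automatic evaluation of subjective answers described above is an overlap score between teacher-supplied keywords and the student's response. The sketch below is an illustration of that idea only, not the project's actual model, and the marking scheme and example answer are assumptions.

```python
import re

def keyword_score(answer: str, keywords, max_marks: float = 10.0) -> float:
    """Award marks in proportion to how many expected keywords appear in the answer."""
    tokens = set(re.findall(r"[a-z]+", answer.lower()))
    hits = sum(1 for kw in keywords if kw.lower() in tokens)
    return round(max_marks * hits / len(keywords), 2)

expected = ["triangle", "hypotenuse", "square", "pythagoras"]
student_answer = ("By Pythagoras, the square on the hypotenuse of a right "
                  "triangle equals the sum of the squares on the other two sides.")
print(keyword_score(student_answer, expected))   # 10.0 - all keywords present
```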
Procedia PDF Downloads 156
10528 Feasibility Study of Particle Image Velocimetry in the Muzzle Flow Fields during the Intermediate Ballistic Phase
Authors: Moumen Abdelhafidh, Stribu Bogdan, Laboureur Delphine, Gallant Johan, Hendrick Patrick
Abstract:
This study is part of an ongoing effort to improve the understanding of phenomena occurring during the intermediate ballistic phase, such as muzzle flows. A thorough comprehension of muzzle flow fields is essential for optimizing muzzle device and projectile design. This flow characterization has heretofore been almost entirely limited to local and intrusive measurement techniques such as pressure measurements using pencil probes. Consequently, the body of quantitative experimental data is limited, as is the number of numerical codes validated in this field. The objective of the work presented here is to demonstrate the applicability of the Particle Image Velocimetry (PIV) technique in the challenging environment of the propellant flow of a .300 Blackout weapon in order to provide accurate velocity measurements. The key points of a successful PIV measurement are the selection of the particle tracer, the seeding technique, and the tracking characteristics. We have experimentally investigated the aforementioned points by evaluating the resistance, gas dispersion, laser light reflection, as well as the response to a step change across the Mach disk, for five different solid tracers using two seeding methods. To this end, an experimental setup was built consisting of a PIV system, combustion chamber pressure measurement, classical high-speed schlieren visualization, and an aerosol spectrometer. The latter is used to determine the particle size distribution in the muzzle flow. The experimental results demonstrated the ability of PIV to accurately resolve the salient features of the propellant flow, such as the underexpanded jet and vortex rings, as well as the instantaneous velocity field, with maximum centreline velocities of more than 1000 m/s. Besides, naturally present unburned particles in the gas, and solid ZrO₂ particles with a nominal size of 100 nm coated on the propellant powder, are suitable as tracers. However, the TiO₂ particles intended to act as a tracer surprisingly not only melted but also functioned as a combustion accelerator and decreased the number of particles in the propellant gas.
Keywords: intermediate ballistic, muzzle flow fields, particle image velocimetry, propellant gas, particle size distribution, underexpanded jet, solid particle tracers
Procedia PDF Downloads 161
10527 Video Heart Rate Measurement for the Detection of Trauma-Related Stress States
Authors: Jarek Krajewski, David Daxberger, Luzi Beyer
Abstract:
Finding objective and non-intrusive measurements of emotional and psychopathological states (e.g., post-traumatic stress disorder, PTSD) is an important challenge. The approach proposed here therefore applies photoplethysmographic imaging (PPGI) to facial RGB camera videos to estimate heart rate levels. A pipeline for processing the raw image signal has been proposed, containing different preprocessing approaches, e.g., Independent Component Analysis, Non-negative Matrix Factorization, and various other artefact correction approaches. Under resting and constant light conditions, we reached a sensitivity of 84% for pulse peak detection. The results indicate that PPGI can be a suitable solution for providing heart rate data from which trauma-related stress states may be indirectly inferred.
Keywords: heart rate, PTSD, PPGI, stress, preprocessing
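A minimal sketch of the last stage of such a pipeline: estimating the pulse rate from the frame-wise mean green-channel value of a facial region via band-pass filtering and a spectral peak. It assumes the face ROI has already been extracted and uses a synthetic signal; it is not the authors' pipeline, and the band limits and frame rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate from the frame-wise mean green value of a face ROI."""
    detrended = green_means - np.mean(green_means)
    # Band-pass 0.7-4 Hz (42-240 bpm), the physiologically plausible pulse range
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, detrended)
    freqs, power = welch(filtered, fs=fps, nperseg=min(256, len(filtered)))
    return 60.0 * freqs[np.argmax(power)]          # dominant frequency in bpm

# Synthetic 30 s example: a ~72 bpm pulse buried in noise, 30 fps camera
fps = 30.0
t = np.arange(0, 30, 1 / fps)
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.3, t.size)
print(f"Estimated heart rate: {heart_rate_bpm(signal, fps):.0f} bpm")
```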
Procedia PDF Downloads 124
10526 Relative Clause Attachment Ambiguity Resolution in L2: the Role of Semantics
Authors: Hamideh Marefat, Eskandar Samadi
Abstract:
This study examined the effect of semantics on the processing of ambiguous sentences containing Relative Clauses (RCs) preceded by a complex Determiner Phrase (DP) by Persian-speaking learners of L2 English with different proficiency levels and Working Memory Capacities (WMCs). The semantic relationship studied was that between the subject of the main clause and one of the DPs in the complex DP, to see whether, as predicted by the Spreading Activation Model, priming one of the DPs through this semantic manipulation affects the L2ers' preference. The results of a task using Rapid Serial Visual Presentation (a time-controlled paradigm) showed that manipulating the relationship between the subject of the main clause and one of the DPs in the complex DP preceding the RC has no effect on the choice of the antecedent; rather, the L2ers' processing is guided by phrase structure information. Moreover, while proficiency did not have any effect on the participants' preferences, WMC brought about a difference in their preferences, with a DP1 preference by those with a low WMC. This finding supports the chunking hypothesis and the predicate proximity principle, which is the strategy also used by monolingual Persian speakers.
Keywords: semantics, relative clause processing, ambiguity resolution, proficiency, working memory capacity
Procedia PDF Downloads 623
10525 Influence of Chemical Processing Treatment on Handle Properties of Worsted Suiting Fabric
Authors: Priyanka Lokhande, Ram P. Sawant, Ganesh Kakad, Avinash Kolhatkar
Abstract:
In order to evaluate the influence of chemical processing on the low-stress mechanical properties and fabric hand of worsted cloth, eight worsted suiting fabric samples of balanced plain and twill weave were studied. The Kawabata KES-FB system was used to measure the low-stress mechanical properties of the worsted suiting fabrics before and after chemical processing. Primary hand values and Total Hand Values (THV) of the fabrics before and after chemical processing were calculated using the KES-FB test data. Statistical analysis shows that chemical processing has a considerable influence on the low-stress mechanical properties, and thereby on the handle properties, of worsted suiting fabrics. An improvement in the Total Hand Value (THV) after chemical processing is observed in most of the fabric samples.
Keywords: low stress mechanical properties, plain and twill weave, total hand value (THV), worsted suiting fabric
Procedia PDF Downloads 282
10524 A Study of Common Carotid Artery Behavior from B-Mode Ultrasound Image for Different Gender and BMI Categories
Authors: Nabilah Ibrahim, Khaliza Musa
Abstract:
An increase in the intima-media thickness (IMT), which involves changes in the diameter of the carotid artery, is one of the early signs of an atherosclerotic lesion. The manual measurement of arterial diameter is time-consuming and lacks reproducibility. Thus, this study reports an automatic approach to finding the behaviour of the arterial diameter for different gender and body mass index (BMI) categories, focusing on a tracked region. The BMI categories are underweight, normal, and overweight. Canny edge detection is applied to the B-mode image to extract the important information to be treated as the carotid wall boundary. The results show a significant difference in arterial diameter between the male and female groups, amounting to a 2.5% difference. In addition, the significant result for the BMI categories is a decrease in arterial diameter proportional to the BMI.
Keywords: B-mode ultrasound image, carotid artery diameter, Canny edge detection, body mass index
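A simplified sketch of how a per-column lumen diameter can be read off a Canny edge map of a longitudinal B-mode region of interest. The blur kernel, Canny thresholds, pixel spacing and file name are assumptions, not the study's values, and a real pipeline would also track the wall boundaries over time.

```python
import cv2
import numpy as np

def mean_diameter_mm(bmode_roi: np.ndarray, mm_per_pixel: float) -> float:
    """Estimate mean arterial diameter from a longitudinal B-mode ROI:
    detect wall edges with Canny, then measure the distance between the
    uppermost and lowermost edge pixel in every image column."""
    edges = cv2.Canny(cv2.GaussianBlur(bmode_roi, (5, 5), 0), 30, 90)
    diameters = []
    for col in range(edges.shape[1]):
        rows = np.flatnonzero(edges[:, col])
        if rows.size >= 2:                       # need both near and far wall
            diameters.append((rows[-1] - rows[0]) * mm_per_pixel)
    return float(np.mean(diameters)) if diameters else float("nan")

# Hypothetical usage on a cropped carotid ROI (8-bit grayscale, placeholder path)
roi = cv2.imread("carotid_roi.png", cv2.IMREAD_GRAYSCALE)
print(f"Mean diameter: {mean_diameter_mm(roi, mm_per_pixel=0.06):.2f} mm")
```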
Procedia PDF Downloads 444
10523 Normalized Compression Distance Based Scene Alteration Analysis of a Video
Authors: Lakshay Kharbanda, Aabhas Chauhan
Abstract:
In this paper, an application of the Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes in which significant aberrations between the frames of a video have occurred are identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and defining scene alterations using Pixel Difference Percentage metrics.
Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error
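The NCD itself follows directly from compressed sizes. A short sketch using the same two compressors named above (BZIP2 and LZMA), applied to the raw bytes of two frames; the frame bytes here are synthetic placeholders rather than decoded video frames.

```python
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=bz2.compress) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical frame bytes (e.g., consecutive video frames read as raw arrays)
frame_a = bytes(range(256)) * 100
frame_b = frame_a[:-500] + bytes(500)          # slightly altered frame
frame_c = bytes(25600)                         # very different frame

print(f"similar frames : {ncd(frame_a, frame_b):.3f}")
print(f"different scene: {ncd(frame_a, frame_c):.3f}")
print(f"with LZMA      : {ncd(frame_a, frame_b, compress=lzma.compress):.3f}")
```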
Procedia PDF Downloads 340
10522 An Agent-Based Modelling Simulation Approach to Calculate Processing Delay of GEO Satellite Payload
Authors: V. Vicente E. Mujica, Gustavo Gonzalez
Abstract:
The global coverage of broadband multimedia and internet-based services in terrestrial-satellite networks is of particular interest to satellite providers seeking to deliver services with low latency and high signal quality to diverse users. In particular, the delay of on-board processing is an inherent source of latency in satellite communication that is sometimes disregarded in the end-to-end delay of the satellite link. The framework of this paper includes the modelling of an on-orbit satellite payload using an agent model that can reproduce the properties of processing delays. In essence, a comparison of different spatial interpolation methods is carried out to evaluate physical data obtained by a GEO satellite in order to define a discretization function for determining that delay. Furthermore, the performance of the proposed agent and the developed delay discretization function are validated together by simulating a hybrid satellite and terrestrial network. Simulation results show high accuracy with respect to the characteristics of the initial processing delay data points for the Ku band.
Keywords: terrestrial-satellite networks, latency, on-orbit satellite payload, simulation
Procedia PDF Downloads 271
10521 Load Management Using Multiple Sequential Load Shaping Techniques
Authors: Amira M. Attia, Karim H. Youssef, Nabil H. Abbasi
Abstract:
Demand Side Management (DSM) is an essential characteristic of current and future smart grid systems. As one of the DSM functions, load management aims to control customers' total electric consumption and the utility's load factor by using various load shaping techniques. However, applying load shaping techniques such as load shifting, peak clipping, or strategic conservation individually does not provide the desired level of improvement in load factor and/or reduction of customers' bills. In this paper, two load shaping techniques are simulated as constrained optimization problems. The purpose is to study the application of a combined load shifting and strategic conservation model, as well as a combined load shifting and peak clipping model. The problem is formulated and solved using disciplined convex programming (CVX) in MATLAB® R2013b. Simulation results are evaluated and compared to identify the multi-technique model with the greatest impact on improving the load curve.
Keywords: convex programming, demand side management, load shaping, multiple, building energy optimization
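The paper formulates the combined load shaping models as constrained convex programs in MATLAB CVX. An analogous, hypothetical formulation in Python with cvxpy is sketched below: it minimizes the peak of the shaped load while keeping the total daily energy constant and capping how much of each hour's load can be shifted. The load profile and the 20% shiftable fraction are assumptions, not the paper's data or exact constraint set.

```python
import cvxpy as cp
import numpy as np

hours = 24
base_load = np.array([300, 280, 270, 265, 270, 300, 380, 450, 520, 560, 580, 600,
                      610, 600, 590, 580, 600, 650, 700, 680, 620, 520, 420, 350],
                     dtype=float)                     # hypothetical hourly load (kW)

shift = cp.Variable(hours)            # energy moved out of (+) or into (-) each hour
shaped = base_load - shift

constraints = [
    cp.sum(shift) == 0,               # shifted energy is recovered, not curtailed
    cp.abs(shift) <= 0.2 * base_load, # at most 20% of any hour's load is shiftable
    shaped >= 0,
]
# Peak clipping via load shifting: minimize the maximum of the shaped load curve
problem = cp.Problem(cp.Minimize(cp.max(shaped)), constraints)
problem.solve()

print(f"original peak: {base_load.max():.0f} kW")
print(f"shaped peak  : {problem.value:.0f} kW")
print(f"load factor  : {base_load.mean() / base_load.max():.2f} -> "
      f"{base_load.mean() / problem.value:.2f}")
```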
Procedia PDF Downloads 313
10520 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as the CCV (Color Coherence Vector), are based on strong segmentation. Indexation based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is consequently reduced to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is taken into account to rank images, which means that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexation method in order to compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which of the N*N sub-region pairs to base the indexing of images. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving another preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we have proposed some ideas for selecting the local histogram on which we rely to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance
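A compact sketch of the block-based scheme described: split each image into a grid of sub-regions, compute a color histogram per block, and take the minimum pairwise Euclidean distance between the two images' block histograms as the ranking value. The 4x4 grid, 8 bins per channel, and non-overlapping blocks are arbitrary simplifying choices, not the paper's exact configuration.

```python
import numpy as np

def local_histograms(image: np.ndarray, n: int = 4, bins: int = 8) -> np.ndarray:
    """Split an RGB image into an n x n grid and return one normalized
    color histogram per block (shape: (n*n, bins**3))."""
    h, w = image.shape[:2]
    hists = []
    for i in range(n):
        for j in range(n):
            block = image[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins, bins, bins), range=[(0, 256)] * 3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def min_block_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Lowest Euclidean distance among all block-histogram pairs,
    i.e. the value used to rank images in the local-histogram scheme."""
    ha, hb = local_histograms(img_a), local_histograms(img_b)
    dists = np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=2)
    return float(dists.min())

# Hypothetical usage with two random "images"
a = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
print(min_block_distance(a, b))
```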
Procedia PDF Downloads 359
10519 Correlates of Income Generation of Small-Scale Fish Processors in Abeokuta Metropolis, Ogun State, Nigeria
Authors: Ayodeji Motunrayo Omoare
Abstract:
Economically, fish provides an important source of food and income for both men and women, especially for many households in the developing world, and fishing holds an important social and cultural position in riverine communities. However, fish is highly susceptible to deterioration. Consequently, this study was carried out to examine the correlates of income generation of small-scale women fish processors in Abeokuta metropolis, Ogun State, Nigeria. Eighty small-scale women fish processors were randomly selected from five communities as the sample for this study. Collected data were analyzed using both descriptive and inferential statistics. The results showed that the mean age of the respondents was 31.75 years, with an average household size of 4 people, while 47.5% of the respondents had primary education. Most (86.3%) of the respondents were married and had spent more than 11 years in fish processing. The respondents were predominantly of the Yoruba tribe (91.2%). The majority (71.3%) of the respondents used a traditional kiln for processing their fish, while 23.7% of the respondents used hot vegetable oil to fry their fish. Also, the results revealed that respondents sourced capital for fish processing from personal savings (48.8%), cooperatives (27.5%), friends and family (17.5%) and microfinance banks (6.2%). The respondents generated average daily incomes of ₦7,000.00 from roasted fish, ₦3,500.00 from dried fish, and ₦5,200.00 from fried fish. However, inadequate processing equipment (95.0%), non-availability of credit facilities from microfinance banks (85.0%), poor electricity supply (77.5%), inadequate extension service support (70.0%), and fuel scarcity (68.7%) were major constraints to fish processing in the study area. Results of chi-square analysis showed that there was a significant relationship between personal characteristics (χ² = 36.83, df = 9), processing methods (χ² = 15.88, df = 3) and income generated at the p < 0.05 level of significance. It can be concluded that a significant relationship existed between processing methods and income generated. The study, therefore, recommends that modern processing equipment should be made available to the respondents at a subsidized price by agro-allied companies.
Keywords: correlates, income, fish processors, women, small-scale
Procedia PDF Downloads 245
10518 Spatio-Temporal Land Cover Changes Monitoring Using Remotely Sensed Techniques in Riyadh Region, KSA
Authors: Abdelrahman Elsehsah
Abstract:
Land Use and Land Cover (LULC) dynamics in Riyadh over a decade were comprehensively analyzed using the Google Earth Engine (GEE) platform. By harnessing the Landsat 8 image collection and the night-time light image collection from May to August for the years 2013 and 2023, we were able to generate insightful datasets capturing the changing landscape of the region. Our approach involved a Random Forest (RF) classification model that consistently displayed commendable precision scores above 92% for both years. A notable finding of the study was the pronounced urban expansion, particularly around Riyadh city. Within a mere ten-year span, urbanization surged noticeably, affecting the broader ecological environment of the region. Interestingly, the northeastern part of Riyadh emerged as a focal point of this growth, signaling rapid urban sprawl and development. A comparison between the two years indicates a 21.51% increase in built-up areas, revealing the transformative pace of urban sprawl. In contrast, vegetation cover patterns presented a more nuanced picture. While our initial hypothesis predicted a decline in vegetation, the actual findings depicted both vegetation reduction in certain pockets and new growth in others, resulting in an overall 25.89% increase. This intricate pattern might be attributed to shifting agricultural practices, afforestation efforts, or even satellite image timings not aligning with seasonal vegetation growth. The bare soil, predominant in the desert landscape of Riyadh, saw a marginal reduction of 0.37% over the decade, challenging our initial expectations. Urban and agricultural advancements in Saudi Arabia appear to have slightly reduced the expanse of barren terrain. This study, underpinned by a rigorous methodological framework, reveals the multifaceted land cover changes in Riyadh in response to urban development and environmental factors. The precise, data-driven insights provided by our analysis serve as invaluable tools for understanding urban growth trajectories and for guiding urban planning, policy formulation, and sustainable development endeavors in the region.
Keywords: remote sensing, KSA, ArcGIS, spatio-temporal
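The workflow described (a Random Forest classifier applied to a cloud-filtered Landsat 8 seasonal composite in GEE) follows a common Earth Engine pattern; an abridged sketch with the Earth Engine Python API is shown below. The region rectangle, the training-point asset path, the class property name, and the tree count are all placeholders, not the study's actual inputs.

```python
import ee
ee.Initialize()

riyadh = ee.Geometry.Rectangle([46.2, 24.2, 47.3, 25.2])     # placeholder extent
bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7']

# May-August 2023 median composite from the Landsat 8 surface-reflectance collection
composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
             .filterBounds(riyadh)
             .filterDate('2023-05-01', '2023-08-31')
             .filter(ee.Filter.lt('CLOUD_COVER', 10))
             .median()
             .select(bands))

# Hypothetical labelled points with a 'class' property (built-up / vegetation / bare soil)
training_points = ee.FeatureCollection('users/example/riyadh_training_2023')
samples = composite.sampleRegions(collection=training_points,
                                  properties=['class'], scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples, classProperty='class', inputProperties=bands)
classified = composite.classify(classifier)

# Per-class area, the basis of the change statistics reported above
areas = ee.Image.pixelArea().addBands(classified).reduceRegion(
    reducer=ee.Reducer.sum().group(groupField=1), geometry=riyadh,
    scale=30, maxPixels=1e9)
print(areas.getInfo())
```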
Procedia PDF Downloads 35
10517 Futuristic Black Box Design Considerations and Global Networking for Real Time Monitoring of Flight Performance Parameters
Authors: K. Parandhama Gowd
Abstract:
The aim of this research paper is to conceptualize, discuss, analyze and propose alternative design methodologies for a futuristic black box for flight safety. The proposal also includes global networking concepts for real-time surveillance and monitoring of flight performance parameters, including GPS parameters. It is expected that this proposal will serve as a failsafe diagnostic tool for accident investigation and for locating debris in real time. In this paper, an attempt is made to improve existing flight data recording techniques and the design considerations for a futuristic FDR, to overcome the problem of not being able to locate the black box. Since modern-day communications and information technologies with large bandwidth are available, coupled with faster computer processing techniques, the attempt made in this paper to develop a failsafe recording technique is feasible. Furthermore, data fusion/data warehousing technologies are available for exploitation.
Keywords: flight data recorder (FDR), black box, diagnostic tool, global networking, cockpit voice and data recorder (CVDR), air traffic control (ATC), air traffic, telemetry, tracking and control centers (ATTTCC)
Procedia PDF Downloads 572
10516 3D Remote Sensing Images Parallax Refining Based On HTML5
Authors: Qian Pei, Hengjian Tong, Weitao Chen, Hai Wang, Yanrong Feng
Abstract:
Horizontal parallax is the foundation of stereoscopic viewing. However, the human eye will feel uncomfortable, and diplopia will occur, if the horizontal parallax is larger than the eye separation. Therefore, we need to refine the parallax before conducting stereoscopic observation. Although some scholars have devoted themselves to online remote sensing image refining, the main work of image refining is completed on the server side. This causes a significant delay when multiple users access the server at the same time. The emergence of HTML5 technology in recent years makes it possible to develop rich browser web applications. The authors complete the image parallax refining on the browser side based on HTML5, while the server side only needs to transfer the image data and the parallax file to the browser according to the browser's request. In this way, we can greatly reduce the server CPU load, allow a large number of users to access the server in parallel, and respond to users' requests quickly.
Keywords: 3D remote sensing images, parallax, online refining, rich browser web application, HTML5
Procedia PDF Downloads 461
10515 Velocity Distribution in Open Channels with Sand: An Experimental Study
Authors: E. Keramaris
Abstract:
In this study, laboratory experiments on open channel flows over a sand bed were conducted. A porous bed (sand bed) with a porosity of ε = 0.70 and a porous layer thickness of s΄ = 3 cm was tested. Vertical distributions of velocity were evaluated using two-dimensional (2D) Particle Image Velocimetry (PIV). Velocity profiles were measured above the impermeable bed and above the sand bed for the same set of total water heights (h = 6, 8, 10 and 12 cm) and for the same slope S = 1.5. Measurements of mean velocity indicate the effects of the bed material used (sand bed) on the flow characteristics (velocity distribution and Reynolds number) in comparison with those above the impermeable bed.
Keywords: particle image velocimetry, sand bed, velocity distribution, Reynolds number
Procedia PDF Downloads 374