Search results for: tracking task
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2872

922 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We demonstrate the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
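As a concrete illustration of the evaluation the abstract describes, here is a minimal pure-Python sketch of the four reported metrics on hypothetical binary labels (1 = micro-influencer); the data are illustrative, not the paper's:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = micro-influencer)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical gold labels vs. classifier predictions
acc, prec, rec, f1 = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In practice these metrics would be computed over the rule-based gold standard dataset per configuration.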

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 160
921 Web Proxy Detection via Bipartite Graphs and One-Mode Projections

Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo

Abstract:

With the Internet becoming the dominant channel for business and life, many IPs are increasingly masked by web proxies for illegal purposes such as propagating malware, impersonating phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become increasingly challenging due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavior similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of the bipartite graphs for discovering the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users derived from the one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. The web proxy URL may vary from time to time, but the underlying interest does not. Based on this intuition, using our private tools implemented with WebDriver, we examine whether the top URLs visited by the web proxy users are web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially unknown ones.
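The one-mode projection step can be sketched in a few lines: given a hypothetical host-to-URL bipartite graph, project onto hosts so that edge weights count shared URLs, a simple behaviour-similarity measure on which spectral clustering could then operate:

```python
from collections import defaultdict
from itertools import combinations

# Bipartite edges: host -> set of URLs it contacted (hypothetical traffic sample)
edges = {
    "hostA": {"proxy1.example", "news.example"},
    "hostB": {"proxy1.example", "news.example"},
    "hostC": {"mail.example"},
}

def one_mode_projection(bipartite):
    """Project the host/URL bipartite graph onto hosts: the edge weight is
    the number of URLs two hosts share."""
    projection = defaultdict(int)
    for u, v in combinations(sorted(bipartite), 2):
        shared = len(bipartite[u] & bipartite[v])
        if shared:
            projection[(u, v)] = shared
    return dict(projection)

similarity = one_mode_projection(edges)
```

Here hostA and hostB share two URLs and end up connected in the projection, while hostC stays isolated.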

Keywords: bipartite graph, one-mode projection, clustering, web proxy detection

Procedia PDF Downloads 232
920 Beliefs about the God of the Other in Intergroup Conflict: Experimental Results from Israel and Palestine

Authors: Crystal Shackleford, Michael Pasek, Allon Vishkin, Jeremy Ginges

Abstract:

In the Middle East, conflict is often viewed as religiously motivated. In this context, an important question is how we think the religion of the other drives their behavior. If people see conflicts as religious, they may expect the beliefs of the other to motivate intergroup bias. Beliefs about the motivations of the other impact how we engage with them; conflict may result if actors believe the other's religion promotes parochialism. To examine how actors on the ground in Israel-Palestine think about the God of the other as it relates to the other's behavior towards them, we ran two studies in winter 2019: an online sample of Jewish Israelis and fieldwork with Palestinians in the West Bank. We asked participants to predict the behavior of an outgroup member in an economic game task in which money is divided between oneself and another person, who is either an ingroup or outgroup member. Our experimental manipulation asked participants to predict the behavior of the other when the other is thinking of their God. Both Israelis and Palestinians believed outgroup members would show in-group favoritism and would give more to their own ingroup when thinking of their God. However, participants also thought outgroup members would give more to the participants' group when thinking of God: Palestinians predicted that Israelis would give more to fellow Israelis when thinking of God, but also more to Palestinians. Our results suggest that religious belief is seen to promote universal moral reasoning, even in a context with over 70 years of intense conflict. More broadly, this challenges the narrative that religion necessarily motivates intractable conflict.

Keywords: conflict, psychology, religion, meta-cognition, morality

Procedia PDF Downloads 123
919 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition

Authors: Qin Long, Li Xiaoge

Abstract:

Knowledge graphs have become a new form of knowledge representation. However, there is no consensus on a plausible definition of entities and relationships in domain-specific knowledge graphs, and, owing to several limitations and deficiencies, existing approaches to recognizing domain-specific entities and relationships are far from perfect. In particular, named entity recognition in Chinese domains is a critical task for natural language processing applications, and a bottleneck for Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a domain-specific distant supervised named entity recognition framework is proposed. The framework is divided into two stages: first, a distant supervised corpus is generated by an entity linking model based on a graph attention neural network; second, the generated corpus is used as input to train the distant supervised named entity recognition model, which then extracts named entities. The linking model is verified on the CCKS2019 entity linking corpus, where its F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is better suited for distant supervised named entity recognition tasks. Finally, the framework is applied to the computer science domain, and the results show that it can extract domain named entities.
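Distant supervision of this kind is often implemented as dictionary matching; a minimal sketch, with a hypothetical entity gazetteer standing in for the linked knowledge graph, that produces BIO labels for a silver training corpus:

```python
# Hypothetical gazetteer derived from a knowledge graph (entity -> type)
gazetteer = {"graph attention network": "MODEL", "BERT": "MODEL"}

def distant_label(tokens, gazetteer):
    """Greedy longest-match BIO labelling of a token list from a gazetteer."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        for j in range(len(tokens), i, -1):  # try the longest span first
            span = " ".join(tokens[i:j])
            if span in gazetteer:
                tag = gazetteer[span]
                labels[i] = "B-" + tag
                for k in range(i + 1, j):
                    labels[k] = "I-" + tag
                i = j
                matched = True
                break
        if not matched:
            i += 1
    return labels

tokens = "we fine tune BERT with a graph attention network".split()
labels = distant_label(tokens, gazetteer)
```

The resulting silver labels would then feed the NER model's training stage.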

Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network

Procedia PDF Downloads 80
918 Approach for Updating a Digital Factory Model by Photogrammetry

Authors: R. Hellmuth, F. Wehner

Abstract:

Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important for maintaining the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and becomes an indispensable tool: short-term rescheduling can no longer be handled by on-site inspections and manual measurements, and tight time schedules require up-to-date planning models. Due to the high adaptation rate of factories described above, a methodology for rescheduling factories on the basis of a modern digital factory twin is conceived and designed for practical application in factory restructuring projects, with a focus on rebuild processes. The aim is to keep the planning basis (the digital factory model) for conversions within a factory up to date during ongoing factory operation, which requires a methodology that reduces the deficits of existing approaches. A method based on photogrammetry is presented, focused on a simple and cost-effective solution for tracking the many changes that occur in a factory building during operation. The method is preceded by a hardware and software comparison to identify the most economical and fastest variant.

Keywords: digital factory model, photogrammetry, factory planning, restructuring

Procedia PDF Downloads 102
917 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation

Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam

Abstract:

Fashion is a human expression that is constantly changing. One of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant; it subconsciously explains a lot about one's mindset and mood. Analyzing colours by extracting them from outfit images is a critical study for examining individual and consumer behaviour. Several research works have been carried out on extracting colours from images, but to the best of our knowledge, there are no studies that extract colours from specific apparel items and identify colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images; second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India's general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion's evolving colour preferences.
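The colour-extraction stage can be illustrated with a tiny k-means over the pixels of a segmented T-shirt region; the pixel data and the deterministic initialisation below are illustrative simplifications:

```python
def kmeans_dominant(pixels, k=2, iters=10):
    """Tiny k-means over RGB tuples; returns the centroid of the largest
    cluster as the dominant colour. Deterministic init: first k distinct pixels."""
    centroids = []
    for p in pixels:
        if p not in centroids:
            centroids.append(p)
        if len(centroids) == k:
            break
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        centroids = [
            tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    sizes = [len(cl) for cl in clusters]
    return centroids[sizes.index(max(sizes))]

# T-shirt segment pixels (hypothetical): mostly near-black with a few white ones
pixels = [(10, 10, 10)] * 8 + [(250, 250, 250)] * 2
dominant = kmeans_dominant(pixels)
```

On real data the input would be the RGB pixels inside the U-Net mask rather than the whole image.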

Keywords: colour analysis in t-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model

Procedia PDF Downloads 94
916 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method

Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat

Abstract:

Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color and texture. However, there is a gap between these low-level visual features and the semantic meanings required by applications. The typical method of bridging the semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. First, images are segmented using maximum intra-cluster variance and the Firefly algorithm, a swarm-based approach with high convergence speed and low computation cost that searches for optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain representative features. The images are then annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning, with high precision and low complexity. Experiments are performed on the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
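The Firefly algorithm mentioned above moves dimmer candidate solutions toward brighter ones; a minimal 1D sketch on a toy cost function (not the paper's thresholding objective) looks like this:

```python
import math
import random

def firefly_minimize(cost, lo, hi, n=15, iters=60,
                     beta0=1.0, gamma=0.1, alpha=0.2, seed=1):
    """Minimal 1D Firefly optimizer: dimmer fireflies move toward brighter
    (lower-cost) ones with attractiveness decaying over squared distance."""
    random.seed(seed)
    xs = [random.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        # brightness is recomputed once per generation (a common simplification)
        brightness = [-cost(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if brightness[j] > brightness[i]:
                    beta = beta0 * math.exp(-gamma * (xs[i] - xs[j]) ** 2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (random.random() - 0.5)
        alpha *= 0.97  # shrink the random-walk step each generation
        xs = [min(max(x, lo), hi) for x in xs]  # clamp to the search interval
    return min(xs, key=cost)

best = firefly_minimize(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
```

For multilevel thresholding the candidates would be threshold vectors and the cost a segmentation criterion rather than this toy parabola.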

Keywords: feature extraction, feature selection, image annotation, classification

Procedia PDF Downloads 572
915 Application of GIS Techniques for Analysing Urban Built-Up Growth of Class-I Indian Cities: A Case Study of Surat

Authors: Purba Biswas, Priyanka Dey

Abstract:

Worldwide, rapid urbanisation has accelerated city expansion in both developed and developing nations. This unprecedented urbanisation trend, driven by increasing population and economic growth, has created challenges for decision-makers in city planning and urban management. Metropolitan cities, class-I towns, and major urban centres undergo a continuous process of evolution due to interaction between socio-cultural and economic attributes, and this constant evolution leads to urban expansion in all directions. Understanding the patterns and dynamics of urban built-up growth is crucial for policymakers, urban planners, and researchers, as it aids resource management, decision-making, and the development of sustainable strategies to address the complexities associated with rapid urbanisation. Identifying spatio-temporal patterns of urban growth has emerged as a crucial challenge in monitoring and assessing present and future trends in urban development, and analysing growth patterns and tracking land-use change is an important aspect of urban studies. This study analyses spatio-temporal urban transformations and land-use and land-cover changes using remote sensing and GIS techniques. Built-up growth analysis has been done for the city of Surat as a case example, using the Normalized Difference Built-up Index (NDBI) together with the Built-up Urban Density Index and the Shannon Entropy Index to identify trends and the geographical direction of transformation from 2005 to 2020. Surat is one of the fastest-growing urban centres in both the state and the nation, ranking as the 4th fastest-growing city globally. This study analyses the dynamics of urban built-up area transformations both zone-wise and direction-wise, calculating their trend, rate, and magnitude over the 15-year period.
This study also highlights the need for analysing and monitoring the urban growth patterns of class-I cities in India using spatio-temporal and quantitative techniques such as GIS for improved urban management.
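The two indices named in the abstract are straightforward to compute; a small sketch with made-up band values and zone shares (NDBI per pixel, relative Shannon entropy across zones):

```python
import math

def ndbi(swir, nir):
    """Normalized Difference Built-up Index per pixel: (SWIR - NIR) / (SWIR + NIR)."""
    return (swir - nir) / (swir + nir)

def shannon_entropy(builtup_shares):
    """Relative Shannon entropy of built-up area across n zones:
    1.0 = perfectly dispersed (sprawling) growth, 0 = fully concentrated."""
    n = len(builtup_shares)
    total = sum(builtup_shares)
    probs = [s / total for s in builtup_shares if s > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(n)

# Hypothetical built-up area (km²) in four zones of a city
e_uniform = shannon_entropy([25, 25, 25, 25])
```

Built-up pixels would come from thresholding the NDBI raster per year before the entropy is computed zone-wise.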

Keywords: urban expansion, built-up, geographic information system, remote sensing, Shannon’s entropy

Procedia PDF Downloads 45
914 Time to Second Line Treatment Initiation Among Drug-Resistant Tuberculosis Patients in Nepal

Authors: Shraddha Acharya, Sharad Kumar Sharma, Ratna Bhattarai, Bhagwan Maharjan, Deepak Dahal, Serpahine Kaminsa

Abstract:

Background: Drug-resistant (DR) tuberculosis (TB) continues to be a threat in Nepal, with an estimated 2,800 new cases every year. The treatment of DR-TB with second-line TB drugs is complex, takes longer, and has a comparatively lower treatment success rate than that of drug-susceptible TB. Delays in treatment initiation for DR-TB patients can further result in unfavorable treatment outcomes and increased transmission. This study therefore aims to determine the median time taken to initiate second-line treatment among patients diagnosed with rifampicin-resistant (RR) TB and to assess the proportion of treatment delays among the various types of DR-TB cases. Method: A retrospective cohort study was conducted using national routine electronic data (the DR-TB and TB Laboratory Patient Tracking System, DHIS2) on drug-resistant tuberculosis patients between January 2020 and December 2022. The time to treatment initiation was computed as the number of days from first diagnosis as RR-TB through the Xpert MTB/RIF test to enrollment on second-line treatment. Treatment delay was defined as initiation more than 7 days after diagnosis. Results: Among the total RR-TB cases (N=954) diagnosed via Xpert nationwide, 61.4% were enrolled under the shorter treatment regimen (STR), 33.0% under the longer treatment regimen (LTR), 5.1% under pre-extensively drug-resistant TB (pre-XDR) treatment, and 0.4% under extensively drug-resistant TB (XDR) treatment. The median time from diagnosis to treatment initiation was 6 days (IQR: 2-15.8) overall: 5 days (IQR: 2.0-13.3) for STR, 6 days (IQR: 3.0-15.0) for LTR, 30 days (IQR: 5.5-66.8) for pre-XDR, and 4 days (IQR: 2.5-9.0) for XDR TB cases. Treatment delay (>7 days after diagnosis) was observed in 42.4% of patients overall; pre-XDR cases contributed substantially (72.0%), followed by LTR (43.6%), STR (39.1%), and XDR (33.3%).
Conclusion: Timely diagnosis and prompt treatment initiation remain a fundamental focus of the National TB Program. The findings of the study, however, suggest gaps in the timeliness of treatment initiation for drug-resistant TB patients, which could lead to adverse treatment outcomes; the delay in second-line treatment initiation for pre-XDR TB patients is particularly alarming. The study thus provides evidence of existing gaps in treatment initiation and highlights the need for specific policies and interventions: effective linkage between RR-TB diagnosis and enrollment on second-line treatment, intensified follow-up by health providers, and expansion of decentralized, adequate, and accessible diagnostic and treatment services for DR-TB, especially for pre-XDR TB cases, given the long treatment delays observed.
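The median/IQR figures reported above come from simple order statistics; a sketch on hypothetical delay data (the median-of-halves quartile convention used here is one of several):

```python
def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def quartiles(xs):
    """Q1, median, Q3 via the median-of-halves rule."""
    s = sorted(xs)
    n = len(s)
    lower = s[: n // 2]
    upper = s[(n + 1) // 2 :]
    return median(lower), median(s), median(upper)

# Hypothetical days from RR-TB diagnosis to second-line treatment start
delays = [2, 3, 4, 5, 6, 8, 15, 30, 66]
q1, med, q3 = quartiles(delays)
late = sum(1 for d in delays if d > 7) / len(delays)  # share delayed > 7 days
```

On the real DHIS2 extract the same computation would run per regimen group (STR, LTR, pre-XDR, XDR).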

Keywords: drug-resistant, tuberculosis, treatment initiation, Nepal, treatment delay

Procedia PDF Downloads 68
913 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality

Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye

Abstract:

When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings; it is obtained through matrix perturbation theory by exploiting the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, unlike in natural language processing, there is no notion of a “word”; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. While the dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss, in this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between k-mer size and embedding dimension. This is done by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Using the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in accurately computing embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
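For readers unfamiliar with k-mers, tokenising a sequence into them is simple; note that the vocabulary can grow up to 4^k for DNA, which is why k-mer size drives embedding dimensionality:

```python
def kmers(sequence, k):
    """All overlapping substrings of length k (the genomic analogue of words)."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

def vocabulary_size(sequence, k):
    """Number of distinct k-mers actually observed in the sequence."""
    return len(set(kmers(sequence, k)))

tokens = kmers("ATGCGA", 3)
```

The token lists produced this way are what LSA, Word2Vec, or GloVe would consume in place of word streams.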

Keywords: word embeddings, k-mer embedding, dimensionality reduction

Procedia PDF Downloads 126
912 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach

Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller

Abstract:

Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. The mineralogy and chemical composition of clinker in ordinary Portland cement production are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. But only a small part of the Celitement components can be measured via XRD, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for on-line monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for variable durations. The products were analyzed using XRD and thermogravimetric analyses together with NIR spectroscopy to investigate the dependency of the NIR signal on the drying and milling processes. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of online measurement and evaluation of product quality will be presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model will be introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows a quick and sufficiently exact determination of crucial process parameters.
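A calibration model in its simplest form is a regression from spectral intensity to a reference value; the one-variable least-squares sketch below is purely illustrative (real NIR calibrations are multivariate, e.g. PLS, and the moisture numbers are invented):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical calibration set: NIR absorbance vs. reference moisture (%)
absorbance = [0.10, 0.20, 0.30, 0.40]
moisture = [1.0, 2.0, 3.0, 4.0]
a, b = fit_line(absorbance, moisture)
predicted = a * 0.25 + b  # prediction for a new on-line absorbance reading
```

The reference values in a real calibration would come from XRD and thermogravimetric analyses, as the abstract describes.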

Keywords: calibration model, celitement, cementitious material, NIR spectroscopy

Procedia PDF Downloads 491
911 Feasibility Study of Particle Image Velocimetry in the Muzzle Flow Fields during the Intermediate Ballistic Phase

Authors: Moumen Abdelhafidh, Stribu Bogdan, Laboureur Delphine, Gallant Johan, Hendrick Patrick

Abstract:

This study is part of an ongoing effort to improve the understanding of phenomena occurring during the intermediate ballistic phase, such as muzzle flows. A thorough comprehension of muzzle flow fields is essential for optimizing muzzle device and projectile design. This flow characterization has heretofore been almost entirely limited to local and intrusive measurement techniques, such as pressure measurements using pencil probes. Consequently, the body of quantitative experimental data is limited, as is the number of numerical codes validated in this field. The objective of the work presented here is to demonstrate the applicability of the Particle Image Velocimetry (PIV) technique in the challenging environment of the propellant flow of a .300 Blackout weapon to provide accurate velocity measurements. The key points of a successful PIV measurement are the selection of the particle tracer, the seeding technique, and the tracking characteristics. We experimentally investigated these points by evaluating the resistance, gas dispersion, laser light reflection, and response to a step change across the Mach disk for five different solid tracers using two seeding methods. To this end, an experimental setup was built consisting of a PIV system, a combustion chamber pressure measurement, classical high-speed schlieren visualization, and an aerosol spectrometer; the latter is used to determine the particle size distribution in the muzzle flow. The experimental results demonstrated the ability of PIV to accurately resolve the salient features of the propellant flow, such as the underexpanded jet and vortex rings, as well as the instantaneous velocity field, with maximum centreline velocities of more than 1000 m/s. Furthermore, unburned particles naturally present in the gas, and solid ZrO₂ particles with a nominal size of 100 nm coated on the propellant powder, proved suitable as tracers.
However, the TiO₂ particles intended as tracers surprisingly not only melted but also acted as a combustion accelerator and decreased the number of particles in the propellant gas.
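At its core, PIV estimates displacement by cross-correlating two exposures of the tracer pattern; a 1D sketch of that principle with an illustrative signal and integer-pixel search only (real PIV uses 2D windows and sub-pixel peak fitting):

```python
def best_shift(window_a, window_b, max_shift):
    """1D analogue of a PIV interrogation window: find the pixel shift that
    maximises the cross-correlation between two exposures of the same pattern."""
    best, best_score = 0, float("-inf")
    n = len(window_a)
    for s in range(-max_shift, max_shift + 1):
        score = sum(
            window_a[i] * window_b[i + s]
            for i in range(n)
            if 0 <= i + s < n
        )
        if score > best_score:
            best, best_score = s, score
    return best

pattern = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0]
shifted = [0, 0, 0, 0, 0, 5, 9, 5, 0, 0]  # same tracer pattern moved 3 pixels
shift = best_shift(pattern, shifted, max_shift=4)
```

Velocity follows as shift × pixel size / inter-frame time, which is how centreline speeds above 1000 m/s are recovered.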

Keywords: intermediate ballistic, muzzle flow fields, particle image velocimetry, propellant gas, particle size distribution, under expanded jet, solid particle tracers

Procedia PDF Downloads 148
910 Modelling and Control of Milk Fermentation Process in Biochemical Reactor

Authors: Jožef Ritonja

Abstract:

The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is fermentation. A thorough insight into the fermentation process and the knowledge of how to control it are essential for the effective use of bioreactors to produce high-quality products in sufficient quantities. The development of a control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and is suitable for control design. The paper thoroughly analyses the fermentation process in bioreactors using existing mathematical models; most of them do not allow the design of a control system for the fermentation process in batch bioreactors. For this reason, a mathematical model was developed and presented that allows the development of such a control system. Based on this model, a control system was designed to ensure the optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and non-linear nature of the controlled plant, a conventional control system with a proportional-integral-derivative (PID) controller with constant parameters does not provide the desired transient response. An improved adaptive control system was therefore proposed to improve the dynamics of the fermentation; adaptive control is suitable here because the parameter variations of the fermentation process are very slow. The developed control system was tested in the production of dairy products in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable, since it correlates well with the other quantities significant for the quality of the fermentation process.
The level of the carbon dioxide concentration gives important information about the fermentation process. The obtained results showed that the designed control system provides a minimal error between the reference and actual values of the carbon dioxide concentration during the transient response and in steady state. The recommended control system makes reference signal tracking much more efficient than the currently used conventional control systems based on linear control theory. The proposed control system represents a very effective solution for the improvement of the milk fermentation process.
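The conventional fixed-parameter PID baseline that the abstract contrasts with can be sketched on a stand-in first-order plant (this is not the paper's fermentation model; gains and dynamics are illustrative):

```python
def simulate_pid(setpoint, steps=2000, dt=0.01, kp=2.0, ki=1.0, kd=0.05):
    """Discrete PID loop driving a first-order plant dy/dt = -y + u,
    a stand-in for the bioreactor's CO2 dynamics."""
    y, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u)  # Euler step of the plant
    return y

final = simulate_pid(1.0)  # track a unit CO2-concentration reference
```

An adaptive scheme would additionally re-tune kp, ki, kd online as the slowly varying plant parameters drift, which is the paper's proposed improvement.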

Keywords: biochemical reactor, fermentation process, modelling, adaptive control

Procedia PDF Downloads 114
909 Frontal Oscillatory Activity and Phase–Amplitude Coupling during Chan Meditation

Authors: Arthur C. Tsai, Chii-Shyang Kuo, Vincent S. C. Chien, Michelle Liou, Philip E. Cheng

Abstract:

Meditation enhances mental abilities and is an antidote to anxiety. However, very little is known about the brain mechanisms and cortico-subcortical interactions underlying meditation-induced anxiety relief. In this study, changes in phase-amplitude coupling (PAC), in which the amplitude of the beta frequency band is modulated in phase with the delta rhythm, were investigated after eight weeks of meditation training. The study hypothesized that through concentrated but relaxed mental training, delta-beta coupling in the frontal regions is attenuated. The delta-beta coupling analysis was applied within and between maximally independent component sources returned by the extended infomax independent component analysis (ICA) algorithm on the continuous EEG data recorded during meditation. A unique meditative concentration task, relaxing body and mind at a constant level of moderate mental effort, was used so as to approach an ‘emptiness’ meditative state. A pre-test/post-test control group design was used. To evaluate the cross-frequency phase-amplitude coupling of component sources, the modulation index (MI) was estimated, with circular phase statistics used for significance testing. Our findings reveal significant delta-beta decoupling in a set of frontal regions bilaterally. In addition, the beta frequency band of a prefrontal component was amplitude-modulated in phase with the delta rhythm of a medial frontal component.
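A common way to quantify PAC is the Tort-style modulation index; a pure-Python sketch on synthetic delta phases and beta amplitudes (illustrative data, not the study's EEG):

```python
import math

def modulation_index(phases, amplitudes, n_bins=18):
    """Tort-style MI: KL divergence of the phase-binned mean-amplitude
    distribution from uniform, normalised by log(n_bins)."""
    bins = [[] for _ in range(n_bins)]
    for ph, amp in zip(phases, amplitudes):
        idx = int((ph % (2 * math.pi)) / (2 * math.pi) * n_bins) % n_bins
        bins[idx].append(amp)
    means = [sum(b) / len(b) if b else 0.0 for b in bins]
    total = sum(means)
    p = [m / total for m in means]
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (math.log(n_bins) - h) / math.log(n_bins)

# Delta phase ramp; beta amplitude either locked to the phase or flat
phases = [2 * math.pi * i / 1000 for i in range(1000)]
coupled = [1.0 + math.cos(ph) for ph in phases]  # amplitude follows phase
flat = [1.0] * 1000                              # no coupling
mi_coupled = modulation_index(phases, coupled)
mi_flat = modulation_index(phases, flat)
```

On real EEG, the phases and amplitudes would come from band-pass filtering and a Hilbert transform of the ICA component time courses.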

Keywords: phase-amplitude coupling, ICA, meditation, EEG

Procedia PDF Downloads 411
908 Self-Efficacy and Attitude of the Graduating Pre-Service Teachers as Influenced in Their Student Teaching Performance

Authors: Sonia Arradaza-Pajaron, Maria Aida Manila

Abstract:

Teaching is considered the noblest profession, yet also one of the most complicated and challenging. In this view, every teacher-producing institution should aim to produce quality pre-service graduates who are sufficiently efficacious, with the right attitude to deal with the tasks accorded to them. This study investigated the association between the self-efficacy and attitude of graduating pre-service teachers and their actual student teaching performance. Survey questionnaires on self-efficacy and attitude toward practice teaching were fielded to the 90 respondents, while their practice teaching grades were extracted to serve as the other main variable. Data were analyzed statistically using the weighted mean and Pearson r to determine the relationships among the study variables. Findings revealed that the attitude of the respondents of the three curricular programs was favorable and that they were self-efficacious, while their practice teaching performance was rated very good. Results further showed a significant positive relationship between self-efficacy and practice teaching performance, a manifestation of a self-efficacious group. Although the respondents exuded a positive attitude towards practice teaching, no significant relationship was found between their attitude and performance. Moreover, the data showed that most of them could pay attention while conducting lessons in class, as well as listen attentively to their cooperating teachers during post-conferences. They could perform student teaching tasks better even when there were other interesting things to do. Most of all, they could regulate or suppress unpleasant thoughts or feelings and take things lightly even in the most challenging situations. From these results, it can be concluded that there is an association between self-efficacy and practice teaching performance.
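The Pearson r statistic used in the analysis is computed as follows; the score lists below are hypothetical, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: self-efficacy ratings vs. practice teaching grades
efficacy = [3.1, 3.5, 3.8, 4.0, 4.4]
grades = [85, 87, 90, 91, 95]
r = pearson_r(efficacy, grades)
```

A value of r near 1 indicates the positive self-efficacy/performance association the study reports; significance would be tested against the sample size.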

Keywords: academic achievement, attitude, self-efficacy, student teaching performance

Procedia PDF Downloads 304
907 A Questionnaire-Based Survey: Therapists Response towards Upper Limb Disorder Learning Tool

Authors: Noor Ayuni Che Zakaria, Takashi Komeda, Cheng Yee Low, Kaoru Inoue, Fazah Akhtar Hanapiah

Abstract:

Previous studies have shown arguments regarding the reliability and validity of the Ashworth and Modified Ashworth Scales in evaluating patients diagnosed with upper limb disorders, as these evaluations depend on the raters' experience. This initiated us to develop an upper limb disorder part-task trainer able to simulate consistent upper limb disorder signs, such as spasticity and rigidity, based on the Modified Ashworth Scale, to reduce the variability between raters and within raters themselves. By providing consistent signs, novice therapists would be able to increase their training frequency and exposure to various levels of signs. A total of 22 physiotherapists and occupational therapists participated in the study. The majority of the therapists agreed that, with current therapy education, they still face problems with inter-rater and intra-rater variability (strongly agree 54%; n = 12/22; agree 27%; n = 6/22) in evaluating patients' conditions. The therapists strongly agreed (72%; n = 16/22) that therapy trainees need to increase their frequency of training and believe that our initiative to develop an upper limb disorder training tool will help improve clinical education (strongly agree and agree 63%; n = 14/22).

Keywords: upper limb disorder, clinical education tool, inter/intra-raters variability, spasticity, modified Ashworth scale

Procedia PDF Downloads 302
906 Analysis of the Engineering Judgement Influence on the Selection of Geotechnical Parameters Characteristic Values

Authors: K. Ivandic, F. Dodigovic, D. Stuhec, S. Strelec

Abstract:

A characteristic value of a geotechnical parameter results from an engineering assessment. Its selection has to be based on technical principles and standards of engineering practice. It has been shown that the results of the engineering assessments of different authors for the same problem and input data are significantly dispersed. A survey was conducted in which participants had to estimate the force that causes a 10 cm displacement at the top of an axially compressed pile in situ. Fifty experts from all over the world took part in it. The lowest estimated force value was 42% and the highest was 133% of the force measured in the corresponding static pile load test. These extreme values result in significantly different technical solutions to the same engineering task. When selecting a characteristic value of a geotechnical parameter, the influence of the engineering assessment can be reduced by using statistical methods. An informative annex of Eurocode 1 prescribes the method of selecting the characteristic values of material properties, followed by Eurocode 7 with certain specifics related to selecting characteristic values of geotechnical parameters. The paper shows the procedure of selecting characteristic values of a geotechnical parameter by using a statistical method under different initial conditions. The aim of the paper is to quantify the engineering assessment in the example of determining a characteristic value of a specific geotechnical parameter. It is assumed that this assessment is a random variable and that its statistical features can be determined. For this purpose, a survey was conducted among relevant experts from the field of geotechnical engineering. Finally, the results of the survey and the application of the statistical method were compared.
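As an illustration of the kind of statistical method the abstract refers to (our own sketch, not the paper's exact procedure), a characteristic value is often estimated as a 5% fractile of the parameter from n test results. A minimal version, assuming normally distributed data and a known coefficient of variation (hence the normal rather than the Student-t factor), and with the friction-angle sample invented for illustration:

```python
import math
from statistics import NormalDist, mean, stdev

def characteristic_value(samples, fractile=0.05):
    """5% fractile estimate X_k = m - k_n * s.

    k_n = z_{1-fractile} * sqrt(1/n + 1) accounts for both the scatter of
    the parameter and the statistical uncertainty of the sample mean
    (simplified normal approximation; actual codes tabulate k_n).
    """
    n = len(samples)
    m = mean(samples)
    s = stdev(samples)
    k_n = NormalDist().inv_cdf(1.0 - fractile) * math.sqrt(1.0 / n + 1.0)
    return m - k_n * s

# hypothetical friction angle measurements (degrees) from a site investigation
phi = [31.2, 29.8, 30.5, 32.1, 28.9, 30.0]
phi_k = characteristic_value(phi)  # lies below the sample mean of ~30.4
```

The characteristic value is deliberately cautious: the smaller the sample, the larger k_n, and the further X_k sits below the mean.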

Keywords: characteristic values, engineering judgement, Eurocode 7, statistical methods

Procedia PDF Downloads 281
905 Study of Mixing Conditions for Different Endothelial Dysfunction in Arteriosclerosis

Authors: Sara Segura, Diego Nuñez, Miryam Villamil

Abstract:

In this work, we studied the microscale interaction of foreign substances with blood inside an artificial transparent artery system that represents medium and small muscular arteries. This artery system had channels ranging from 75 μm to 930 μm and was fabricated from glass and transparent polymer blends such as phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, poly(ethylene glycol), and PDMS so that it could be monitored in real time. The setup comprised a computer-controlled precision micropump and a high-resolution optical microscope capable of tracking fluids at high capture rates. Observation and analysis were performed using real-time software that reconstructs the fluid dynamics, determining flow velocity, injection dependency, turbulence, and rheology. All experiments were carried out with fully computer-controlled equipment. Interactions between substances like water, serum (0.9% sodium chloride and electrolyte at a ratio of 4 ppm), and blood cells were studied at resolutions as fine as 400 nm, and the analysis was performed using frame-by-frame observation and HD video capture. These observations help us understand the flow and mixing behavior of the substance of interest in the bloodstream and shed light on the use of implantable devices for drug delivery in arteries with different endothelial dysfunctions. Several substances were tested using the artificial artery system. Initially, Milli-Q water was used as a control substance to study the basic fluid dynamics of the artificial artery system. Serum and other low-viscosity substances were then pumped into the system in the presence of other liquids to study mixing profiles and behaviors. Finally, mammal blood was used for the final test while serum was injected. Different flow conditions, pumping rates, and time rates were evaluated to determine the optimal mixing conditions. Our results suggest the use of finely controlled microinjection, at an approximate rate of 135,000 μm³/s, for better mixing profiles in the administration of drugs inside arteries.
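For context (an illustrative back-of-the-envelope calculation, not taken from the study), flow in channels of this size is deep in the laminar regime. A quick Reynolds-number estimate for the stated 75-930 μm channels, assuming water-like fluid properties and a plausible flow speed of 1 mm/s (both assumptions ours):

```python
def reynolds_number(velocity_m_s, diameter_m, density=1000.0, viscosity=1.0e-3):
    """Re = rho * v * D / mu, with water-like density (kg/m^3) and
    dynamic viscosity (Pa*s) as defaults."""
    return density * velocity_m_s * diameter_m / viscosity

# hypothetical 1 mm/s flow in the smallest (75 um) and largest (930 um) channels
re_small = reynolds_number(1e-3, 75e-6)
re_large = reynolds_number(1e-3, 930e-6)
```

Both values are far below the laminar-turbulent transition (Re ~ 2300 for pipe flow), so mixing in such channels is diffusion-dominated, which is why the injection rate matters so much.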

Keywords: artificial artery, drug delivery, microfluidics dynamics, arteriosclerosis

Procedia PDF Downloads 271
904 Refined Edge Detection Network

Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni

Abstract:

Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, against varied backgrounds. It is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods like Sobel and Canny. However, images of complex scenes still represent a challenge for these methods, and the edges detected by existing approaches suffer from unrefined results, with output images containing many erroneous edges. To overcome this, in this paper, using the mechanism of residual learning, a refined edge detection network (RED-Net) is proposed. By maintaining the high resolution of edges during the training process and conserving the resolution of the edge image through the network stages, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated on the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images.
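The residual-learning mechanism the abstract invokes can be sketched generically (a toy illustration of the principle, not the RED-Net architecture): each refinement stage predicts a correction that is added back to its input, so the original edge signal and its resolution are preserved through the stage:

```python
import numpy as np

def refine_step(edge_map, weights, bias):
    """One residual refinement stage: output = input + learned correction.

    edge_map: (H, W) array; weights (3x3) and bias parameterize a toy
    convolution standing in for the network's learned layers.
    """
    h, w = edge_map.shape
    padded = np.pad(edge_map, 1)
    correction = np.zeros_like(edge_map)
    for i in range(h):
        for j in range(w):
            correction[i, j] = np.sum(padded[i:i+3, j:j+3] * weights) + bias
    return edge_map + correction  # residual connection keeps the original signal

rng = np.random.default_rng(0)
edges = rng.random((8, 8))
# with an all-zero correction the stage is exactly the identity
out = refine_step(edges, weights=np.zeros((3, 3)), bias=0.0)
```

The design choice matters for edges in particular: if a stage learns nothing useful, the residual connection degrades to the identity instead of blurring or erasing fine boundaries.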

Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone

Procedia PDF Downloads 88
903 The Impact of Maternity Leave Reforms: Evidence from Finland

Authors: Claudia Troccoli

Abstract:

Childbearing constitutes one of the key factors affecting labour market differences between men and women, accounting for almost a quarter of the gender wage gap. Family leave policies, such as maternity, paternity, and parental leave, represent key potential policy tools to address these inequalities, as they can promote mothers' job continuity and career progression. This paper analyses four major reforms implemented in Finland between the 1960s and the early 1980s, studying the effects of these maternity and parental leave extensions on mothers' short- and long-run labour market outcomes. Eligibility for longer leave was determined on the basis of the child's date of birth. Therefore, estimation of the causal effects of the reforms is possible by exploiting random variation in children's birthdates and comparing the outcomes of mothers giving birth just before and just after the reform cutoff date. Overall, the three maternity leave reforms did not significantly improve mothers' earnings or employment rates. On the contrary, the estimates, although imprecise, seem to indicate negative effects on women's labour market outcomes. The extension of parental leave is, on the other hand, the only reform that improved mothers' short- and long-term labour market outcomes, both in terms of earnings and employment rate. At the same time, fathers appeared to be negatively affected by the reform. These results provide suggestive evidence that shareable parental leave might have more beneficial effects on mothers' job continuity, as it weakens the connotation of childcare as a task reserved for mothers.
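The identification strategy described, comparing mothers who give birth just before and just after a reform cutoff, amounts in its simplest form to a local difference in means around the cutoff. A minimal sketch with entirely invented records and a hypothetical cutoff date (the paper's actual estimation is more elaborate):

```python
from statistics import mean
from datetime import date

# hypothetical records: (child_birthdate, mother_earnings_some_years_later)
records = [
    (date(1981, 12, 20), 21000), (date(1981, 12, 28), 20500),
    (date(1982, 1, 3), 22100), (date(1982, 1, 9), 21800),
]
cutoff = date(1982, 1, 1)  # reform applies to births on or after this date

treated = [y for d, y in records if d >= cutoff]
control = [y for d, y in records if d < cutoff]
effect = mean(treated) - mean(control)  # naive local difference in means
```

Because the exact day of birth is as good as random near the cutoff, mothers on either side are comparable, and the difference in means can be read as a causal effect of leave eligibility.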

Keywords: family policies, Finland, maternal labour market outcomes, maternity leave

Procedia PDF Downloads 125
902 Anomaly Detection of Log Analysis using Data Visualization Techniques for Digital Forensics Audit and Investigation

Authors: Mohamed Fadzlee Sulaiman, Zainurrasyid Abdullah, Mohd Zabri Adil Talib, Aswami Fadillah Mohd Ariffin

Abstract:

In common digital forensics cases, an investigation may rely on the analysis of specific, relevant exhibits. Usually the investigating officer defines and advises the digital forensic analyst about the goals and objectives to be achieved in reconstructing the trail of evidence while maintaining the specific scope of the investigation. With technological growth, people are starting to realize the importance of cyber security to their organizations, and this new perspective creates awareness that digital forensics auditing must be put in place in order to measure possible threats or attacks to their cyber-infrastructure. Instead of performing investigations on an incident basis, auditing may broaden the scope of investigation to the level of anomaly detection in the daily operation of an organization's cyber space. When handling huge amounts of data such as log files, performing a digital forensics audit for a large organization has proven to be an onerous task for the analyst, both in analyzing the huge files and in translating the findings in a way that stakeholders can clearly understand. Data visualization can address both needs in digital forensic audits and investigations. This study identifies the important factors that should be considered when applying data visualization techniques in order to detect anomalies that meet the digital forensic audit and investigation objectives.
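As a concrete illustration of one such technique (our own sketch; the study identifies factors rather than prescribing an algorithm), hourly log-event counts can be screened for anomalies with a simple z-score test before any plotting, so the visualization highlights only the unusual hours:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Return indices of log-event counts more than `threshold` standard
    deviations from the mean (a simple screen ahead of visualization)."""
    m, s = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - m) > threshold * s]

# hypothetical login-failure counts per hour, with a spike at hour 5
hourly_failures = [12, 9, 11, 10, 13, 240, 12, 8, 11, 10, 9, 12]
anomalous_hours = flag_anomalies(hourly_failures)
```

In an audit setting the flagged indices would drive the visual encoding (colour, annotation) rather than replace the analyst's judgement.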

Keywords: digital forensic, data visualization, anomaly detection, log analysis, forensic audit, visualization techniques

Procedia PDF Downloads 269
901 Knowledge Based Behaviour Modelling and Execution in Service Robotics

Authors: Suraj Nair, Aravindkumar Vijayalingam, Alexander Perzylo, Alois Knoll

Abstract:

In the last decade, robotics research and development activities have grown rapidly, especially in the domain of service robotics. Integrating service robots into human-occupied spaces such as homes, offices, and hospitals has received increasing attention. The primary motive is to ease the daily lives of humans by taking over some of the household or office chores. However, several challenges remain in systematically integrating such systems into human-shared workspaces. In addition to sensing and indoor-navigation challenges, programmability of such systems is a major hurdle, since the potential user cannot be expected to have knowledge of robotics or similar mechatronic systems. In this paper, we propose a cognitive system for service robotics that allows non-expert users to easily model system behaviour in an underspecified manner through abstract tasks and the objects associated with them. The system uses domain knowledge expressed in the form of an ontology, along with logical reasoning mechanisms, to infer all the missing pieces of information required for executing the tasks. Furthermore, the system is also capable of recovering from tasks that fail due to online disturbances, by using the knowledge base to infer alternate methods of executing the same tasks. The system is demonstrated through a coffee-fetching scenario in an office environment, using a mobile robot equipped with sensors and software capabilities for autonomous navigation and human interaction through natural language.
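The idea of completing an underspecified task from domain knowledge can be sketched abstractly (a toy stand-in for the ontology and reasoner described; all names and defaults here are hypothetical):

```python
# toy knowledge base: object properties an underspecified task can fall
# back on (hypothetical stand-in for an OWL ontology with a reasoner)
KNOWLEDGE_BASE = {
    "coffee": {"container": "mug", "source": "kitchen"},
    "mug": {"grasp": "handle", "location": "cupboard"},
}

def resolve_task(task):
    """Fill in the missing pieces of an underspecified task by lookup."""
    obj = task["object"]
    resolved = dict(task)
    # default the destination to the requester's location if one is known
    resolved.setdefault("destination", task.get("requester_location", "office"))
    resolved["container"] = KNOWLEDGE_BASE[obj]["container"]
    resolved["pickup_from"] = KNOWLEDGE_BASE[obj]["source"]
    resolved["grasp_point"] = KNOWLEDGE_BASE[resolved["container"]]["grasp"]
    return resolved

# the user only says "fetch coffee"; everything else is inferred
plan = resolve_task({"action": "fetch", "object": "coffee",
                     "requester_location": "desk_3"})
```

A real ontology-based system would additionally chain inferences and check consistency, but the shape of the interaction is the same: the user supplies an abstract task, the knowledge base supplies the rest.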

Keywords: cognitive robotics, reasoning, service robotics, task based systems

Procedia PDF Downloads 225
900 Automated User Story Driven Approach for Web-Based Functional Testing

Authors: Mahawish Masud, Muhammad Iqbal, M. U. Khan, Farooque Azam

Abstract:

Manual writing of test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but are also challenging to maintain. Test cases can be drawn from functional requirements that are expressed in natural language; however, manual test case generation is inefficient and subject to errors. In this paper, we present a systematic procedure that can automatically derive test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template. We also present a detailed methodology for writing our test-ready user stories. Our tool, "Test-o-Matic", automatically generates the test cases by processing the restricted user stories. The generated test cases are executed using the open source Selenium IDE. We evaluate our approach on a case study, an open source web-based application. The effectiveness of our approach is evaluated by seeding faults in the open source case study using known mutation operators. Results show that test case generation from restricted user stories is a viable approach for the automated testing of web applications.
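The restricted-template idea can be illustrated with a small parser (our sketch, not the Test-o-Matic implementation; the "As a ..., I want to ..., so that ..." form is a common user-story template assumed here for illustration):

```python
import re

# restricted user-story template (hypothetical simplification)
STORY_TEMPLATE = re.compile(
    r"As an? (?P<role>.+?), I want to (?P<action>.+?) so that (?P<benefit>.+)\."
)

def story_to_test_steps(story):
    """Derive skeleton test steps from a restricted user story."""
    match = STORY_TEMPLATE.match(story)
    if match is None:
        raise ValueError("story does not follow the restricted template")
    role, action = match.group("role"), match.group("action")
    return [
        f"GIVEN a logged-in {role}",
        f"WHEN the {role} attempts to {action}",
        f"THEN verify the {role} can {action}",
    ]

steps = story_to_test_steps(
    "As a customer, I want to search the product catalog "
    "so that I can find items quickly."
)
```

Restricting the input language is what makes the derivation mechanical: every slot of the template maps to a predictable slot in the generated test, which a tool can then translate into executable Selenium steps.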

Keywords: automated testing, natural language, restricted user story modeling, software engineering, software testing, test case specification, transformation and automation, user story, web application testing

Procedia PDF Downloads 371
899 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services

Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme

Abstract:

Much of the data that inform the decisions of governments, corporations, and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, and user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g., table detection) and natural language processing (e.g., entity detection and disambiguation) are proposed.
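The decomposition into steps can be pictured as a pipeline of stages threaded over a shared state (a schematic sketch with deliberately naive placeholder stages, not the production system):

```python
from functools import reduce

# placeholder stages (each real stage would be a trained ML model)
def ingest(doc):        return {"raw": doc}
def parse_tables(s):    return {**s, "tables": [t for t in s["raw"].split("\n") if "|" in t]}
def detect_entities(s): return {**s, "entities": [w for w in s["raw"].split() if w.istitle()]}
def extract_record(s):  return {"entities": s["entities"], "n_table_rows": len(s["tables"])}

PIPELINE = [ingest, parse_tables, detect_entities, extract_record]

def run(doc):
    """Thread a document through each stage in order."""
    return reduce(lambda state, stage: stage(state), PIPELINE, doc)

record = run("Revenue Q3 | 1.2bn\nAcme Corp reported growth")
```

Phrasing each stage as an independent function with a shared state is what lets individual steps (table detection, entity disambiguation) be swapped for machine-learning models without restructuring the pipeline.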

Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing

Procedia PDF Downloads 96
898 Light-Weight Network for Real-Time Pose Estimation

Authors: Jianghao Hu, Hongyu Wang

Abstract:

An effective and efficient human pose estimation algorithm is important for real-time human pose estimation on mobile devices. This paper proposes a light-weight human key point detection algorithm, Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency. LWPE uses a feature pyramid network (FPN) to fuse high-resolution, semantically weak features with low-resolution, semantically strong features. Meanwhile, with multi-scale prediction, the result predicted from the low-resolution feature map is stacked onto the adjacent higher-resolution feature map to provide intermediate supervision of the network and continuously refine the results. In the last step, the key point coordinates predicted at the highest resolution are used as the final output of the network. For key points that are difficult to predict, LWPE adopts an online hard key point mining strategy to focus on them. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. The algorithm maintains high-precision performance even though the model contains only 3.9M parameters, and it can run at 225 frames per second (FPS) on a generic graphics processing unit (GPU).
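The parameter savings from depthwise separable convolutions can be checked with a quick count (illustrative arithmetic; the layer dimensions are hypothetical, not LWPE's actual configuration):

```python
def conv_params(k, c_in, c_out):
    """Standard convolution: one k x k x c_in kernel per output channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise (one k x k kernel per input channel) followed by a
    pointwise (1 x 1) convolution across channels."""
    return k * k * c_in + c_in * c_out

# hypothetical layer: 3x3 kernel, 128 input and 128 output channels
standard = conv_params(3, 128, 128)
separable = depthwise_separable_params(3, 128, 128)
reduction = standard / separable  # roughly 8x fewer parameters
```

The factorization trades a small accuracy cost for a parameter count that shrinks by nearly k² when channel counts are large, which is exactly what a 3.9M-parameter real-time model relies on.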

Keywords: depthwise separable convolutions, feature pyramid network, human pose estimation, light-weight backbone

Procedia PDF Downloads 140
897 Voice Liveness Detection Using Kolmogorov Arnold Networks

Authors: Arth J. Shah, Madhu R. Kamble

Abstract:

Voice biometric liveness detection aims to certify that the voice presented during authentication is genuine and not a recording or synthetic voice. With the rise of deepfakes and other similarly sophisticated spoofing techniques, it is becoming challenging to ensure that the person on the other end is a live speaker. A Voice Liveness Detection (VLD) system is a set of security measures that detects and prevents voice spoofing attacks. Motivated by the recent development of the Kolmogorov-Arnold Network (KAN), based on the Kolmogorov-Arnold representation theorem, we propose KAN for the VLD task. To date, multilayer perceptron (MLP) based classifiers have typically been used for such classification tasks. We aim to capture not only the compositional structure of the model but also to optimize the values of the univariate functions. This study presents both mathematical and experimental analyses of KAN for VLD tasks, thereby opening a new perspective for scientists working on speech and signal processing tasks. The study combines traditional signal processing with new deep learning models, which proves to be a better combination for VLD tasks. The experiments are performed on the POCO and ASVspoof 2017 V2 databases. We used constant-Q transform (CQT), Mel, and short-time Fourier transform (STFT) based front-end features, with CNN, BiLSTM, and KAN as back-end classifiers. The best accuracy is 91.26% on the POCO database using STFT features with the KAN classifier. On the ASVspoof 2017 V2 database, the lowest EER we obtained was 26.42%, using CQT features and KAN as the classifier.
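The key structural difference from an MLP is that a KAN places learnable univariate functions on edges and then sums them, following the Kolmogorov-Arnold composition. A toy sketch of one such layer, with the edge functions parameterized as piecewise-linear interpolants over a fixed grid (a simplification of the spline parameterization used in actual KANs; the "learned" values here are hand-picked):

```python
import numpy as np

def kan_edge(x, grid, values):
    """One KAN edge: a learnable univariate function, here a
    piecewise-linear interpolant over a fixed grid."""
    return np.interp(x, grid, values)

def kan_layer(x, grid, values):
    """Tiny 2-input, 1-output KAN layer: a sum of per-input univariate
    functions, per the Kolmogorov-Arnold composition idea."""
    return sum(kan_edge(x[i], grid, values[i]) for i in range(len(x)))

grid = np.linspace(-1.0, 1.0, 5)
values = np.array([grid**2, np.abs(grid)])  # two hand-picked "edge functions"
y = kan_layer(np.array([0.5, -0.5]), grid, values)
```

Training a KAN amounts to optimizing the `values` arrays (the shapes of the edge functions) rather than scalar weights feeding a fixed activation, which is the "optimize the values of the univariate functions" point made above.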

Keywords: Kolmogorov Arnold networks, multilayer perceptron, pop noise, voice liveness detection

Procedia PDF Downloads 20
896 Design of a Permanent Magnet Based Focusing Lens for a Miniature Klystron

Authors: Kumud Singh, Janvin Itteera, Priti Ukarde, Sanjay Malhotra, P. P. Marathe, Ayan Bandyopadhay, Rakesh Meena, Vikram Rawat, L. M. Joshi

Abstract:

Applying permanent magnet technology to high-frequency miniature klystron tubes intended for space applications improves the efficiency and operational reliability of these tubes. Nevertheless, the task of generating magnetic focusing forces to eliminate beam divergence, once the beam crosses the electrostatic focusing regime and enters the drift region in the RF section of the tube, poses several challenges. Building a high-quality magnetic focusing lens that meets the beam optics requirements in the cathode gun and RF interaction region is considered one of the critical issues for these high-frequency miniature tubes. In this paper, we present the electromagnetic design and particle trajectory studies in combined electric and magnetic fields, optimizing the magnetic circuit using 3D finite element method (FEM) analysis software. A rectangular configuration of the magnet was constructed to accommodate apertures for the input and output waveguide sections and to facilitate coupling of electromagnetic fields into the input klystron cavity and out of the output klystron cavity through coupling loops. Prototype lenses have been built and tested after integration with the klystron tube. We discuss the design requirements and challenges, and the results from beam transmission tests of the prototype lens.

Keywords: beam transmission, Brillouin, confined flow, miniature klystron

Procedia PDF Downloads 435
895 Woodcast Is Ecologically Sound and Tolerated by Majority of Patients

Authors: R. Hassan, J. Duncombe, E. Darke, A. Dias, K. Anderson, R. G. Middleton

Abstract:

Background: NHS England has set itself the task of delivering a "Net Zero" National Health Service by 2040. It is incumbent upon all health care practitioners to work towards this goal, and orthopaedic surgeons are no exception. Distal radial fractures are the most common fractures sustained by the adult population, yet studies on the individual patient experience of casting materials are lacking. The aim of this study was to assess patients' satisfaction and outcomes with woodcast used in the conservative management of distal radius fractures. Methods: For all patients managed with woodcast in our unit, we administered a structured questionnaire that included the Patient Rated Wrist Evaluation (PRWE) score, the EQ-5D-5L score, and a numerical pain score at the time of injury and six weeks after. Results: 30 patients were initially managed with woodcast. 80% of patients tolerated woodcast for the full duration of their treatment; the remaining 20% did not tolerate it and had their casts removed within 48 hours. Of those who completed treatment, 79.1% were satisfied with woodcast comfort, 66% were very satisfied with its weight, 70% were satisfied with temperature and sweatiness, 62.5% were very satisfied with the smell/odour, and 75% were satisfied with the level of support woodcast provided. During their treatment, 83.3% of patients rated their pain as five or less. Conclusion: None of those who completed their treatment in woodcast required further intervention or used the open appointment because of ongoing wrist problems. In conclusion, when woodcast was tolerated, patient satisfaction and outcomes were good. However, since 20% of patients in our series could not tolerate woodcast, we suggest that a comparison between the widely used synthetic plaster of Paris casting and woodcast is warranted.

Keywords: distal radius fractures, ecological cast, sustainability, woodcast

Procedia PDF Downloads 75
894 Raising Linguistic Awareness through Metalinguistic Written Corrective Feedback

Authors: Orit Zeevy-Solovey

Abstract:

Grammar has traditionally been taught for its own sake, emphasizing rules and drills. In recent years, however, more emphasis has been given to communicative competence. Current research suggests that form-focused instruction is notably effective when incorporated into a meaningful communicative context. It is maintained that writing tasks related to students' academic fields will encourage them to express themselves openly on topics that are close to their hearts, without feeling too uneasy about grammatical forms. The teacher can further reduce students' apprehension of grammar by announcing that credit will be given merely for doing the task and that grammar mistakes will not affect the grade. Students' linguistic errors can then be corrected by giving metalinguistic feedback, which involves providing learners with some kind of explicit remark about the nature of the errors they have made. Research has also shown that learners' developmental readiness is an important factor influencing the effectiveness of written corrective feedback: larger effect sizes appear as the proficiency level rises. The purposes of this paper are to demonstrate how grammar can be taught indirectly through writing tasks and, more specifically, how metalinguistic written corrective feedback given to advanced English as a Foreign Language (EFL) students can raise their linguistic awareness. Since errors are not corrected directly, the students have to work out the needed corrections by exploring grammar books and websites. Longitudinal studies of metalinguistic written corrective feedback, comparing the number of errors in students' first and fourth compositions, have shown a decrease in errors.

Keywords: EFL, linguistic awareness, metalinguistic corrective feedback, teaching grammar through writing

Procedia PDF Downloads 118
893 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets, which are often modeled by networks. Statistical models such as the stochastic block model have proven effective in explaining the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network, rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We exploit these structures by incorporating the renewal non-backtracking random walk (RNBRW) into the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived from RNBRW. We attempt to make the best use of asymptotic results on this distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of RNBRW edge weights.
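The walk itself is easy to sketch (a simplified illustration of the RNBRW idea, not the authors' implementation): walks never immediately backtrack, and when a walk closes a cycle, the cycle-closing ("retracing") edge is credited and the walk restarts. Edges inside communities close cycles more often, so they accumulate larger weights:

```python
import random

def rnbrw_edge_weights(adj, n_walks=2000, seed=7):
    """Accumulate retracing-edge weights from renewal non-backtracking
    random walks on an undirected graph given as an adjacency dict."""
    rng = random.Random(seed)
    weights = {}
    nodes = list(adj)
    for _ in range(n_walks):
        u = rng.choice(nodes)
        prev, visited = None, {u}
        while True:
            choices = [v for v in adj[u] if v != prev]  # no backtracking
            if not choices:
                break  # dead end: abandon this walk
            v = rng.choice(choices)
            if v in visited:  # cycle completed: credit the retracing edge
                edge = tuple(sorted((u, v)))
                weights[edge] = weights.get(edge, 0) + 1
                break
            visited.add(v)
            prev, u = u, v
    return weights

# two triangles (0-1-2 and 3-4-5) joined by the single bridge edge 2-3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
w = rnbrw_edge_weights(adj)
```

On this toy graph the bridge edge can never close a cycle (no second path joins the triangles), so it receives zero weight while the within-triangle edges accumulate all of it, which is precisely the signal the goodness-of-fit test exploits.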

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 137