Search results for: text segmentation
1012 Segmentation of Gray Scale Images of Dropwise Condensation on Textured Surfaces
Authors: Helene Martin, Solmaz Boroomandi Barati, Jean-Charles Pinoli, Stephane Valette, Yann Gavet
Abstract:
In the present work, we developed an image processing algorithm to measure water droplet characteristics during dropwise condensation on pillared surfaces. The main problem in this process is the similarity in shape and size between the water droplets and the pillars. The developed method divides droplets into four main groups based on their size and applies a corresponding algorithm to segment each group. These algorithms generate binary images of droplets based on both their geometrical and intensity properties. The information related to droplet evolution over time, including mean radius and number of drops per unit area, is then extracted from the binary images. The developed image processing algorithm is verified against manual detection and applied to two different sets of images corresponding to two kinds of pillared surfaces.
Keywords: dropwise condensation, textured surface, image processing, watershed
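A minimal sketch of marker-based watershed segmentation for droplet-like blobs, in the spirit of the keywords above. The function name, threshold choice, and `min_distance` value are illustrative assumptions, not the authors' code; it assumes droplets appear brighter than the background (invert the threshold otherwise).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, feature, segmentation, measure

def segment_droplets(path):
    img = io.imread(path, as_gray=True)
    # Threshold to separate droplets from the textured background
    binary = img > filters.threshold_otsu(img)
    # Distance transform + local maxima give one marker per droplet
    distance = ndi.distance_transform_edt(binary)
    coords = feature.peak_local_max(distance, min_distance=5, labels=binary)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = segmentation.watershed(-distance, markers, mask=binary)
    # Per-droplet statistics: mean radius via equivalent diameter, drop count
    props = measure.regionprops(labels)
    radii = [p.equivalent_diameter / 2 for p in props]
    return labels, float(np.mean(radii)), len(props)
```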
Procedia PDF Downloads 224
1011 Real-Time Pothole Detection Using YOLOv11
Authors: Kosuri Harshitha Durga, Ritesh Yaduwanshi
Abstract:
Potholes are one of the most significant problems affecting road safety and the quality of infrastructure. The aim of pothole detection using OpenCV is to design an automated system that detects and maps potholes on road surfaces to improve road safety and ease the maintenance process. The system is based on computer vision methods that use still images or video footage taken by cameras mounted on cars or drones. This paper presents an analysis of the implementation of the YOLOv11 model for pothole detection and demonstrates the greater effectiveness of this method with regard to accuracy, speed, and inference efficiency. The improved system supports prompt diagnosis and timely repair, limiting damage to the infrastructure and enhancing road safety. The technology can also serve as a safety feature for the car itself when installed in ADAS systems that alert drivers in real time so they can avoid driving over potholes.
Keywords: deep learning, potholes, segmentation, object detection, YOLO
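A hedged sketch of single-frame pothole inference with the Ultralytics YOLO API, assuming a model fine-tuned on a pothole dataset. The weight file name and the confidence threshold are placeholders, not details from the paper, and the exact API may differ across Ultralytics versions.

```python
from ultralytics import YOLO

model = YOLO("pothole_best.pt")          # placeholder: weights fine-tuned for potholes
results = model.predict("road_frame.jpg", conf=0.4)  # a frame from a car/drone camera
for r in results:
    for box in r.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()   # bounding box in pixel coordinates
        print(f"pothole at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
              f"confidence {float(box.conf):.2f}")
```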
Procedia PDF Downloads 5
1010 Typology of Gaming Tourists Based on the Perception of Destination Image
Authors: Mi Ju Choi
Abstract:
This study investigated the perceptions of gaming tourists toward Macau and developed a typology of gaming tourists. A total of 1,497 responses from tourists in Macau were collected through a convenience sampling method. The dimensions of multi-culture, convenience, economy, gaming, and unsafety were subsequently extracted as the factors of gaming tourists' perception of Macau. Cluster analysis was performed using the delineated factors (tourists' perception of Macau). Four heterogeneous groups were generated, namely, gaming lovers (n = 467, 31.2%), exotic lovers (n = 509, 34.0%), reasonable budget seekers (n = 269, 18.0%), and convenience seekers (n = 252, 16.8%). Further analysis was performed to investigate any differences in gaming behavior and tourist activities. The findings are expected to contribute to the efforts of destination marketing organizations (DMOs) in establishing effective business strategies, provide a profile of gaming tourists in certain market segments, and assist DMOs and casino managers in establishing more effective marketing strategies for target markets.
Keywords: destination image, gaming tourists, Macau, segmentation
Procedia PDF Downloads 302
1009 An Exploratory Study of Potential Cruisers Preferences Using Choice Experiment and Latent Class Modelling
Authors: Renuka Mahadevan, Sharon Chang
Abstract:
This exploratory study is based on potential cruisers' monetary valuation of cruise attributes. Using a choice experiment, monetary trade-offs between four different cruise attributes are examined, with Australians as a case study. We found that 50% of the sample valued the variety of onboard cruise activities the least, while 30% were willing to pay A$87 per day for cruise-organised activities, and the remaining 20% regarded an ocean view as most valuable, at A$125. Latent class modelling was then applied, and the results revealed that potential cruisers' valuation of the attributes can be used to segment the market into adventurers, the budget conscious, and comfort lovers. Evidence showed that socio-demographics are not as insightful as lifestyle preferences in developing cruise packages and pricing that would appeal to potential cruisers. Marketing also needs to counter potential cruisers' belief that cruises are often costly and that cruising can be done later in life.
Keywords: latent class modelling, choice experiment, potential cruisers, market segmentation, willingness to pay
Procedia PDF Downloads 82
1008 The Making of a Yijing (Classic of Changes) Cultural Sphere in Asia
Authors: Ng Wai Ming
Abstract:
The Yijing (Classic of Changes) is one of the most influential Chinese classics, and its text, images, and divination have been widely studied and used by people around the world from past to present. Its impact in Asia has been particularly strong due to cultural and geographical proximity. Based on many years of textual study of the history of the Yijing in the Sinosphere, the author attempts to identify various levels of acceptance and localization of the Yijing in different Asian regions, including Japan, Korea, the Ryukyu Kingdom, Vietnam, Mongolia, and Tibet. The study creates a new concept of a “Yijing cultural sphere” to explain the popularization and indigenization of the Yijing in Asia.
Keywords: Classic of Changes, Asia, Sinosphere, localization
Procedia PDF Downloads 62
1007 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that, over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text so as to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. The paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society goes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: computational analysis, gendered grammar, misogynistic language, neural networks
Procedia PDF Downloads 122
1006 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University
Authors: Saadia Elamin
Abstract:
The quasi-universal use of smartphones with an internet connection available all the time makes it a reflex action for translation undergraduates, once they encounter the least translation problem, to turn to the freely available web resource: Google Translate. As with other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Here, instead of interfering with students' learning by providing ready-made solutions which might not always fit the contexts of use, it can help to consolidate the skills of analysis and transfer which students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost, before jumping into the search for target language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning in the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development in the way translation is taught, senior students, after being trained in post-editing machine translation output, are authorized to use Google Translate in classwork and assignments. Third, the paper elaborates on the findings of this case study, which has demonstrated that Google Translate, if used at the appropriate levels of training, can help to enhance students' ability to perform different translation tasks. This help extends from the search for terms and expressions to the tasks of drafting the target text, revising its content, and finally editing it. In addition, using Google Translate in this way fosters a reflexive and critical attitude towards web resources in general, thus maximizing the benefit gained from them in preparing students to meet the requirements of the modern translation job market.
Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools
Procedia PDF Downloads 476
1005 Using Textual Pre-Processing and Text Mining to Create Semantic Links
Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo
Abstract:
This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and, thus, generate new concepts and new semantic links. Even using more specific vocabularies within the oil domain, our approach has achieved satisfactory results, suggesting that the proposal can be applied to other domains and languages, requiring only minor adjustments.
Keywords: semantic links, data mining, linked data, SKOS
Procedia PDF Downloads 181
1004 A Novel Machine Learning Approach to Aid Agrammatism in Non-fluent Aphasia
Authors: Rohan Bhasin
Abstract:
Agrammatism in non-fluent aphasia can be defined as a language disorder wherein a patient can only use content words (nouns, verbs, and adjectives) for communication; their speech is devoid of functional word types like conjunctions and articles, generating speech with extremely rudimentary grammar. Past approaches involve speech therapy of some order, with conversation analysis used to analyse pre-therapy speech patterns and qualitative changes in conversational behaviour after therapy. We describe a novel method to generate functional words (prepositions, articles) around content words (nouns, verbs, and adjectives) using a combination of natural language processing and deep learning algorithms. The approach the paper investigates uses LSTMs in a sequence-to-sequence (seq2seq) setup: the model takes in a sequence of inputs and outputs a sequence. This approach needs a significant amount of training data, with each training example containing a pair such as (content words, complete sentence). We generate such data by starting with complete sentences from a text source and removing the functional words to obtain just the content words. However, the approach requires a lot of training data to produce coherent output. The assumption of this approach is that the content words received in the input are preserved, i.e., they are not altered after the functional grammar is slotted in. This is a potential limitation in cases of severe agrammatism, where such ordering might not be inherently correct. The approach can be used to assist communication in cases of mild agrammatism in non-fluent aphasia. Thus, by generating these function words around the content words, we can provide meaningful sentence options to the patient for articulate conversations. Our project thereby translates the use case of generating sentences from content-specific words into an assistive technology for non-fluent aphasia patients.
Keywords: aphasia, expressive aphasia, assistive algorithms, neurology, machine learning, natural language processing, language disorder, behaviour disorder, sequence to sequence, LSTM
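A minimal sketch of the training-data generation step the abstract describes: strip function words from complete sentences to form (content words, full sentence) pairs for the seq2seq model. The stop-word list and sample corpus are illustrative assumptions.

```python
# Illustrative function-word list; a real system would use a fuller inventory
FUNCTION_WORDS = {"a", "an", "the", "and", "but", "or", "in", "on", "at",
                  "to", "of", "is", "are", "was", "were", "with", "for"}

def make_pair(sentence):
    tokens = sentence.lower().split()
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return " ".join(content), sentence.lower()

corpus = ["The dog is in the garden", "She walked to the store"]
pairs = [make_pair(s) for s in corpus]
# [('dog garden', 'the dog is in the garden'),
#  ('she walked store', 'she walked to the store')]
```

During training, the left element is the seq2seq input and the right element is the target; at inference time, a patient's content-word utterance plays the role of the left element.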
Procedia PDF Downloads 164
1003 Combining Corpus Linguistics and Critical Discourse Analysis to Study Power Relations in Hindi Newspapers
Authors: Vandana Mishra, Niladri Sekhar Dash, Jayshree Charkraborty
Abstract:
This paper focuses on the application of corpus linguistics techniques to the critical discourse analysis (CDA) of Hindi newspapers. While corpus linguistics is the study of language as expressed in corpora (samples) of 'real world' text, CDA is an interdisciplinary approach to the study of discourse that views language as a form of social practice. CDA has mainly been studied from a qualitative perspective. However, recent studies have begun combining corpus linguistics with CDA in analyzing large volumes of text to study existing power relations in society. The corpus under study is also of a sizable amount (1 million words of Hindi newspaper text), and its analysis requires an alternative analytical procedure. So, we have combined the quantitative approach, i.e., the use of corpus techniques, with CDA's traditional qualitative analysis. In this context, we have focused on keyword analysis, sorting concordance lines of the selected keywords, and calculating collocates of the keywords. We have made use of the WordSmith tool for all these analyses. The analysis starts with identifying the keywords in the political news corpus when compared with the main news corpus. The keywords are extracted from the corpus based on their keyness, calculated through statistical tests such as the chi-squared test and the log-likelihood test on the frequent words of the corpus. Some of the top occurring keywords are मोदी (Modi), भाजपा (BJP), कांग्रेस (Congress), सरकार (Government) and पार्टी (Political party). This is followed by concordance analysis of these keywords, which generates thousands of lines; we select a few lines and examine them based on our objective. We have also calculated the collocates of the keywords based on their Mutual Information (MI) score. Both concordance and collocation help to identify lexical patterns in the political texts. Finally, all the quantitative results derived from the corpus techniques are interpreted qualitatively in accordance with CDA theory to examine the ways in which political news discourse produces social and political inequality, power abuse, or domination.
Keywords: critical discourse analysis, corpus linguistics, Hindi newspapers, power relations
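A sketch of the log-likelihood keyness statistic (Dunning's G²) mentioned above, used to extract keywords by comparing a word's frequency in a study corpus (here, political news) against a reference corpus. The corpus counts in the example are illustrative, not figures from the paper.

```python
import math

def log_likelihood(a, b, c, d):
    """a, b: word frequency in study/reference corpus; c, d: corpus sizes."""
    e1 = c * (a + b) / (c + d)   # expected frequency in the study corpus
    e2 = d * (a + b) / (c + d)   # expected frequency in the reference corpus
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

# e.g., a word occurring 120 times in a 1M-word political corpus vs.
# 40 times in a 2M-word reference corpus: a high value marks a keyword
print(log_likelihood(120, 40, 1_000_000, 2_000_000))
```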
Procedia PDF Downloads 225
1002 IT-Based Global Healthcare Delivery System: An Alternative Global Healthcare Delivery System
Authors: Arvind Aggarwal
Abstract:
We have developed a comprehensive global healthcare delivery system based on information technology. It has a medical consultation system where a virtual consultant can give medical consultations to patients and doctors at a digital medical centre after reviewing the patient's EMR file, consisting of the patient's history and investigations in voice, image, and data formats. The system has a surgical operation system too, where a remote robotic consultant can conduct surgery at a robotic surgical centre. Instant speech and text translation is incorporated in the software, so that the patient's speech and text (language) can be translated into the consultant's language and vice versa. A consultant of any specialty (surgeon or physician) based in any country can provide instant healthcare consultation to any patient in any country without loss of time. Robotic surgeons based in a tertiary care hospital in any country can perform remote robotic surgery through patient-friendly telemedicine and tele-surgical centres. The patient EMR, financial data, and data of all the consultants and robotic surgeons are stored in the cloud. It is a complete, comprehensive business model for a healthcare medical and surgical delivery system. The whole system is self-financing and can be implemented in any country. The entire system uses paperless, filmless techniques. This eliminates the use of consumables, thereby substantially reducing the costs they incur. The consultants receive virtual patients in the form of EMRs; thus, the consultant saves the time and expense of travelling to the hospital to see patients. The consultant gets an electronic file ready for reporting and diagnosis. Hence, the time spent on physical examination of the patient is saved, and the consultant can therefore spend quality time studying the EMR/virtual patient and giving instant advice. The time consumed per patient is reduced, so more patients can be seen, and the cost of consultation per patient is therefore reduced. The additional productivity of the consultants can be channelled to serve rural patients devoid of doctors.
Keywords: e-health, telemedicine, telecare, IT-based healthcare
Procedia PDF Downloads 181
1001 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives
Authors: Chao Wang
Abstract:
Under the rapid innovation of information technology, the media play a very important role in the dissemination of information, and different generations face them in totally different ways. The involvement of narrative images, however, provides more possibilities for narrative text. "Images" are manufactured through the processes of aperture, camera shutter, and photosensitive development, recorded and stamped on paper, or displayed on a computer screen; they are concretely saved. They exist in different forms as files, data, or evidence, as the ultimate record of events. Through the interface of media and network platforms and the special visual field of the viewer, a bodily space exists and extends outward, as thin as a sebum layer, extremely soft and delicate, with real, full tension. The physical space of the sebum layer confuses the fact that physical objects exist, and it needs to be established under a perceived consensus. As at the scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes a "topical-similarity" immersion, leading contemporary social practice communities, groups, and network users into a kind of illusion without presence, i.e., a non-real illusion. From the investigation and discussion of the literature, the variability characteristics of time in digital movie editing, manufacture, and production (for example, slices, rupture, set, and reset) are analyzed. The interactive eBook has a unique interaction in "Waiting-Greeting" and "Expectation-Response" that gives the operation of the image narrative structure more functional interpretations. The works of digital editing and interactive technology are combined, and the concept and results are further analyzed. After the digitization of interventional imaging and interactive technology, real events remain linked, and the media handling of this relationship cannot be severed, as shown through movies, interactive art, and practical case discussion and analysis. The audience needs more rational thinking about the authenticity of the text carried by images.
Keywords: sebum layer, topical and similarity, interactive technology, image narrative
Procedia PDF Downloads 389
1000 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals, from Interpol or other law enforcement agencies, are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate their similarity. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
Keywords: RNN, GAN, NLP, facial composition, criminal investigation
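A sketch of the two evaluation metrics named in the abstract, computed with scikit-image between a generated face and the ground-truth photo. The file names are placeholders; `channel_axis=-1` assumes a recent scikit-image version (older releases use `multichannel=True`).

```python
from skimage import io, transform
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

generated = io.imread("generated_face.png")       # placeholder file names
truth = io.imread("ground_truth_face.png")
# Resize the ground truth to match the generated image before comparison
truth = transform.resize(truth, generated.shape,
                         preserve_range=True).astype(generated.dtype)

ssim = structural_similarity(truth, generated, channel_axis=-1)
psnr = peak_signal_noise_ratio(truth, generated)
print(f"SSIM={ssim:.3f} (1.0 = identical), PSNR={psnr:.1f} dB")
```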
Procedia PDF Downloads 164
999 New Segmentation of Piecewise Linear Regression Models Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
Piecewise linear regression models are very flexible models for modeling data. When piecewise linear regression models are fitted to data, the parameters are generally not known. This paper studies the problem of parameter estimation for piecewise linear regression models. The method used to estimate the parameters is the Bayesian method, but the Bayes estimator cannot be found analytically. To overcome this problem, the reversible jump MCMC algorithm is proposed. The reversible jump MCMC algorithm generates a Markov chain that converges to the limiting distribution, namely the posterior distribution of the parameters of the piecewise linear regression models. The resulting Markov chain is then used to calculate the Bayes estimator for the parameters of the piecewise linear regression models.
Keywords: regression, piecewise, Bayesian, reversible jump MCMC
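The paper's RJMCMC sampler is beyond a short sketch, so the following shows only the underlying model it estimates: a one-breakpoint piecewise linear regression, fitted here by exhaustive least squares rather than by the reversible jump sampler. Data and parameter values are illustrative.

```python
import numpy as np

def fit_one_breakpoint(x, y):
    """Find the breakpoint minimizing the two-segment residual sum of squares."""
    best = None
    for k in range(2, len(x) - 2):          # candidate breakpoint index
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)    # per-segment linear fit
            sse += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if best is None or sse < best[0]:
            best = (sse, x[k])
    return best  # (residual sum of squares, breakpoint location)

# Synthetic data: slope changes at x = 4
x = np.linspace(0, 10, 100)
y = np.where(x < 4, 2 * x, 8 + 0.5 * (x - 4)) + np.random.normal(0, 0.3, x.size)
print(fit_one_breakpoint(x, y))
```

RJMCMC additionally treats the *number* of breakpoints as unknown, proposing jumps between models of different dimension; this fixed-dimension fit illustrates only the within-model estimation.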
Procedia PDF Downloads 521
998 Exploration of the Protection Theory of Chinese Scenic Heritage Based on Local Chronicles
Authors: Mao Huasong, Tang Siqi, Cheng Yu
Abstract:
The cognition and practice of Chinese landscapes have a distinct uniqueness. The intergenerational inheritance of urban and rural landscapes is a common objective fact, which has created a unique type of heritage in China: scenic heritage. The current generalization of the concept of scenic heritage has contributed to a lack of innovation in corresponding protection practices. Therefore, clarifying the concepts and connotations of scenery and scenic heritage, and identifying the protection objects of scenic heritage and the methods and approaches of intergenerational inheritance, can provide theoretical support for the practice of Chinese scenic heritage protection and contribute Chinese wisdom to the transformation of world heritage sites. Taking ancient Shaoxing, which has a long time span and rich descriptions of scenic types and quantities, as the research object, and using local chronicles as the basic research material, this study traces ancient scenic practices and describes them in depth both textually and spatially, based on text analysis, word-frequency analysis, case statistics, and historical-geographical spatial annotation methods. We have constructed a scenic heritage identification method based on the basic connotation characteristics and morphological representation characteristics of natural-cultural correlations, combined with the intergenerational and representative characteristics of scenic heritage. We summarize the bidirectional integration of "scenic spots" and "form scenic spots", and of "outstanding people" and "local spirits", in the formation process of scenic heritage. In inheritance, the process is guided by Confucian values of education; in communication, the cultural interpretation constructed by scenery and the way of landscape life are used to strengthen the intergenerational inheritance of natural and artificial material elements and intangible spirits. As a unique type of heritage in China, scenic heritage should improve its standards, values, and connotations in current protection practices and actively absorb historical experience.
Keywords: scenic heritage, heritage protection, cultural landscape, Shaoxing, Chinese landscape
Procedia PDF Downloads 70
997 LiDAR Based Real Time Multiple Vehicle Detection and Tracking
Authors: Zhongzhen Luo, Saeid Habibi, Martin v. Mohrenschildt
Abstract:
Self-driving vehicles require a high level of situational awareness in order to maneuver safely in real-world driving conditions. This paper presents a LiDAR-based real-time perception system that is able to process raw sensor data for multiple-target detection and tracking in a dynamic environment. The proposed algorithm is nonparametric and deterministic; that is, no assumptions or a priori knowledge are needed about the input data, and no initialization is required. Additionally, the proposed method works directly on the three-dimensional data generated by the LiDAR, without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real-time clustering algorithm based on radially bounded nearest neighbors (RBNN) is applied. The Hungarian algorithm and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
Keywords: lidar, segmentation, clustering, tracking
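A sketch of radially bounded nearest-neighbor (RBNN) clustering on a point cloud: points closer than a radius r are merged into one cluster via flood fill over r-neighborhoods. A KD-tree keeps the neighbor queries fast; the radius value and the random cloud are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def rbnn_cluster(points, r=0.5):
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)   # -1 = unassigned
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        # Flood-fill all points reachable through r-neighborhoods
        stack, labels[i] = [i], cluster_id
        while stack:
            j = stack.pop()
            for k in tree.query_ball_point(points[j], r):
                if labels[k] == -1:
                    labels[k] = cluster_id
                    stack.append(k)
        cluster_id += 1
    return labels

cloud = np.random.rand(1000, 3) * 20   # stand-in for one LiDAR frame
print(np.unique(rbnn_cluster(cloud)).size, "clusters")
```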
Procedia PDF Downloads 426
996 Caregiver Training Results in Accurate Reporting of Stool Frequency
Authors: Matthew Heidman, Susan Dallabrida, Analice Costa
Abstract:
Background: Accuracy of caregiver-reported outcomes is essential for the success of infant growth and tolerability studies. Crying/fussiness, stool consistency, and other gastrointestinal characteristics are important tolerability parameters, and inter-caregiver reporting can involve a significant amount of subjectivity and vary greatly within a study, compromising data. This study sought to elucidate how caregiver-reported questions related to stool frequency are answered before and after a short training, and how training impacts caregivers' understanding and how they answer the question. Methods: A digital survey was issued for 90 days in the US (n=121) and 30 days in Mexico (n=88), targeting respondents with children ≤4 years of age. Respondents were asked a question in two formats, first without a line of training text and second with a line of training text. The question set was as follows: “If your baby had stool in his/her diaper and you changed the diaper and 10 min later there was more stool in the diaper, how many stools would you report this as?”, followed by the same question beginning with “If you were given the instruction that IF there are at least 5 minutes in between stools, then it counts as two (2) stools…”. Four response items were provided for both questions: 1) 2 stools, 2) 1 stool, 3) it depends on how much stool was in the first versus the second diaper, 4) there is not enough information to be able to answer the question. Response frequencies between questions were compared. Results: Responses to the question without training saw some variability in the US, with 69% selecting “2 stools”, 11% selecting “1 stool”, 14% selecting “it depends on how much stool was in the first versus the second diaper”, and 7% selecting “there is not enough information to be able to answer the question”; in Mexico, respondents selected these options at 9%, 78%, 13%, and 0%, respectively. However, responses to the question after training saw more consolidation in the US, with 85% of respondents selecting “2 stools”, representing an increase in those selecting the correct answer. Additionally, in Mexico, 84% of respondents selected the correct response after training, likewise representing an increase. Conclusions: Caregiver-reported outcomes are critical for infant growth and tolerability studies; however, they can be highly subjective and show high variability of responses without guidance. Training is critical to standardize caregivers' perspectives on how to answer questions accurately in order to provide an accurate dataset.
Keywords: infant nutrition, clinical trial optimization, stool reporting, decentralized clinical trials
Procedia PDF Downloads 96
995 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract:
In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming old ways of caring for health. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpreting X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential issues. Therefore, it is necessary to have an approach that accurately localizes and classifies fractures in real time. The research revealed that the optimal approach needs to address the stated problem by employing appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from 12 different fracture patterns: avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intra-articular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, built on the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize the various fracture patterns that are instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring the timely and accurate identification of bone fractures for the best treatment outcomes.
Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16
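A hedged PyTorch sketch of the classification ensemble: average the softmax outputs of ResNet18 and VGG16 over the 12 fracture classes, yielding both a predicted class and a confidence score. The head-replacement details and input size are standard assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12  # the fracture patterns listed in the abstract

resnet = models.resnet18(weights="IMAGENET1K_V1")
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)

vgg = models.vgg16(weights="IMAGENET1K_V1")
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

resnet.eval()
vgg.eval()

def ensemble_predict(x):
    """x: batch of radiographs, shape (N, 3, 224, 224)."""
    with torch.no_grad():
        # Average the two models' class probabilities
        p = (torch.softmax(resnet(x), dim=1) +
             torch.softmax(vgg(x), dim=1)) / 2
    conf, pred = p.max(dim=1)     # confidence score + predicted class
    return pred, conf

pred, conf = ensemble_predict(torch.randn(4, 3, 224, 224))
```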
Procedia PDF Downloads 124
994 A Randomized, Controlled Trial to Test Habit Formation Theory for Low Intensity Physical Exercise Promotion in Older Adults
Authors: Patrick Louie Robles, Jerry Suls, Ciaran Friel, Mark Butler, Samantha Gordon, Frank Vicari, Joan Duer-Hefele, Karina W. Davidson
Abstract:
Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low, despite scientific evidence that increased physical activity is positively associated with health benefits. Behavior change techniques (BCTs) have demonstrated some effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a personalized-trials (N-of-1) design, delivered virtually, to evaluate the efficacy of five BCTs in increasing low-intensity physical activity (by 2,000 steps of walking per day) in adults aged 45-75 years. The five BCTs described in habit formation theory are goal setting, action planning, rehearsal, rehearsal in a consistent context, and self-monitoring. The study recruited health system employees in the target age range who had no mobility restrictions and expressed interest in increasing their daily activity by a minimum of 2,000 steps per day at least five days per week. Participants were sent a Fitbit Charge 4 fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. Participants then engaged remotely with a clinical research coordinator to establish a "walking plan" that included a time and day interval (e.g., between 7 am and 8 am, Monday-Friday), a location for the walk (e.g., a park), and how much time the plan would need to achieve a minimum of 2,000 steps over their baseline average step count (e.g., 20 minutes). All elements of the walking plan were required to remain consistent throughout the study. In the 10-week intervention phase of the study, participants received all five BCTs in a single, time-sensitive text message. The text message was delivered 30 minutes prior to the established walk time and signaled participants to begin walking when the context (i.e., day of the week, time of day) they pre-selected was encountered. Participants were asked to log both the start and the conclusion of their activity session by pressing a button on the Fitbit tracker. Within 30 minutes of the planned conclusion of the activity session, participants received a text message with a link to a secure survey. Here, they noted whether they engaged in the BCTs when prompted and completed an automaticity survey to identify how "automatic" their walking behavior had become. At the end of their trial, participants received a personalized summary of their step data over time, helping them learn more about their responses to the five BCTs. Whether the use of these five 'habit formation' BCTs in combination elicits a change in physical activity behavior among older adults will be reported. This study will inform the feasibility of a virtually delivered N-of-1 study design to effectively promote physical activity as a component of healthy aging.
Keywords: aging, exercise, habit, walking
Procedia PDF Downloads 140
993 Utilizing the Principal Component Analysis on Multispectral Aerial Imagery for Identification of Underlying Structures
Authors: Marcos Bosques-Perez, Walter Izquierdo, Harold Martin, Liangdon Deng, Josue Rodriguez, Thony Yan, Mercedes Cabrerizo, Armando Barreto, Naphtali Rishe, Malek Adjouadi
Abstract:
Aerial imagery is a powerful tool when it comes to analyzing temporal changes in ecosystems and extracting valuable information from the observed scene. It allows us to identify and assess various elements such as objects, structures, textures, waterways, and shadows. To extract meaningful information, multispectral cameras capture data across different wavelength bands of the electromagnetic spectrum. In this study, the collected multispectral aerial images were subjected to principal component analysis (PCA) to identify independent and uncorrelated components or features that extend beyond the visible spectrum captured in standard RGB images. The results demonstrate that these principal components contain unique characteristics specific to certain wavebands, enabling effective object identification and image segmentation.
Keywords: big data, image processing, multispectral, principal component analysis
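A sketch of band-wise PCA on a multispectral image: each pixel is treated as a sample and each spectral band as a feature, so the principal components are new, uncorrelated "bands". The cube shape and component count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

bands, height, width = 6, 512, 512
cube = np.random.rand(bands, height, width)     # stand-in for aerial imagery

X = cube.reshape(bands, -1).T                   # (pixels, bands) sample matrix
pca = PCA(n_components=3)
components = pca.fit_transform(X)               # uncorrelated per-pixel features
pc_images = components.T.reshape(3, height, width)  # back to image layout
print(pca.explained_variance_ratio_)            # variance carried by each PC
```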
Procedia PDF Downloads 178
992 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to the initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in applying data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rules space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of each. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
Keywords: supervised learning, classification, query expansion, association rules
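A toy sketch of the co-occurrence idea behind the approach: mine term associations from the collection and expand a query with its strongest associates by rule confidence. The documents and threshold are illustrative; the paper's generic basis of association rules and learned term selection are not reproduced here.

```python
from collections import Counter
from itertools import combinations

docs = [{"solar", "energy", "panel"}, {"solar", "energy", "storage"},
        {"wind", "energy", "turbine"}]

pair_counts, term_counts = Counter(), Counter()
for d in docs:
    term_counts.update(d)
    pair_counts.update(combinations(sorted(d), 2))  # co-occurring term pairs

def expand(query_term, min_conf=0.6):
    """Return terms t such that the rule query_term -> t has high confidence."""
    out = []
    for (a, b), n in pair_counts.items():
        if query_term in (a, b):
            other = b if a == query_term else a
            if n / term_counts[query_term] >= min_conf:   # rule confidence
                out.append(other)
    return out

print(expand("solar"))   # ['energy'] with these toy documents
```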
Procedia PDF Downloads 325
991 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification
Authors: Samiah Alammari, Nassim Ammour
Abstract:
When a massive number of tasks is provided successively to a deep learning model, good performance requires preserving the data of previous tasks in order to retrain the model for each upcoming classification task; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during the new task learning, and the second module learns to replicate the data of previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines HSI dataset. The results confirm the capability of the proposed method.
Keywords: continual learning, data reconstruction, remote sensing, hyperspectral image segmentation
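A schematic PyTorch sketch of the two-subnetwork idea: a classifier trained on the current task plus an autoencoder that learns to reconstruct the task's pixels so pseudo-samples can be replayed for later tasks. Layer sizes are assumptions, not the authors' design; only the band/class counts follow the Indian Pines setup (200 bands, 16 classes).

```python
import torch
import torch.nn as nn

N_BANDS, N_CLASSES, LATENT = 200, 16, 32

classifier = nn.Sequential(nn.Linear(N_BANDS, 128), nn.ReLU(),
                           nn.Linear(128, N_CLASSES))
encoder = nn.Sequential(nn.Linear(N_BANDS, LATENT), nn.ReLU())
decoder = nn.Sequential(nn.Linear(LATENT, N_BANDS))

def train_step(x, y, opt):
    opt.zero_grad()
    cls_loss = nn.functional.cross_entropy(classifier(x), y)  # discrimination
    recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)  # replay module
    (cls_loss + recon_loss).backward()
    opt.step()

params = (list(classifier.parameters()) + list(encoder.parameters())
          + list(decoder.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
train_step(torch.randn(8, N_BANDS), torch.randint(0, N_CLASSES, (8,)), opt)
# For later tasks, real new-task pixels are mixed with reconstructed
# pseudo-samples from the decoder to mitigate catastrophic forgetting.
```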
Procedia PDF Downloads 268
990 Using the Dokeos Platform for Industrial E-Learning Solution
Authors: Kherafa Abdennasser
Abstract:
The application of Information and Communication Technologies (ICT) to training led to the creation of a new reality called e-learning, described as the marriage of multimedia (sound, image, and text) and the internet (online diffusion, interactivity). Distance learning has become an important modality for training, achieved in particular through the setup of a distance learning platform. In our work, we use an open-source platform named Dokeos for the management of a distance training course on GPS, called e-GPS. The learner is followed throughout the training. In this system, trainers and learners communicate individually or in groups, while the administrator sets up and maintains the system.
Keywords: ICT, e-learning, learning platform, Dokeos, GPS
Procedia PDF Downloads 478
989 Rendering Religious References in English: Naguib Mahfouz in the Arabic as a Foreign Language Classroom
Authors: Shereen Yehia El Ezabi
Abstract:
The transition from the advanced to the superior level of Arabic proficiency is widely known to pose considerable challenges for English-speaking students of Arabic as a Foreign Language (AFL). Apart from the increasing complexity of the grammar at this juncture, together with the sprawling vocabulary, to name but two of those challenges, there is also the somewhat less studied hurdle along the way to superior-level proficiency, namely, the seeming opacity of many aspects of Arab/ic culture to such learners. This presentation tackles one specific dimension of such issues: religious references in literary texts. It illustrates how carefully constructed translation activities may be used to expand and deepen students' understanding and use of them. This is shown to be vital for making the leap to the desired competency, given that such elements, as reflected in customs, traditions, institutions, worldviews, and formulaic expressions, lie at the very core of Arabic culture and, as such, pervade all modes and levels of Arabic discourse. A short story from the collection “Stories from Our Alley”, by preeminent novelist Naguib Mahfouz, is selected for use in this context, being particularly replete with such religious references, of which religious expressions will form the focus of the presentation. As a miniature literary work, it provides an organic whole, so to speak, within which to explore with the class the most precise denotation, as well as the subtlest connotation, of each expression in an effort to reach the ‘best’ English rendering. The term ‘best’ refers to approximating the meaning in its full complexity from the source text, in this case Arabic, to the target text, English, according to the concept of equivalence in translation theory. The presentation will show how such a process generates the sort of thorough discussion and close text analysis which allows students to gain valuable insight into this central idiom of Arabic. A variety of translation methods will be highlighted, gleaned from the presenter’s extensive work with advanced/superior students in the Center for Arabic Study Abroad (CASA) program at the American University in Cairo. These begin with the literal rendering of expressions, with the purpose of reinforcing vocabulary learning and practicing the rules of derivational morphology as they form each word, since the larger context remains that of an AFL class, as opposed to a translation skills program. However, departures from the literal approach are subsequently explored by degrees, moving along the spectrum of freer functional and pragmatic translations in order to transmit the ‘real’ meaning in readable English to the target audience, no matter how culture- or religion-specific the expression, while remaining faithful to the original. Samples from students’ work pre- and post-discussion will be shared, demonstrating how class consensus is formed as to the final English rendering, proposed as the closest match to the Arabic and shown to be the result of the above activities. Finally, a few examples of translation work which students have gone on to publish will be shared to corroborate the effectiveness of this teaching practice.
Keywords: superior level proficiency in Arabic as a foreign language, teaching Arabic as a foreign language, teaching idiomatic expressions, translation in foreign language teaching
Procedia PDF Downloads 199
988 The Responsible Lending Principle in the Spanish Proposal of the Mortgage Credit Act
Authors: Noelia Collado-Rodriguez
Abstract:
The Mortgage Credit Directive 2014/17/EU should have been transposed by 21 March 2016. However, Spain not only missed the deadline but currently has just a preliminary draft of the so-called Mortgage Credit Act. Before we analyze the preliminary draft from the standpoint of the responsible lending principle, we should point out that this preliminary draft is not a consumer law statute. Throughout the text of the preliminary draft, we see no reference to the consumer, only references to the borrower. Furthermore, and more importantly, the application of this statute would not be, according to its text, circumscribed to borrowers who use the credit for a personal purpose. Instead, it seems that the preliminary draft aims to be one more of the rules of banking transparency that already exist in Spanish legislation. In this sense, we can also mention that the sanctions contained in the preliminary draft refer to these laws of banking ordination and oversight, to which the rules of banking transparency belong. This might be against the spirit of the Mortgage Credit Directive, which allows the extension of its scope to credit aimed at acquiring immovable property beyond residential property, but only where the borrower is a consumer, in accordance with the Directive. It is quite relevant that the prospective Spanish Mortgage Credit Act might not be a consumer protection statute, especially from the perspective of the responsible lending principle. The responsible lending principle is a consumer law principle based on the structural weakness of the consumer's position in the relationship with the creditor. Therefore, it cannot surprise that the Spanish preliminary draft does not state any of the pre-contractual conducts that express the responsible lending principle. We are referring to the lender's duty to provide adequate explanations; the consumer's suitability test; the lender's duty to assess the consumer's creditworthiness; the consultation of databases to perform the creditworthiness assessment; and, most importantly, the lender's prohibition on granting credit in case of a negative creditworthiness assessment. The preliminary draft merely entitles the Economy Ministry to enact provisions related to those topics. Thus, the duties and rules derived from the responsible lending principle included in the EU Directive will not have legal rank in Spain, being mere administrative regulations. To conclude, the two main questions that arise after reading the Spanish Mortgage Credit Act preliminary draft are, first, what kind of consequences might follow if the Mortgage Credit Act is ultimately not a consumer law statute, and, second, what the consequences might be for the responsible lending principle of being developed by administrative regulations instead of legislation.
Keywords: consumer credit, consumer protection, creditworthiness assessment, responsible lending
Procedia PDF Downloads 290
987 Some Considerations about the Theory of Spatial-Motor Thinking Applied to a Traditional Fife Band in Brazil
Authors: Murilo G. Mendes
Abstract:
This text presents part of the results of a Ph.D. thesis that used John Baily's theory and method, and their ethnographic application, in the context of the fife flutes of the Banda Cabaçal dos Irmãos Aniceto in the state of Ceará, northeast Brazil. John Baily is a British ethnomusicologist dedicated to studying the relationships between music, musical gesture, and embodied cognition. His methodology became a useful tool to highlight historical-social aspects present in the group's instrumental music. Remaining indigenous and illiterate, these musicians played and transmitted their music from generation to generation for almost two hundred years, without any nomenclature or systematization of the fingering performed on the flute. In other words, their music, free from any theorization, is learned, felt, perceived, and processed directly through hearing and through the relationship between the instrument's motor patterns and its sound result. For this reason, Baily's assumptions became fundamental to the analysis. As the author's methodology recommends, classes were held with the natives, providing technical musical learning and some important concepts. Then, transcriptions and analyses of musical aspects were made from patterns of movement on the instrument, incorporated through repetition and/or the intrinsic facility of the instrument. As a result, it was discovered how the group reconciled its indigenous origins with the demands of the public authorities and the interests of the local financial elite from the mid-twentieth century onward. The article is structured around the cultural context of the group, where local historical and social aspects influence its social and musical practices. Then, we present the methodological conceptions of John Baily and, finally, their application to the music of the Irmãos Aniceto. The conclusion points to the good results of identifying, through this methodology and analysis, approximations between discourse, historical-social factors, and musical text. Still, questions are raised about its application in other contexts.
Keywords: Banda Cabaçal dos Irmãos Aniceto, John Baily, pífano, spatial-motor thinking
Procedia PDF Downloads 137
986 Existential Feeling in Contemporary Chinese Novels: The Case of Yan Lianke
Authors: Thuy Hanh Nguyen Thi
Abstract:
Since the 1940s, existentialism has penetrated China and continued to profoundly influence contemporary Chinese literature. Using the methods of close reading and text analysis, this article analyzes the existential feeling in Yan Lianke's novels through various aspects: the Sisyphus sense, the rationalization of narrative, and the viewpoint of the dead. In addition to pointing out the characteristics of existential sensation in the writer's novels, the analysis also provides insight into the nature and depth of contemporary Chinese society.
Keywords: Yan Lianke, existentialism, the existential feeling, contemporary Chinese literature
Procedia PDF Downloads 141
985 To Allow or to Forbid: Investigating How Europeans Reason about Endorsing Rights to Minorities: A Vignette Methodology Based Cross-Cultural Study
Authors: Silvia Miele, Patrice Rusconi, Harriet Tenenbaum
Abstract:
An increasingly multi-ethnic Europe has been pushing citizens' boundaries on who should be entitled, and to what extent, to practise their own diversity. Indeed, according to a Standard Eurobarometer survey conducted in 2017, immigration is seen by Europeans as the most serious issue facing the EU, and a third of respondents reported that they do not feel comfortable interacting with migrants from outside the EU. Many of these come from Muslim countries; Muslims accounted for 4.9% of Europe's population in 2016, a figure projected to rise to 14% by 2050. Additionally, political debates have increasingly focused on Muslim immigrants, who are frequently portrayed as difficult to integrate, while nationalist parties across Europe have fostered the idea of insuperable cultural differences, creating an atmosphere of hostility. Using a 3 x 3 x 2 between-subjects design, we investigated how people reason about endorsing religious and non-religious rights for minorities. An online survey was administered via Qualtrics to university students in three countries (Italy, Spain, and the UK), presenting hypothetical scenarios through a vignette methodology. Each respondent was randomly allocated to one of three conditions: a Christian, Muslim, or non-religious (vegan) target. Each condition entailed three questions about children's self-determination rights to exercise some control over their own lives and three questions about children's nurturance rights of care and protection. Moreover, participants were required to elaborate on their answers via free-text entries, were asked about their contact with the three targets and the quality of that contact, and were asked to self-report religious, national, and ethnic identification. Answers were recorded on a Likert scale of 1-5, 1 being "not at all" and 5 being "very much". A two-way ANCOVA will be used to analyse answers to closed-ended questions, while free-text answers will be coded into categories based on Social Cognitive Domain Theory (moral, social-conventional, and psychological reasons), and the dichotomised data will be analysed via ANCOVAs. This study's findings aim to contribute to the implementation of educational interventions and speak to the introduction of governmental policies on human rights.
Keywords: children's rights, Europe, migration, minority
Procedia PDF Downloads 131
984 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation can be studied within the broader framework of constrained language, with simplification being one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices is captured by word-form entropy and POS-form entropy, and a comparison is made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, neither of which is available with traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for reliable comparison among studies on different language pairs. In terms of data, one established corpus (CLOB) and two self-compiled corpora are used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. More specifically, word-form entropy and POS-form entropy are calculated as indicators of lexical and syntactic complexity, and ANOVA tests are conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may indicate the constraints shared by and peculiar to each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
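A sketch of the word-form entropy measure: Shannon entropy over the distribution of word forms in a text, in bits. The sample text is illustrative; a real study would compute this per text across each corpus.

```python
import math
from collections import Counter

def word_form_entropy(tokens):
    """Shannon entropy H = -sum(p * log2 p) over word-form frequencies."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = "the cat sat on the mat and the dog sat on the rug".split()
print(f"{word_form_entropy(text):.3f} bits")
# Lower entropy = more repetitive, less diversified lexical choice,
# which is what the simplification hypothesis predicts for translations.
```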
Procedia PDF Downloads 94
983 DCT and Stream Ciphers for Improved Image Encryption Mechanism
Authors: T. R. Sharika, Ashwini Kumar, Kamal Bijlani
Abstract:
Encryption is the process of converting crucial information into a form unreadable to unauthorized persons. Image security is an important application of encryption that protects all types of images from cryptanalysis. A stream cipher is a fast symmetric-key algorithm that is used to convert plaintext to ciphertext. In this paper, we propose an image encryption algorithm using the Discrete Cosine Transform and stream ciphers that can improve image compression and enhance security. The paper also explains the use of a shuffling algorithm for further enhancing security.
Keywords: decryption, DCT, encryption, RC4 cipher, stream cipher
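A minimal sketch of the pipeline described above: a 2-D DCT for the transform stage, then RC4 (the stream cipher named in the keywords) over the quantized coefficient bytes. The key and the 8x8 block are illustrative; note that RC4 is considered weak by modern standards, and the paper's shuffling step is not shown.

```python
import numpy as np
from scipy.fft import dctn, idctn

def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))                       # key-scheduling algorithm
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # pseudo-random generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

img = np.random.randint(0, 256, (8, 8)).astype(float)   # stand-in image block
coeffs = np.round(dctn(img, norm="ortho")).astype(np.int16)
cipher = rc4(b"secret-key", coeffs.tobytes())
# Decryption: RC4 is symmetric, so apply it again, then invert the DCT
decoded = np.frombuffer(rc4(b"secret-key", cipher), dtype=np.int16)
restored = idctn(decoded.reshape(8, 8).astype(float), norm="ortho")
```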
Procedia PDF Downloads 363