Search results for: image encryption algorithms
933 Research and Implementation of Cross-domain Data Sharing System in Net-centric Environment
Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan
Abstract:
With the rapid development of network and communication technology, a great deal of data has been generated in the different domains of a network. These data show a trend of increasing scale and increasingly complex structure. Therefore, an effective and flexible cross-domain data-sharing system is needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three sub-systems. The data distribution sub-system provides data exchange services through publish-subscribe technology that supports asynchronous, many-to-many communication, which suits the needs of a dynamic, large-scale distributed computing environment. The access control sub-system adopts Attribute-Based Access Control (ABAC) technology to uniformly model data attributes such as subject, object, permission, and environment; it effectively monitors the activities of users accessing resources and ensures that legitimate users obtain the proper access rights within a valid time. The cross-domain access security negotiation sub-system automatically determines access rights between different security domains during the interactive disclosure of digital certificates and access control policies, using trust policy management and negotiation algorithms; this provides an effective means for establishing cross-domain trust relationships and access control in a distributed environment. The CDSS’s asynchronous, many-to-many, and loosely coupled communication features adapt well to data exchange and sharing in dynamic, distributed, and large-scale network environments. In future work, we will extend the CDSS to support mobile computing environments.
Keywords: data sharing, cross-domain, data exchange, publish-subscribe
Procedia PDF Downloads 124
932 The Imminent Other in Anna Deavere Smith’s Performance
Authors: Joy Shihyi Huang
Abstract:
This paper discusses the concept of community in Anna Deavere Smith’s performance, one that challenges and explores existing notions of justice and the other. In contrast to unwavering assumptions of essentialism that have helped to propel a discourse on moral agency within the black community, Smith employs postmodern ideas in which the theatrical attributes of doubling and repetition are conceptualized as part of what Marvin Carlson termed a ‘memory machine.’ Her dismissal of the need for linear time, such as that regulated by Aristotle’s Poetics and its concomitant ethics, values, and emotions as a primary ontological and epistemological construct produced by existing African American historiography, demonstrates an urgency to produce an alternative communal self that overrides the metanarratives in which African Americans’ lives are contained and sublated by specific historical confines. Drawing on Emmanuel Levinas’ theories in ethics, specifically his notions of ‘proximity’ and ‘the third,’ the paper argues that Smith enacts a new model of ethics by launching an acting method that eliminates the boundary between self and other. Defying psychological realism, Smith conceptualizes an approach to acting that surpasses the mere mimetic value of invoking a ‘likeness’ of an actor to a character, which, as such, resembles the mere attribution of various racial or sexual attributes in identity politics. Such acting, she contends, reduces the other to a representation of, at best, an ultimate rendering of me/my experience. She instead appreciates ‘unlikeness,’ recognizing the unavoidable actor/character gap as a power that humbles the self, whose irreversible journey to the other carves out its own image.
Keywords: Anna Deavere Smith, Emmanuel Levinas, other, performance
Procedia PDF Downloads 155
931 Personality Profiles, Emotional Disturbance and Health-Related Quality of Life in Patients with Epilepsy
Authors: Usha Barahmand, Ruhollah Heydari Sheikh Ahmad, Sara Alaie Khoraem
Abstract:
Introduction: The association of epilepsy with several psychological disorders and reduced quality of life has long been recognized. The present study aimed at comparing the personality profiles, quality of life, and symptomatology of anxiety and depression in patients with epilepsy and healthy controls. Materials and Methods: Forty-seven patients (29 men and 18 women) with diagnosed epilepsy participated in this study. Forty-seven healthy controls matched to the patients in age and gender were also recruited. The participants’ personality and psychological profiles were assessed using the Depression, Anxiety, and Stress Scale (DASS-21), the Short-Form Health Survey (SF-36), and the HEXACO Personality Inventory (HEXACO-PI). Scoring algorithms were applied to the SF-36 to produce the physical and mental component scores (PCS and MCS). Results: There were statistically significant differences between patients and controls in the total SF-36 score and in the anxiety, depression, and stress scores of the DASS-21. Anxiety, stress, and depression scores correlated significantly and inversely with the PCS and MCS. Data analysis showed that females had higher depression scores than males among both patients and controls, while males in both groups scored higher on stress. Patients’ personality scores also differed from those of controls: patients scored higher on emotionality and lower on agreeableness and extraversion. Patients also scored lower on indices of quality of life. Regression analysis revealed that emotionality, anxiety, stress, and MCS accounted for a significant proportion of the variance in the severity of epileptic seizures. Conclusion: Stressful situations and psychological conditions, as well as the personality trait of neuroticism, were related to the occurrence of recurrent epileptic seizures.
Keywords: anxiety, depression, epilepsy, neuroticism, personality, quality of life, stress
Procedia PDF Downloads 370
930 Importance of Developing a Decision Support System for Diagnosis of Glaucoma
Authors: Murat Durucu
Abstract:
Glaucoma is a cause of irreversible blindness; early diagnosis and appropriate intervention can preserve a patient's vision for longer. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure around the eyes damages the optic nerve and deteriorates vision. The disease spans different levels of severity, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP), and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. With its better accuracy and faster imaging, OCT has become the method most commonly used by experts. Despite the precision and speed of OCT and HRT imaging, difficulties and mistakes still occur in the diagnosis of glaucoma, especially in its early stages, and it is difficult for doctors to obtain objective results during diagnosis and staging. It therefore seems very important to develop an objective decision support system for diagnosing and grading glaucoma. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for doctors' use: pattern-recognition-based computer software that would help doctors make an objective evaluation of their patients. After the development and evaluation processes, the system is planned to serve doctors in different hospitals.
Keywords: decision support system, glaucoma, image processing, pattern recognition
Procedia PDF Downloads 302
929 Applying Multiplicative Weight Update to Skin Cancer Classifiers
Authors: Animish Jain
Abstract:
This study deals with using multiplicative weight update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the multiplicative weight update system. These models are trained on pictures of skin cancer from the ISIC Archive to learn patterns for labeling unseen scans as either benign or malignant. The models feed a multiplicative weight update algorithm, which weights each model's guess according to its precision and accuracy over successive predictions; the weighted guesses are then combined to produce the final prediction. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the multiplicative weight update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%; the multiplicative weight update algorithm achieved an accuracy of 72.27%. The study therefore concluded that there was a significant difference in accuracy between the three models and the multiplicative weight update system, and that a CNN model would be a better option for this problem. One possible explanation is that multiplicative weight update is not effective in a binary setting with only two possible classifications. In a categorical setting with multiple classes and groupings, a multiplicative weight update system might become more proficient, as it takes into account the strengths of multiple models to classify images into many categories rather than only two, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer
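As an illustration of the weighted-majority form of multiplicative weight update described above, the following minimal Python sketch combines the binary guesses of several models; the penalty rate eta, the 0/1 label encoding, and the majority-vote rule are illustrative assumptions rather than details taken from the study.

```python
import numpy as np

def mwu_ensemble(predictions, labels, eta=0.5):
    """predictions: (n_models, n_samples) array of 0/1 guesses;
    labels: (n_samples,) array of 0/1 ground truth."""
    n_models, n_samples = predictions.shape
    weights = np.ones(n_models)            # every model starts equally trusted
    combined = np.empty(n_samples, dtype=int)
    for t in range(n_samples):
        votes = predictions[:, t]
        # Weighted majority vote of the current guesses.
        combined[t] = int(weights @ votes >= weights.sum() / 2)
        # Multiplicatively shrink the weight of every model that was wrong.
        weights[votes != labels[t]] *= (1.0 - eta)
    return combined, weights
```

With three rows of model guesses and a label vector, the returned weights show how much trust each model retains after the run; models that are wrong more often end up with geometrically smaller weights.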
Procedia PDF Downloads 79
928 Terrorism: Impact on Nigeria’s Foreign Policy, 1999-2015
Authors: Omolaja Akolade Oluwaseunfunmi
Abstract:
This study seeks to ascertain the origin and history of terrorism in Nigeria, determine its causes, examine Nigeria’s foreign policies from 1999 to 2015, evaluate how terrorist groups like Boko Haram and the Indigenous People of Biafra (IPOB) have affected Nigeria’s foreign policies in the international arena, ascertain the measures taken by the government in tackling terrorist acts in Nigeria, and give recommendations on how to tackle this menace. The methodology used in this research is the analytical method, and the study derives its data from both primary and secondary sources. Findings from fieldwork showed that terrorism has become one of the most important fundamentals of Nigeria’s foreign policies and relations; respondents interviewed held that terrorism is a menace and must be adequately tackled in order to achieve Nigeria’s foreign policy goals. Furthermore, results revealed that the fight against the scourge has increasingly gained legitimacy and justification among the international community, particularly as many countries consider it their international obligation to support the global movement to ameliorate or eliminate the menace. In conclusion, this research recommends, among other things, that the Nigerian government ensure the provision of a good life for its citizens, that the inter-connectivity of terrorist organizations be defeated, that the government undertake a foreign policy drive designed to rebuild its image in the international environment, and that the promotion of peace education among various governments, religious institutions, the private sector, and civil society groups be encouraged.
Keywords: foreign policy, Boko Haram, Movement for the Emancipation of the Niger Delta (MEND), terrorism
Procedia PDF Downloads 26
927 Evolving Mango Metaphor In Diaspora Literature: Maintaining Immigrant Identity Through Foodways
Authors: Constance Kirker
Abstract:
This paper examines the shared use of mango references as a culinary metaphor powerful in maintaining immigrant identity in the works of diaspora authors from a variety of regions, including South Asia, the Caribbean, and Africa, and across a variety of genres, including novels, culinary memoirs, and children’s books. There has been past criticism of so-called sari-mango literature, suggesting that the image of the mango is a clichéd, even “lazy,” attempt to “exoticize” and sentimentalize South Asia in particular. A broader review across national boundaries reveals that diaspora authors, including those beyond South Asia, write nostalgically about the mango as much for the messy, “full-body” tactile experience of eating one as for the “exotic” quality of the fruit representing the “otherness” of their home country. Many of the narratives detail universal childhood food experiences that are more shared than exotic, such as the desire to subvert adult societal rules of neatness and get very messy, or memories of small but memorable childhood transgressions such as stealing mangoes from a neighbor’s tree. In recent years, food technology has evolved, and mangoes have become more familiar and readily available in Europe and America, from smoothies and baby food to dried fruit snacks. The meaning associated with the imagery of mangoes in diaspora literature evolves as well, for both writers and readers, and authors no longer have to heed Salman Rushdie’s command, “There must be no tropical fruits in the title. No mangoes.”
Keywords: identity, immigrant diaspora, culinary metaphor, food studies
Procedia PDF Downloads 111
926 Numerical and Experimental Analysis of Stiffened Aluminum Panels under Compression
Authors: Ismail Cengiz, Faruk Elaldi
Abstract:
Within the scope of the study presented in this paper, the load-carrying capacity and buckling behavior of a stiffened aluminum panel designed with both the current ‘buckle-resistant’ design approach and the ‘post-buckling’ design approach were investigated experimentally and numerically. The test specimen, stabilized by Z-type stiffeners and manufactured from aluminum 2024-T3 clad material, was tested under compression load. Buckling behavior was observed by means of three-dimensional digital image correlation (DIC) and strain gauge pairs. The experimental study was followed by the development of an efficient and reliable finite element model, whose ability to predict the behavior of the stiffened panel was verified by comparing experimental and numerical results in terms of load-shortening curves, strain-load curves, and buckling mode shapes. While the finite element model was being constructed, non-linear behaviors associated with material and geometry were considered. Finally, the applicability of aluminum stiffened panels in airframe design, as compared with composite structures, was evaluated through the concept of ‘structural efficiency’. This study reveals that a considerable amount of weight saving could be gained if the ‘post-buckling’ design concept is preferred to the conventionally used ‘buckle-resistant’ design concept in the aircraft industry, without sacrificing structural integrity under the load spectrum.
Keywords: post-buckling, stiffened panel, non-linear finite element method, aluminum, structural efficiency
Procedia PDF Downloads 148
925 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, Indiana University dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
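For the text side of such a pipeline, a minimal sketch is given below: TF-IDF features extracted from report text feed a Random Forest classifier. The example reports, urgency labels, and hyperparameters are hypothetical, and the study's fusion with image-derived features is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical report impressions with urgency labels (0 = routine, 1 = urgent).
reports = [
    "no acute cardiopulmonary abnormality",
    "large right pleural effusion with compressive atelectasis",
    "clear lungs, normal cardiac silhouette",
    "pneumothorax of the left apex",
]
labels = [0, 1, 0, 1]

# TF-IDF features from the report text feed a Random Forest classifier.
pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                     RandomForestClassifier(n_estimators=200, random_state=0))
pipe.fit(reports, labels)
print(pipe.predict(["small left pleural effusion noted"]))
```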
Procedia PDF Downloads 45
924 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools
Abstract:
Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, which deals with the various crimes committed in the digital environment, has become an important research topic. It is within the scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, and DVDs, and to report whether the evidence contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the evidence that match specified criteria and presenting them to the investigator (e.g., text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. Moreover, because the process depends on the examiner's experience, the overall result may vary between cases, and relevant elements may be overlooked. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
Keywords: block matching, digital evidence, hash list, evaluation of digital evidence
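A minimal sketch of hash-based block matching follows, assuming a fixed 4 KiB block size and a precomputed list of SHA-256 hashes of known crime-related content; both choices are illustrative rather than prescribed by the abstract.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size in bytes

def block_hashes(path, block_size=BLOCK_SIZE):
    """Yield (offset, sha256) for every fixed-size block in an evidence image."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield offset, hashlib.sha256(block).hexdigest()
            offset += len(block)

def match_blocks(image_path, known_hashes):
    """Return offsets of blocks whose hash appears in the known-bad hash list."""
    known = set(known_hashes)
    return [off for off, h in block_hashes(image_path) if h in known]
```

Because only hash comparisons are performed, scanning scales linearly with the image size and needs no human interpretation until a match is reported.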
Procedia PDF Downloads 255
923 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon
Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn
Abstract:
The dynamic Land Use and Land Cover (LULC) changes in Yangon have generally accompanied improvements in human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to establish exactly how environmental factors influence the LULC situation at various scales, because the natural environment is composed of non-homogeneous surface features, so the satellite data also contain mixed pixels. The main objective of this study is to calculate the accuracy of change detection of LULC changes using Support Vector Machines (SVMs). The main data for this research were satellite images from 1996, 2006, and 2015. Change detection statistics were computed to compile a detailed tabulation of the changes between pairs of classification images, and the SVM process was applied with a soft approach at the allocation as well as the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated areas decreased (by an average total of 29% from 1996 to 2015) because of their conversion to built-up area, which more than doubled (an average total increase of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping.
Keywords: land use and land cover change, change detection, image processing, support vector machines
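A minimal sketch of per-pixel SVM classification followed by a change (transition) matrix between two classified dates is shown below; the band values, class labels, and random scenes are made up for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training pixels: rows of band reflectances with ground-truth labels.
X_train = np.array([[0.1, 0.4, 0.3], [0.6, 0.5, 0.2], [0.2, 0.2, 0.7]])
y_train = np.array([0, 1, 2])  # 0 = vegetation, 1 = built-up, 2 = water

clf = SVC(kernel="rbf").fit(X_train, y_train)

def classify(scene):
    """scene: (rows, cols, bands) array -> (rows, cols) class map."""
    flat = scene.reshape(-1, scene.shape[-1])
    return clf.predict(flat).reshape(scene.shape[:2])

def change_matrix(before, after, n_classes=3):
    """Tabulate pixel counts for every from-class/to-class transition."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (before.ravel(), after.ravel()), 1)
    return m

rng = np.random.default_rng(1)
scene_1996 = rng.random((4, 4, 3))   # stand-ins for real multispectral scenes
scene_2015 = rng.random((4, 4, 3))
print(change_matrix(classify(scene_1996), classify(scene_2015)))
```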
Procedia PDF Downloads 139
922 Electrospun Alginate Nanofibers Containing Spirulina Extract Double-Layered with Polycaprolactone Nanofibers
Authors: Seon Yeong Byeon, Hwa Sung Shin
Abstract:
Nanofibrous sheets are of interest in the beauty industry due to their moisturizing properties, adhesion to skin, and delivery of nutrient materials. The benefit and function of cosmetic products should not be considered apart from their safety; thus, a non-toxic manufacturing process is ideal when fabricating such products. In this study, we developed cosmetic patches consisting of alginate and Spirulina extract, a marine resource with antibacterial and antioxidant effects, without the addition of harmful cross-linkers. The patches obtained their structural stability by layer-upon-layer electrospinning of an alginate layer onto a previously spread polycaprolactone (PCL) layer instead of a crosslinking method. The morphological characteristics, release of Spirulina extract, water absorption, skin adhesiveness, and cytotoxicity of the double-layered patches were assessed. Scanning electron microscopy (SEM) images showed that the addition of Spirulina extract made the fiber diameter of the alginate layers thinner. Impregnation with Spirulina extract increased the patches’ hydrophilicity, moisture absorption ability, and skin adhesive ability. In addition, wetting the pre-dried patches released the Spirulina extract within 30 min. The patches showed no cytotoxicity in a human keratinocyte cell-based MTT assay and instead showed increased cell viability. All the results indicate that the bioactive and hydro-adhesive double-layered patches are highly applicable as bioproducts for personal skin care in the trend of ‘A mask pack a day’.
Keywords: alginate, cosmetic patch, electrospun nanofiber, polycaprolactone, Spirulina extract
Procedia PDF Downloads 347
921 Behavior, Temperament and Food Intake of Urban Indian Adolescents
Authors: Preeti Khanna, Bani T. Aeri
Abstract:
Background: Recent studies have indicated challenges that hamper the health and wellbeing of a vast majority of adolescents in developing countries. Many modifiable factors related to food intake among adolescents, such as behavior and temperament, have not been adequately explored. The aim of the proposed research is to study the impact of behavior and temperament on the food intake and diet quality of adolescents. Objectives: In the present study, data on the dietary behavior and anthropometry of adolescent boys and girls (aged 13-16 years) studying in public schools of Delhi will be gathered to ascertain the quality of their diet and to study the effect of behavior and temperament on it. Methods: In total, 400 adolescents will participate in this cross-sectional study. The weight and height of the adolescents will be measured, and BMI will be calculated. Information will be obtained on their socio-demographic profile and on various factors influencing their food choices and diet quality, such as body image perception, behavior, temperament, locus of control, and parental influence. Expected results: Several direct effects of adolescent traits and behavior on food intake will be observed, and maturational patterns and gender differences in behavior traits will be assessed. By profiling behavior and temperament traits, we will gain a better understanding of the impact of these factors on weight and eating behaviors in overweight/obese or even underweight adolescents. Conclusions: The proposed study will highlight the association of behavioral factors with the nutritional status of adolescents. It will also serve as a strategic approach for obesity prevention and health management policies designed for adolescents.
Keywords: behaviour, temperament, food intake, adolescents
Procedia PDF Downloads 243
920 Assimilating Remote Sensing Data Into Crop Models: A Global Systematic Review
Authors: Luleka Dlamini, Olivier Crespo, Jos van Dam
Abstract:
Accurately estimating crop growth and yield is pivotal for timely, sustainable agricultural management and for ensuring food security. Crop models and remote sensing can complement each other and, when combined, form a robust analysis tool to improve crop growth and yield estimations. This study thus aims to systematically evaluate how research that exclusively focuses on assimilating remote sensing (RS) data into crop models varies among countries, crops, data assimilation methods, and farming conditions. A strict search string was applied in the Scopus and Web of Science databases, and 497 potential publications were obtained. After screening for relevance with predefined inclusion/exclusion criteria, 123 publications were considered in the final review. Results indicate that over 81% of the studies were conducted in countries associated with high socio-economic and technological advancement, mainly China, the United States of America, France, Germany, and Italy. Many of these studies integrated MODIS or Landsat data into WOFOST to improve crop growth and yield estimation of staple crops at the field and regional scales. Most studies use recalibration or updating methods alongside various algorithms to assimilate remotely sensed leaf area index into crop models. However, these methods cannot account for the uncertainties in remote sensing observations and in the crop model itself. Over 85% of the studies were based on commercial and irrigated farming systems. Despite great global interest in data assimilation into crop models, limited research has been conducted in resource- and data-limited regions like Africa, where we foresee great potential for such applications. We hence encourage facilitating and expanding the use of this approach, from which developing farming communities could benefit.
Keywords: crop models, remote sensing, data assimilation, crop yield estimation
Procedia PDF Downloads 131
918 Static Application Security Testing Approach for Non-Standard Smart Contracts
Authors: Antonio Horta, Renato Marinho, Raimir Holanda
Abstract:
Considered an evolution of the blockchain, the Ethereum platform, besides allowing transactions of its cryptocurrency named Ether, allows the programming of decentralised applications (DApps) and smart contracts. However, this functionality has raised new types of threats, and the exploitation of smart contract vulnerabilities has caused companies to experience big losses. This research intends to determine the number of contracts that are at risk of being drained. Through a deep investigation, more than two hundred thousand smart contracts currently available on the Ethereum platform were scanned, and the amount of money at risk was estimated. The experiment was based on a query run on Google BigQuery in July 2022, which returned 50,707,133 contracts published on the Ethereum platform. After applying the filtering criteria, the experiment yielded 430,584 smart contracts to download and analyse. The filtering criteria consisted of filtering out ERC20 and ERC721 contracts, contracts without transactions, and contracts without balance. Of these 430,584 selected smart contracts, only 268,103 had source code published on Etherscan; moreover, we discovered, using a hashing process, that there was contract duplication. After removing the duplicated contracts, the process ended up with 20,417 source codes, which were analysed using the open-source SAST tool SmartBugs with the Oyente and Securify algorithms. In the end, nearly $100,000 was at risk of being drained from the potentially vulnerable smart contracts. It is important to note that the tools used in this study may generate false positives, which may affect the count of vulnerable contracts. To address this point, our next step in this research is to develop an application that tests each contract in a parallel environment to verify the vulnerability. Finally, this study aims to alert users and companies to the risk of not properly creating and analysing their smart contracts before publishing them on the platform. Like any other application, smart contracts are at risk of having vulnerabilities which, in this case, may result in direct financial losses.
Keywords: blockchain, reentrancy, static application security testing, smart contracts
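A minimal sketch of the hashing-based de-duplication step described above, which keeps one copy of every byte-identical contract source; the directory layout and .sol extension are illustrative assumptions.

```python
import hashlib
import pathlib

# Hypothetical folder of downloaded contract sources.
unique = {}
for path in pathlib.Path("contracts").glob("*.sol"):
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    unique.setdefault(digest, path)   # keep the first file seen per hash

print(f"{len(unique)} unique contract sources to analyse")
```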
Procedia PDF Downloads 88
917 From Dissection to Diagnosis: Integrating Radiology into Anatomy Labs for Medical Students
Authors: Julia Wimmers-Klick
Abstract:
At the University of British Columbia's Faculty of Medicine in Canada, anatomy has traditionally been taught through a combination of lectures and dissection labs in the first two years, with radiology taught separately through lectures and online modules. However, this separation may leave students underprepared for medical practice, as medical imaging is essential for diagnosing anatomical and pathological conditions. To address this, a pilot project was initiated to integrate radiological imaging into the anatomy dissection labs from day one of medical school. The incorporated radiological images corresponded to the current dissection areas. Additional stations, tailored to the specific content being covered, were added within the lab. These stations focused on bones and quiz questions, along with light-box exercises using radiographs, CT scans, and MRIs provided by the radiology department. The images used were free of pathologies; examples will be presented in the poster. Feedback from short interviews with students and instructors has been positive, particularly among second-year students, who appreciated the integration compared to their first-year experience. This low-budget approach was easy to implement but faced challenges, as the lab instructors were not radiologists and occasionally struggled to answer students' questions. Instructors expressed a desire for basic training or a refresher course in radiology image reading, particularly focused on identifying healthy landmarks. Overall, all participants agreed that integrating radiology with anatomy reinforces learning during dissection, enhancing students' understanding and preparation for clinical practice.
Keywords: quality improvement, radiology education, anatomy education, integration
Procedia PDF Downloads 11
916 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes. This task has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this method avoids the complex problem of form and structure in different classes of organisms. The method is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first, called transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second, called approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, which is realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi-Library Wavelet Neural Networks (MLWNN)
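A minimal sketch of the transformation phase follows: EIIP codification of a barcode sequence, then a Fourier transform and power spectrum. The wavelet-network approximation and hierarchical classification phases are omitted, and the sequence is illustrative; the EIIP values used are the commonly published ones.

```python
import numpy as np

# Commonly published EIIP values for the four nucleotides.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(sequence):
    """Map a DNA barcode to its EIIP signal and return the FFT power spectrum."""
    signal = np.array([EIIP[base] for base in sequence.upper()])
    spectrum = np.fft.fft(signal - signal.mean())   # remove DC component
    return np.abs(spectrum) ** 2                    # power at each frequency bin

ps = power_spectrum("ATGCGTAACCGT")   # illustrative barcode fragment
print(ps)
```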
Procedia PDF Downloads 318
915 Stigma Associated with Invisible Disabilities and Its Effect on Intended Disclosure in the Workplace
Authors: Jessica Lynne Hicksted
Abstract:
Disability discrimination is a long-standing issue that, despite protections, continues to result in unemployment, underemployment, and lack of advancement for disabled persons. Visible stigma is researched substantially; however, less is known about the impact of stigma associated with identities that can be concealed. Although researchers have investigated this issue, there is currently no tool to measure this phenomenon. The purpose of this quantitative study was to create and validate a new tool to measure stigma associated with invisible disabilities. The study is grounded in Roberts' conceptual model of professional image construction, integrating social identity, impression management, and organizational behavior; Meisenbach's stigma management communication theory, which addresses the vulnerabilities and resilience to stigma communication by focusing on how individuals encounter and react to perceived stigmas; and Kelley and Michela's causal attribution theory. Participants included 1,412 adults in the United States, 18 years or older, who were currently employed or had been employed within the last 5 years. Confirmatory factor analysis of the new Workplace Invisible Disabilities Experience scale showed excellent fit of the factor structure to the data, χ²/df = 1.855, CFI = .955, RMSEA = .045, p = .0001. The scale has three subscales, Ableism, Advocacy, and Acceptance, with excellent internal consistency reliability. The total score, Advocacy, and Acceptance were associated with intention to disclose. Implications for positive social change include helping organizations understand the extent of invisible disability stigma, which can help improve workplace performance and satisfaction.
Keywords: invisible disabilities, accommodations, acceptance, social change, workplace inclusion
Procedia PDF Downloads 70
914 Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling
Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo
Abstract:
Hydrological modelling plays a crucial role in the planning and management of water resources, most especially in water-stressed regions where the need to effectively manage the available water resources is of critical importance. However, due to the complex, nonlinear, and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters is extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered one that has progressed sluggishly over the past decades, largely as a result of the degree of uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs, and applications of a promising evolutionary computation modelling technique – genetic programming (GP). It examines the specific characteristics of the technique which make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling, and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed. In all, a case is made for the full embrace of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.
Keywords: computational modelling, evolutionary algorithms, genetic programming, hydrological modelling
Procedia PDF Downloads 298
913 Analysis of a IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged, which can further complicate visual diagnosis and greatly delay disease detection. In this context, deep learning methods have arisen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model's ability to generalize. The model's performance in detecting R-peaks in clean and noisy ECGs is presented: it is able to detect R-peaks in the presence of various types of noise and on data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
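As a small illustration of how such a detector is typically scored, the sketch below matches detected R-peak locations to reference annotations within a tolerance window; the 50 ms tolerance and the sample data are assumed conventions, not values from the study.

```python
import numpy as np

def rpeak_scores(detected, annotated, fs, tol_s=0.05):
    """detected/annotated: non-empty arrays of R-peak sample indices; fs: Hz."""
    tol = int(tol_s * fs)                        # tolerance window in samples
    hits = sum(np.any(np.abs(annotated - d) <= tol) for d in detected)
    sensitivity = hits / len(annotated)          # fraction of true peaks found
    precision = hits / len(detected)             # fraction of detections correct
    return sensitivity, precision

# Illustrative indices at a 360 Hz sampling rate.
sens, prec = rpeak_scores(np.array([100, 460, 822]),
                          np.array([102, 458, 820]), fs=360)
print(sens, prec)
```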
Procedia PDF Downloads 186
912 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach
Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip
Abstract:
The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students' learning habits and styles enhances their understanding of the students' learning backgrounds and allows teachers to provide better support; it therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students' data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of students at risk of falling behind in their studies, based on the students' previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the Minimum Spanning Tree method and consider further study of the topological properties of these networks using social network analysis tools. Within the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select appropriate subsets of features from the students' data, yielding effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by identifying possible under-performers at the beginning of the first semester and giving them special attention, in order to support their learning process and improve their learning outcomes.
Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method
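A minimal sketch of the factor-network step: a correlation-based distance matrix over student factors reduced to a Minimum Spanning Tree. The random data and the sqrt(2(1 - r)) distance are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))      # rows = students, columns = factors

corr = np.corrcoef(data, rowvar=False)
# Correlation-based distance: highly correlated factors sit close together.
dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))
np.fill_diagonal(dist, 0.0)
mst = minimum_spanning_tree(dist)     # sparse matrix keeping n-1 strongest links
print(mst.toarray())
```

The resulting tree can then be handed to social network analysis tools to study its topological properties, as the abstract proposes.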
Procedia PDF Downloads 129
911 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer
Authors: Mahya Naghipoor
Abstract:
Purpose: Non-small cell lung cancer (NSCLC) is the most common type of lung cancer, and epidermal growth factor receptor (EGFR) mutation is its main cause. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimal invasiveness. Semantic analyses of qualitative CT features are based on visual evaluation by a radiologist; however, the naked eye may not be able to assess all image features. Radiomics, on the other hand, provides the opportunity for quantitative analyses of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: The keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUC values of ROC analyses for semantic and radiomics features were compared. Results: The reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema, and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD), and wavelet), the AUC values were 50%-86%. Conclusions: The accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
Keywords: lung cancer, radiomics, computer tomography, mutation
Procedia PDF Downloads 167
910 Design of an Acoustic Imaging Sensor Array for Mobile Robots
Authors: Dibyendu Roy, V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta
Abstract:
Imaging of underwater objects is primarily conducted by acoustic imagery due to the severe attenuation of electromagnetic waves in water. Underwater acoustic imagery has a varied range of significant applications, such as side-scan sonar and mine-hunting sonar, and also finds utility in other domains such as the imaging of body tissues via ultrasonography and the non-destructive testing of objects. In this paper, we explore the feasibility of using active acoustic imagery in air and simulate phased array beamforming techniques available in the literature for various array designs, in order to achieve a suitable acoustic sensor array design for a portable mobile robot that can detect the presence or absence of anomalous objects in a room. Multi-path reflection effects, especially in enclosed rooms, and environmental noise factors are currently not simulated and will be dealt with during the experimental phase. The related hardware is designed under the same feasibility criterion, namely that the developed system must be deployable on a portable mobile robot. There is a trade-off between image resolution and range that depends on the array size, the number of elements, and the imaging frequency, so the design has to be simulated iteratively to achieve the desired acoustic sensor array. The designed acoustic imaging array system is to be mounted on a portable mobile robot and is targeted for use in surveillance missions, providing intruder alerts and imaging objects in dark and smoky scenarios where conventional optics-based systems do not function well.
Keywords: acoustic sensor array, acoustic imagery, anomaly detection, phased array beamforming
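A minimal sketch of delay-and-sum (phase-shift) beamforming for a uniform linear array, the simplest of the phased-array techniques mentioned above; the element count, spacing, and 40 kHz imaging frequency are illustrative assumptions.

```python
import numpy as np

c = 343.0          # speed of sound in air, m/s
f = 40e3           # assumed imaging frequency, Hz
n = 8              # number of array elements
d = c / f / 2      # half-wavelength spacing to avoid grating lobes

def array_response(steer_deg, look_deg):
    """Normalized array gain when steered to steer_deg for a source at look_deg."""
    k = 2 * np.pi * f / c                 # wavenumber
    pos = np.arange(n) * d                # element positions along the line
    weights = np.exp(-1j * k * pos * np.sin(np.radians(steer_deg)))
    signal = np.exp(1j * k * pos * np.sin(np.radians(look_deg)))
    return np.abs(weights @ signal) / n

# Beam pattern: sweep source angles with the beam steered to 20 degrees.
pattern = [array_response(20, a) for a in range(-90, 91)]
print(max(pattern))   # peak gain occurs at the steering angle
```

Sweeping the steering angle over such a pattern is one way to trade off mainlobe width (resolution) against array size and frequency, which is the iterative simulation the abstract describes.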
Procedia PDF Downloads 409
909 Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique
Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong
Abstract:
Glaucoma is a disease that causes visual loss in adults by damaging the optic nerve; its overall pathophysiology is still not fully understood, and vasculopathy may be one of the possible causes of the nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included; participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. From our findings, the mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without glaucoma (0.34 vs. 0.32, p < 0.05 for the inferotemporal vein; 0.33 vs. 0.30, p < 0.01 for the inferotemporal artery; 0.34 vs. 0.31, p < 0.01 for the superotemporal vein; and 0.33 vs. 0.30, p < 0.05 for the superotemporal artery). From these results, an increase in the CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma.
Keywords: glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection
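One possible reading of the measurement, sketched below: on an intensity profile sampled across a vessel, the strongest outer intensity transitions give the vessel diameter and the strongest inner transitions bound the bright central reflex. This gradient-extrema rule is an illustrative simplification, not the paper's exact edge detection.

```python
import numpy as np

def crr_from_profile(profile):
    """profile: 1-D grayscale intensities along a line crossing one vessel."""
    g = np.gradient(np.asarray(profile, dtype=float))
    left_wall = int(np.argmin(g))    # strongest bright-to-dark step (outer edge)
    right_wall = int(np.argmax(g))   # strongest dark-to-bright step (outer edge)
    inner = g[left_wall + 2:right_wall - 1]               # search inside the vessel
    reflex_left = left_wall + 2 + int(np.argmax(inner))   # reflex onset
    reflex_right = left_wall + 2 + int(np.argmin(inner))  # reflex offset
    return (reflex_right - reflex_left) / (right_wall - left_wall)

# Synthetic cross-section: bright retina, dark vessel, bright central reflex.
profile = [200] * 10 + [60] * 6 + [150] * 4 + [60] * 6 + [200] * 10
print(crr_from_profile(profile))   # ~0.25 for this synthetic vessel
```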
Procedia PDF Downloads 325
908 Enabling Oral Communication and Accelerating Recovery: The Creation of a Novel Low-Cost Electroencephalography-Based Brain-Computer Interface for the Differently Abled
Authors: Rishabh Ambavanekar
Abstract:
Expressive aphasia (EA) is an oral disability, common among stroke victims, in which Broca's area of the brain is damaged, interfering with verbal communication abilities. EA currently has no dedicated technological solution, and the viable options that do exist are inefficient or available only to the affluent. This prompts the need for an affordable, innovative solution to facilitate recovery and assist in speech generation. This project proposes a novel concept: using a wearable, low-cost electroencephalography (EEG) device-based brain-computer interface (BCI) to translate a user's inner dialogue into words. A low-cost EEG device was developed and found to be 10 to 100 times less expensive than any current EEG device on the market. As part of the BCI, a machine learning (ML) model was developed and trained using the EEG data. Two stages of testing were conducted to analyze the effectiveness of the device: a proof-of-concept test and a final solution test. The proof-of-concept test demonstrated an average accuracy of above 90%, and the final solution test demonstrated an average accuracy of above 75%. These two successful tests were used as a basis to demonstrate the viability of BCI research in developing lower-cost verbal communication devices. Additionally, the device proved not only to enable users to communicate verbally but also has the potential to assist in accelerated recovery from the disorder.
Keywords: neurotechnology, brain-computer interface, neuroscience, human-machine interface, BCI, HMI, aphasia, verbal disability, stroke, low-cost, machine learning, ML, image recognition, EEG, signal analysis
Procedia PDF Downloads 119
907 Physical Characteristics of Locally Composts Produced in Saudi Arabia and the Need for Regulations
Authors: Ahmad Al-Turki
Abstract:
Composting is a suitable way of recycling organic waste for agricultural application and environmental protection. In Saudi Arabia, several composting facilities are available and produce a high quantity of compost. The aim of this study is to evaluate the physical characteristics of composts manufactured in Saudi Arabia and to acquire a comprehensive picture of their quality through comparison with international compost quality standards such as CCQC and PAS-100. In the present study, different locally produced composts were identified, and most of the producing factories were visited during compost manufacturing. Representative samples from different compost production stages were collected, and physical characteristics were determined, including moisture content, bulk density, sand percentage, and the size distribution of the compost particles. Results showed wide variations in all parameters investigated and indicated, generally, a wide variation in the physical characteristics of the composts under study. The initial moisture contents in the composts were generally low: less than 60% in most samples and insufficient for the microbial activity needed for biodegradation in 96% of the compost types, which impedes the decomposition of organic materials. The initial bulk density values ranged from 117 g/L to 1110 g/L, while the final apparent bulk density ranged from 340 g/L to 1000 g/L, and about 45.4% of the composts did not meet the ideal bulk density value. Sand percentages in the composts were between 3.3% and 12.5%. This study has confirmed the need for a standard specification for compost manufactured in Saudi Arabia for agricultural use, based on international compost standards and on the soil characteristics and climatic conditions of Saudi Arabia.
Keywords: compost, maturity, Saudi Arabia, organic material
Procedia PDF Downloads 349
906 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector
Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu
Abstract:
In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muon atomic orbit around the nucleus. Because muonic X-rays have higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by the material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample becomes possible using an X-ray imaging detector. In this work, we report a non-destructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of polypropylene material, and a double-sided silicon strip detector, developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of the non-destructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed to improve the statistical analysis in further research.
Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis
Procedia PDF Downloads 206
905 Design of Replication System for Computer-Generated Hologram in Optical Component Application
Authors: Chih-Hung Chen, Yih-Shyang Cheng, Yu-Hsin Tu
Abstract:
Holographic optical elements (HOEs) have recently become some of the most suitable components in optoelectronic technology, owing to the demand for product systems of compact size. Computer-generated holography (CGH) is a well-known technology for HOE production. In some cases, a well-designed diffractive optical element with multifunctional components is also an important requirement for an advanced optoelectronic system. The spatial light modulator (SLM) is one of the key components with a great capability to display CGH patterns and is widely used in various applications, such as image projection systems. As for multifunctional components, such as phase and amplitude modulation of light, high-resolution holograms recorded with a multiple-exposure procedure are also suitable candidates. However, in holographic recording under multiple exposures, the diffraction efficiency of the final hologram is inevitably lower than with a single-exposure process. In this study, a two-step holographic recording method, comprising master hologram fabrication and replicated hologram production, will be designed. Since the diffraction efficiency of multiple-exposure holograms is reduced by a factor of M² (for M exposures), single exposure is more efficient for hologram replication. In the second step of holographic replication, a stable optical system with one-shot copying is introduced. For commercial applications, one may utilize this concept of holographic copying to obtain duplicates of HOEs with higher optical performance.
Keywords: holographic replication, holography, one-shot copying, optical element
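In symbols, the efficiency penalty cited above can be restated as follows (a restatement of the abstract's M² factor, not an additional result):

```latex
% Diffraction efficiency of each hologram after M superposed exposures,
% relative to the single-exposure efficiency \eta_1:
\eta_M = \frac{\eta_1}{M^{2}}
```

This is why the two-step scheme replicates with a single exposure: for M = 1 the penalty vanishes, whereas even a two-exposure recording already loses a factor of four.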
Procedia PDF Downloads 156
904 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well structured and available in numerical or categorical formats, which can be used for experiments directly. But at the opposite end of the spectrum lies a wide expanse of data that is intractable for direct analysis owing to its unstructured nature: discharge summaries, clinical notes, and procedural notes, which are written in narrative format and have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data, using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string-matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics
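A minimal sketch of dictionary-indexed string matching in the spirit of Q-Map; the two-entry concept dictionary, the UMLS-style codes, and the sample note are all hypothetical.

```python
# Hypothetical curated knowledge source: surface term -> UMLS-style concept code.
concepts = {
    "myocardial infarction": "C0027051",
    "hypertension": "C0020538",
}

def match_concepts(note, dictionary):
    """Return (term, code) pairs for every dictionary term found in the note."""
    text = note.lower()
    return [(term, code) for term, code in dictionary.items() if term in text]

note = "Patient with history of hypertension, rule out myocardial infarction."
print(match_concepts(note, concepts))
```

A production system would index the dictionary (for example, with an Aho-Corasick automaton) so matching stays fast over massive note collections, which is the configurability and speed the abstract highlights.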
Procedia PDF Downloads 133