Search results for: image encryption algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4605

915 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study applies Multiplicative Weight Update, a technique from artificial intelligence and machine learning, to models that diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to find patterns that label unseen scans as either benign or malignant. The multiplicative weight update algorithm weights each model's prediction according to that model's precision and accuracy over successive predictions, and the weighted predictions are then combined to produce the final classification. The research hypothesis stated that there would be a significant difference in accuracy between the three individual models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%, while the Multiplicative Weight Update algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there was indeed a significant difference in accuracy between the three models and the Multiplicative Weight Update system, and that a CNN model is the better option for this problem. A possible explanation is that Multiplicative Weight Update is not effective in a binary setting with only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, since it exploits the strengths of multiple different models to classify images into many categories rather than only the two considered in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
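
To illustrate the weighting scheme the abstract describes, here is a minimal sketch of a multiplicative weight update ensemble over binary classifiers; the learning rate eta, the 0/1 prediction interface, and the simulated model accuracies are our assumptions, not details from the paper:

```python
import numpy as np

def mwu_ensemble(model_preds, labels, eta=0.5):
    """Multiplicative weight update over base models.

    model_preds: (n_models, n_samples) array of 0/1 predictions.
    labels: (n_samples,) ground-truth 0/1 labels seen sequentially.
    Returns the weighted-majority predictions and the final weights.
    """
    n_models, n_samples = model_preds.shape
    weights = np.ones(n_models)             # start with uniform trust
    combined = np.empty(n_samples, dtype=int)
    for t in range(n_samples):
        votes = model_preds[:, t]
        # weighted majority vote between the two classes
        score_1 = weights[votes == 1].sum()
        score_0 = weights[votes == 0].sum()
        combined[t] = int(score_1 >= score_0)
        # penalize models that guessed wrong on this sample
        weights[votes != labels[t]] *= (1.0 - eta)
    return combined, weights

# toy usage: three simulated "models" (e.g., SVM, CNN, logistic regression)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
preds = np.stack([np.where(rng.random(200) < p, y, 1 - y)
                  for p in (0.78, 0.85, 0.79)])  # simulated accuracies
yhat, w = mwu_ensemble(preds, y)
print("ensemble accuracy:", (yhat == y).mean(), "weights:", w.round(3))
```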

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 74
914 The Influence of Surface Roughness on the Flow Fields Generated by an Oscillating Cantilever

Authors: Ciaran Conway, Nick Jeffers, Jeff Punch

Abstract:

With the current trend of miniaturisation of electronic devices, piezoelectric fans have attracted increasing interest as an alternative means of forced convection to traditional rotary solutions. Whilst there is an abundance of research on various piezo-actuated flapping fans in the literature, the geometries of these fans all consist of a smooth rectangular cross-section with thicknesses typically of the order of 100 µm. The focus of these studies is primarily on variables such as frequency, amplitude, and in some cases resonance mode. As a result, the induced flow dynamics are a direct consequence of the pressure differential at the fan tip as well as the pressure-driven ‘over the top’ vortices generated at the upper and lower edges of the fan. Rough surfaces such as golf-ball dimples or vortex generators on an aircraft wing have proven beneficial by tripping the boundary layer and energising the adjacent air flow. This paper aims to examine the influence of surface roughness on the airflow generation of a flapping fan and to determine whether the induced wake can be manipulated or enhanced by energising the airflow around the fan tip. Particle Image Velocimetry (PIV) is carried out on mechanically oscillated rigid fans with various surfaces consisting of pillars, perforations and cell-like grids derived from the wing topology of natural fliers. The results of this paper may be used to inform the design of piezoelectric fans and possibly aid in understanding the complex aerodynamics inherent in flapping-wing flight.

Keywords: aerodynamics, oscillating cantilevers, PIV, vortices

Procedia PDF Downloads 214
913 One Nature under God, and Divisible: Augustine’s “Duality of Man” Applied to the Creation Stories of Genesis

Authors: Elizabeth Latham

Abstract:

The notion that women were created as innately inferior to men has yet to be expelled completely from the theological system of humankind. This question and the biblical exegesis it requires are of paramount importance to feminist philosophy—after all, the study can bear little fruit if we cannot even agree on equality within the theological roots of humanity. Augustine’s “Duality of Man” gives new context to the two creation stories in Genesis, texts especially relevant given the billions of people worldwide who subscribe to them as philosophical realities. Each creation story describes the origin of human beings and is matched with one of Augustine’s two orders of mankind. The first story describes the absolute origin of the human soul and is paired with Augustine’s notion of the “spiritual order” of a human being: divine and eternal, fulfilling the biblical idea that human beings were created in the image and likeness of God. The second creation story, in contrast, depicts those aspects of humanity that distinguish and separate us from God: doubt, fear, and sin. It also introduces gender as a concept for the first time in the Bible. This story is better matched with Augustine’s idea of the “natural order” of humanity, that by which he believes women, in fact, are inferior. In the synthesis of the two sources, one can see that the natural order and any inferiority that it implies are incidental and not intended in our creation. Gender inequality is introduced with, and belongs to, the category of human imperfection, and to cite the Bible as encouraging it constitutes a gross misunderstanding of scripture. This is easy to see when we divide human nature into “spiritual” and “natural” and look carefully at where scripture falls.

Keywords: augustine, bible, duality of man, feminism, genesis

Procedia PDF Downloads 134
912 The Imminent Other in Anna Deavere Smith’s Performance

Authors: Joy Shihyi Huang

Abstract:

This paper discusses the concept of community in Anna Deavere Smith’s performance, one that challenges and explores existing notions of justice and the other. In contrast to unwavering assumptions of essentialism that have helped to propel a discourse on moral agency within the black community, Smith employs postmodern ideas in which the theatrical attributes of doubling and repetition are conceptualized as part of what Marvin Carlson has termed a ‘memory machine.’ Her dismissal of the need for linear time, such as that regulated by Aristotle’s The Poetics and its concomitant ethics, values, and emotions as a primary ontological and epistemological construct produced by the existing African American historiography, demonstrates an urgency to produce an alternative communal self to override metanarratives in which African Americans’ lives are contained and sublated by specific historical confines. Drawing on Emmanuel Levinas’ theories in ethics, specifically his notions of ‘proximity’ and ‘the third,’ the paper argues that Smith enacts a new model of ethics by launching an acting method that eliminates the boundary of self and other. Defying psychological realism, Smith conceptualizes an approach to acting that surpasses the mere mimetic value of invoking a ‘likeness’ of an actor to a character, which, as such, resembles the mere attribution of various racial or sexual attributes in identity politics. Such acting, she contends, reduces the other to a representation of, at best, an ultimate rendering of me/my experience. She instead appreciates ‘unlikeness,’ recognizing the unavoidable actor/character gap as a power that humbles the self, whose irreversible journey to the other carves out its own image.

Keywords: Anna Deavere Smith, Emmanuel Levinas, other, performance

Procedia PDF Downloads 150
911 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a condition that causes irreversible blindness; early diagnosis and appropriate interventions can preserve patients' vision for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure within the eye damages the optic nerve and causes deterioration of vision. The disease progresses through different levels of severity, up to blindness. Diagnosis at an early stage allows therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. Owing to its better accuracy and faster imaging, OCT has become the most common method used by experts. Despite the precision and speed of OCT and HRT imaging, difficulties and mistakes still occur in the diagnosis of glaucoma, especially in the early stages, and it is difficult for doctors to obtain objective results in diagnosis and staging. It therefore seems very important to develop an objective decision support system for diagnosing and grading glaucoma in patients. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for the use of doctors. Software based on a pattern recognition system would help doctors make objective evaluations of their patients. After the development and evaluation processes of the software, the system is intended to serve doctors in different hospitals.

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 295
910 Evolving Mango Metaphor In Diaspora Literature: Maintaining Immigrant Identity Through Foodways

Authors: Constance Kirker

Abstract:

This paper examines examples of the shared use of mango references as a culinary metaphor powerful in maintaining immigrant identity in the works of diaspora authors from a variety of regions of the world, including South Asia, the Caribbean, and Africa, and across a variety of genres, including novels, culinary memoirs, and children’s books. There has been past criticism of so-called sari-mango literature, suggesting that the use of the mango image is a clichéd, even “lazy,” attempt to “exoticize” and sentimentalize South Asia in particular. A broader review across national boundaries reveals that diaspora authors, including those beyond South Asia, write nostalgically about mango, as much about the messy “full body” tactile experience of eating a mango as about the “exotic” quality of mango representing the “otherness” of their home country. Many of the narratives detail universal childhood food experiences that are more shared than exotic, such as a desire to subvert the adult societal rules of neatness and get very messy, or memories of small but memorable childhood transgressions such as stealing mangoes from a neighbor’s tree. In recent years, food technology has evolved, and mangoes have become more familiar and readily available in Europe and America, from smoothies and baby food to dried fruit snacks. The meaning associated with the imagery of mangoes for both writers and readers in diaspora literature evolves as well, and authors do not have to heed Salman Rushdie’s command, “There must be no tropical fruits in the title. No mangoes.”

Keywords: identity, immigrant diaspora, culinary metaphor, food studies

Procedia PDF Downloads 105
909 Assimilating Remote Sensing Data Into Crop Models: A Global Systematic Review

Authors: Luleka Dlamini, Olivier Crespo, Jos van Dam

Abstract:

Accurately estimating crop growth and yield is pivotal for timely sustainable agricultural management and ensuring food security. Crop models and remote sensing can complement each other and, when combined, form a robust analysis tool to improve crop growth and yield estimations. This study thus aims to systematically evaluate how research that exclusively focuses on assimilating remote sensing (RS) data into crop models varies among countries, crops, data assimilation methods, and farming conditions. A strict search string was applied in the Scopus and Web of Science databases, and 497 potential publications were obtained. After screening for relevance with predefined inclusion/exclusion criteria, 123 publications were considered in the final review. Results indicate that over 81% of the studies were conducted in countries associated with high socio-economic and technological advancement, mainly China, the United States of America, France, Germany, and Italy. Many of these studies integrated MODIS or Landsat data into WOFOST to improve crop growth and yield estimation of staple crops at the field and regional scales. Most studies use recalibration or updating methods alongside various algorithms to assimilate remotely sensed leaf area index into crop models. However, these methods cannot account for the uncertainties in remote sensing observations and the crop model itself. Over 85% of the studies were based on commercial and irrigated farming systems. Despite great global interest in data assimilation into crop models, limited research has been conducted in resource- and data-limited regions like Africa. We foresee great potential for such applications in those conditions, and hence for facilitating and expanding the use of an approach from which developing farming communities could benefit.
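
The "updating" family of assimilation methods mentioned above can be illustrated with a single ensemble-Kalman-filter-style correction of simulated leaf area index (LAI) toward a remotely sensed observation; this is a generic sketch under our own assumptions, not the method of any specific reviewed paper:

```python
import numpy as np

def enkf_update_lai(lai_ensemble, lai_obs, obs_var):
    """One ensemble Kalman filter update of simulated LAI.

    lai_ensemble: model-simulated LAI values, one per ensemble member.
    lai_obs: remotely sensed LAI observation (e.g., MODIS-derived).
    obs_var: observation error variance.
    """
    prior_var = np.var(lai_ensemble, ddof=1)
    gain = prior_var / (prior_var + obs_var)        # Kalman gain
    # perturb the observation per member so the ensemble spread stays honest
    perturbed = lai_obs + np.random.normal(0.0, np.sqrt(obs_var),
                                           size=lai_ensemble.shape)
    return lai_ensemble + gain * (perturbed - lai_ensemble)

ensemble = np.random.normal(2.8, 0.4, size=50)      # prior LAI spread
updated = enkf_update_lai(ensemble, lai_obs=3.4, obs_var=0.1)
print(updated.mean())                               # pulled toward the observation
```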

Keywords: crop models, remote sensing, data assimilation, crop yield estimation

Procedia PDF Downloads 121
907 Numerical and Experimental Analysis of Stiffened Aluminum Panels under Compression

Authors: Ismail Cengiz, Faruk Elaldi

Abstract:

Within the scope of the study presented in this paper, the load-carrying capacity and buckling behavior of a stiffened aluminum panel, designed by adopting the current ‘buckle-resistant’ design application and the ‘post-buckling’ design approach, were investigated experimentally and numerically. The test specimen, stabilized by Z-type stiffeners and manufactured from aluminum 2024 T3 Clad material, was tested under compression load. Buckling behavior was observed by means of three-dimensional digital image correlation (DIC) and strain gauge pairs. The experimental study was followed by developing an efficient and reliable finite element model, whose ability to predict the behavior of the stiffened panel used for the compression test was verified by comparing experimental and numerical results in terms of load-shortening curves, strain-load curves and buckling mode shapes. While the finite element model was being constructed, non-linear behaviors associated with material and geometry were considered. Finally, the applicability of aluminum stiffened panels in airframe design relative to composite structures was evaluated through the concept of ‘structural efficiency’. This study reveals that a considerable amount of weight saving could be gained if the concept of ‘post-buckling design’ is preferred to the conventionally used ‘buckle-resistant design’ concept in the aircraft industry, without sacrificing structural integrity under the load spectrum.

Keywords: post-buckling, stiffened panel, non-linear finite element method, aluminum, structural efficiency

Procedia PDF Downloads 143
906 Static Application Security Testing Approach for Non-Standard Smart Contracts

Authors: Antonio Horta, Renato Marinho, Raimir Holanda

Abstract:

Considered an evolution of the blockchain, the Ethereum platform not only allows transactions of its cryptocurrency, named Ether, but also supports the programming of decentralised applications (DApps) and smart contracts. However, bringing this functionality into blockchains has raised other types of threats, and the exploitation of smart contract vulnerabilities has caused companies to experience big losses. This research intends to figure out the number of contracts that are at risk of being drained. Through a deep investigation, more than two hundred thousand smart contracts currently available on the Ethereum platform were scanned, and it was estimated how much money is at risk. The experiment was based on a query run on Google BigQuery in July 2022, which returned 50,707,133 contracts published on the Ethereum platform. After applying the filtering criteria, the experiment yielded 430,584 smart contracts to download and analyse. The filtering criteria consisted of filtering out ERC20 and ERC721 contracts, contracts without transactions, and contracts without balance. Of these 430,584 selected smart contracts, only 268,103 had source code published on Etherscan; moreover, using a hashing process, we discovered that there was contract duplication. Removing the duplicated contracts, the process ended up with 20,417 source codes, which were analysed using the open-source SAST tool SmartBugs with the Oyente and Securify algorithms. In the end, there was nearly $100,000 at risk of being drained from the potentially vulnerable smart contracts. It is important to note that the tools used in this study may generate false positives, which may interfere with the number of vulnerable contracts. To address this point, our next step in this research is to develop an application that tests the contracts in a parallel environment to verify the vulnerabilities. Finally, this study aims to alert users and companies to the risk of not properly creating and analysing their smart contracts before publishing them on the platform. As with any other application, smart contracts are at risk of having vulnerabilities which, in this case, may result in direct financial losses.
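
A minimal sketch of the hashing-based deduplication step described above, assuming the downloaded contract sources sit in a local directory of .sol files (the layout and whitespace normalization are our assumptions):

```python
import hashlib
from pathlib import Path

def dedupe_contracts(src_dir: str) -> dict:
    """Map source-code hash -> one representative .sol file path."""
    unique = {}
    for path in Path(src_dir).glob("*.sol"):
        # normalize whitespace so trivially reformatted copies collide
        code = " ".join(path.read_text(errors="ignore").split())
        digest = hashlib.sha256(code.encode()).hexdigest()
        unique.setdefault(digest, path)   # keep first occurrence only
    return unique

unique = dedupe_contracts("contracts/")
print(f"{len(unique)} unique source codes to feed into the SAST tools")
```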

Keywords: blockchain, reentrancy, static application security testing, smart contracts

Procedia PDF Downloads 86
905 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
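
A minimal sketch of the feature-fusion idea described above: TF-IDF features from report text are concatenated with image-derived features before fitting a Random Forest; the arrays below are simulated stand-ins, not the Indiana University data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# hypothetical inputs: radiology report texts, image feature vectors, labels
reports = ["no acute cardiopulmonary abnormality",
           "right lower lobe opacity"] * 50
image_feats = np.random.rand(100, 32)     # stand-in for image-derived features
labels = np.array([0, 1] * 50)            # 0 = normal, 1 = finding present

text_feats = TfidfVectorizer(max_features=200).fit_transform(reports).toarray()
X = np.hstack([text_feats, image_feats])  # fused NLP + image representation

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```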

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 36
904 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition that has been confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes. This task has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics, like progressive alignment, have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; our method avoids the complex problem of form and structure in different classes of organisms, and its classification performance is compared with other methods on empirical data. Our system consists of three phases. The first is called transformation and is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) for the codification of DNA barcodes, Fourier transform, and power spectrum signal processing. The second is called approximation and is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, realized by applying a hierarchical classification algorithm.
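
A minimal sketch of the transformation phase described above: EIIP codification of a barcode sequence, followed by a Fourier transform and power spectrum (the EIIP values are the standard published ones; the example sequence is invented):

```python
import numpy as np

# standard Electron-Ion Interaction Pseudopotential value per nucleotide
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def barcode_power_spectrum(sequence: str) -> np.ndarray:
    """Codify a DNA barcode with EIIP and return its power spectrum."""
    signal = np.array([EIIP[base] for base in sequence.upper()])
    signal -= signal.mean()               # remove the DC component
    spectrum = np.fft.rfft(signal)
    return np.abs(spectrum) ** 2          # power at each frequency bin

ps = barcode_power_spectrum("ACGTTGCAACGTAGCTTACG")
print(ps[:5])  # feature vector fed to the wavelet-network approximation stage
```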

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 313
903 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Authors: M. Kaya, M. Eris

Abstract:

Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, which deals with the various crimes committed in the digital environment, has become an important research topic. It is in the research scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, and DVDs, and to report whether it contains any crime-related elements. There are many software and hardware tools developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all data in the digital evidence that match specified criteria and presenting them to the investigator (e.g. text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. Moreover, because the process depends on the examiner’s experience, relevant data may be overlooked and the overall result may vary between cases. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
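
A minimal sketch of hash-based block matching as described above: the evidence image is read in fixed-size blocks, each block is hashed, and the digests are checked against a precomputed hash list (the block size, hash choice, and list format are our assumptions):

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, matching a common filesystem cluster

def matching_blocks(image_path: str, known_hashes: set) -> list:
    """Return offsets of evidence blocks whose hash appears in the list."""
    hits = []
    with open(image_path, "rb") as img:
        offset = 0
        while block := img.read(BLOCK_SIZE):
            if hashlib.md5(block).hexdigest() in known_hashes:
                hits.append(offset)        # block matches known material
            offset += len(block)
    return hits

# known_hashes would be loaded from a precomputed hash list of target files
known_hashes = {"d41d8cd98f00b204e9800998ecf8427e"}
print(matching_blocks("evidence.dd", known_hashes))
```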

Keywords: block matching, digital evidence, hash list, evaluation of digital evidence

Procedia PDF Downloads 250
902 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon

Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn

Abstract:

The dynamic changes in Land Use and Land Cover (LULC) in Yangon have generally resulted in improved human welfare and economic development over the last twenty years. Mapping LULC is crucially important for sustainable environmental development. However, it is difficult to obtain exact data on how environmental factors influence LULC at various scales, because the natural environment is composed of non-homogeneous surface features, so the satellite data also contain mixed pixels. The main objective of this study is to calculate the accuracy of change detection of LULC changes using Support Vector Machines (SVMs). The main data for this research were satellite images from 1996, 2006 and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the SVM process was applied with a soft approach at the allocation as well as the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated areas decreased (by an average total of 29% from 1996 to 2015) as they were converted to built-up area, which more than doubled (average total of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping.
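
A minimal sketch of the change detection statistics described above: two classified maps are cross-tabulated to count pixels moving between classes (the class labels and maps below are simulated):

```python
import numpy as np

def change_matrix(map_a, map_b, n_classes):
    """Pixel counts of transitions from classes in map_a to classes in map_b."""
    idx = map_a.astype(int) * n_classes + map_b.astype(int)
    counts = np.bincount(idx.ravel(), minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# 0 = vegetation, 1 = cultivated, 2 = built-up (illustrative classes)
lulc_1996 = np.random.randint(0, 3, (100, 100))
lulc_2015 = np.random.randint(0, 3, (100, 100))
m = change_matrix(lulc_1996, lulc_2015, n_classes=3)
print(m)                                   # row = 1996 class, column = 2015 class
print("vegetation converted to built-up:", m[0, 2], "pixels")
```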

Keywords: land use and land cover change, change detection, image processing, support vector machines

Procedia PDF Downloads 126
901 Electrospun Alginate Nanofibers Containing Spirulina Extract Double-Layered with Polycaprolactone Nanofibers

Authors: Seon Yeong Byeon, Hwa Sung Shin

Abstract:

Nanofibrous sheets are of interest in the beauty industries due to their moisturizing properties, adhesion to skin and delivery of nutrient materials. The benefit and function of cosmetic products should not be considered without safety; thus, a non-toxic manufacturing process is ideal when fabricating the products. In this study, we have developed cosmetic patches consisting of alginate and Spirulina extract, a marine resource with antibacterial and antioxidant effects, without the addition of harmful cross-linkers. The patches obtained their structural stability by layer-upon-layer electrospinning of an alginate layer on a previously spread polycaprolactone (PCL) layer instead of a crosslinking method. The morphological characteristics, release of Spirulina extract, water absorption, skin adhesiveness and cytotoxicity of the double-layered patches were assessed. Scanning electron microscopy (SEM) images showed that the addition of Spirulina extract made the fiber diameter of the alginate layers thinner. Impregnation with Spirulina extract increased their hydrophilicity, moisture absorption ability and skin adhesive ability. In addition, wetting the pre-dried patches resulted in release of the Spirulina extract within 30 min. The patches showed no cytotoxicity in a human keratinocyte cell-based MTT assay and instead showed increased cell viability. All the results indicate that the bioactive and hydro-adhesive double-layered patches have excellent applicability to bioproducts for personal skin care in the trend of ‘a mask pack a day’.

Keywords: alginate, cosmetic patch, electrospun nanofiber, polycaprolactone, Spirulina extract

Procedia PDF Downloads 341
900 Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling

Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo

Abstract:

Hydrological modelling plays a crucial role in the planning and management of water resources, especially in water-stressed regions where the need to effectively manage the available water resources is of critical importance. However, due to the complex, nonlinear and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters are extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered one that has progressed sluggishly over the past decades. This is largely a result of the uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs and applications of a promising evolutionary computation modelling technique – genetic programming (GP). It examines the specific characteristics of the technique which make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed. In all, a case is made for the full embrace of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.
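
To make the GP idea concrete, here is a tiny, self-contained sketch that evolves expression trees for a toy rainfall-to-runoff regression; the operators, parameters, and data are our illustrative assumptions, not a published hydrological application:

```python
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", round(random.uniform(-2, 2), 2)])
    return (random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, data):  # mean squared error, lower is better
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)

def crossover(a, b):      # crude subtree exchange
    if isinstance(a, tuple) and isinstance(b, tuple):
        return (a[0], a[1], b[2])
    return random.choice([a, b])

def mutate(tree):         # occasional full-subtree replacement
    return random_tree(2) if random.random() < 0.2 else tree

data = [(x, 0.6 * x * x + 0.4 * x) for x in range(10)]  # toy rainfall-runoff pairs
pop = [random_tree() for _ in range(200)]
for _ in range(40):
    pop.sort(key=lambda t: fitness(t, data))
    elite = pop[:40]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(160)]
best = min(pop, key=lambda t: fitness(t, data))
print("best tree:", best, "MSE:", round(fitness(best, data), 3))
```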

Keywords: computational modelling, evolutionary algorithms, genetic programming, hydrological modelling

Procedia PDF Downloads 292
899 Behavior, Temperament and Food Intake of Urban Indian Adolescents

Authors: Preeti Khanna, Bani T. Aeri

Abstract:

Background: Recent studies have indicated challenges that hamper the health and wellbeing of a vast majority of adolescents in developing countries. Many modifiable factors related to food intake among adolescents, such as behavior and temperament, have not been adequately explored. The aim of the proposed research is to study the impact of behavior and temperament on the food intake and diet quality of adolescents. Objectives: In the present study, data on the dietary behavior and anthropometry of adolescent boys and girls (aged 13-16 years) studying in public schools of Delhi will be gathered to ascertain the quality of diet among adolescent boys and girls and to study the effect of behavior and temperament on their diet quality. Methods: In total, 400 adolescents will participate in this cross-sectional study. The weight and height of the adolescents will be measured and BMI will be calculated. Information will be obtained on their socio-demographic profile and various factors influencing their food choices and diet quality, such as body image perception, behavior, temperament, locus of control and parental influence. Expected results: Several direct effects of adolescent traits and behavior on food intake are expected to be observed. Maturational patterns and gender differences in behavior traits will be assessed. By profiling behavior and temperament traits, we will have a better understanding of the impact of these factors on weight and eating behaviors in overweight/obese or even underweight adolescents. Conclusions: The proposed study will highlight the association of behavioral factors with the nutritional status of adolescents. It will also serve as a strategic approach for the obesity prevention and health management policies designed for adolescents.

Keywords: behaviour, temperament, food intake, adolescents

Procedia PDF Downloads 239
898 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and significantly delay disease detection. In this context, deep learning methods have arisen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize. Performance of the model for the detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
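
A minimal sketch of the post-processing and evaluation stages implied above: a model's per-sample R-peak probabilities are turned into discrete peak locations, then scored against annotations within a tolerance window (the threshold, refractory distance, and tolerance are our assumptions, not the paper's):

```python
import numpy as np
from scipy.signal import find_peaks

def detect_r_peaks(probs, fs=360, threshold=0.5):
    """Pick R-peak locations from a model's per-sample probability trace."""
    # peaks must clear the threshold and be at least 200 ms apart
    peaks, _ = find_peaks(probs, height=threshold, distance=int(0.2 * fs))
    return peaks

def score(pred, truth, fs=360, tol_ms=50):
    """Count a prediction as correct if within +/- tol_ms of an annotation."""
    tol = int(tol_ms / 1000 * fs)
    tp = sum(any(abs(p - t) <= tol for t in truth) for p in pred)
    sensitivity = tp / len(truth) if len(truth) else 0.0
    ppv = tp / len(pred) if len(pred) else 0.0
    return sensitivity, ppv

probs = np.zeros(3600)
probs[[360, 1080, 1800, 2520]] = 0.9       # toy probability trace
pred = detect_r_peaks(probs)
print(score(pred, truth=[358, 1082, 1801, 2522]))
```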

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 175
897 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach

Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip

Abstract:

The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students’ learning habits and styles enhances their understanding of the students’ learning backgrounds, allows teachers to provide better support for their students, and therefore has high potential to improve teaching quality and learning, especially in any mathematics-related courses. The aim of this research is to collect students’ data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of the students at risk of falling behind in their studies based on students’ previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the Minimum Spanning Tree method and consider further analysis of the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select appropriate subsets of features from the students’ data, yielding effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by finding possible under-performers at the beginning of the first semester and giving them special attention in order to support their learning process and improve their learning outcomes.
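
A minimal sketch of the factor-network construction described above: pairwise correlations between student factors are converted to distances (using the common sqrt(2(1 - r)) mapping) and a Minimum Spanning Tree is extracted; the factor names and data are simulated:

```python
import numpy as np
import networkx as nx

# simulated survey data: rows = students, columns = factors
rng = np.random.default_rng(1)
factors = ["study_hours", "sleep", "attendance", "prior_math", "screen_time"]
X = rng.normal(size=(300, len(factors)))

corr = np.corrcoef(X, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))        # correlation -> metric distance

G = nx.Graph()
for i in range(len(factors)):
    for j in range(i + 1, len(factors)):
        G.add_edge(factors[i], factors[j], weight=dist[i, j])

mst = nx.minimum_spanning_tree(G)          # backbone of factor relationships
print(sorted(mst.edges(data="weight")))
```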

Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method

Procedia PDF Downloads 123
896 Stigma Associated with Invisible Disabilities and Its Effect on Intended Disclosure in the Workplace

Authors: Jessica Lynne Hicksted

Abstract:

Disability discrimination is a long-standing issue that, despite protections, continues to result in unemployment, underemployment, and lack of advancement for disabled persons. Visible stigma is researched substantially; however, less is known about the impact of stigma associated with identities that can be concealed. Although researchers have investigated this issue, currently there is no tool to measure this phenomenon. The purpose of this quantitative study was to create and validate a new tool to measure stigma associated with invisible disabilities. The study is grounded in Roberts’ conceptual model of professional image construction integrating social identity, impression management, and organizational behavior; Meisenbach’s stigma management communication theory, which addresses the vulnerabilities and resilience to stigma communication by focusing on how individuals encounter and react to perceived stigmas; and Kelley and Michela’s causal attribution theory. Participants included 1,412 adults in the United States 18 years or older who are currently employed or have been employed within the last 5 years. Confirmatory factor analysis of the new Workplace Invisible Disabilities Experience scale showed excellent fit of the factor structure to the data, χ²/df = 1.855, CFI = .955, RMSEA = .045, p = .0001. The scale has three subscales, Ableism, Advocacy, and Acceptance, with excellent internal consistency reliability. Total score, Advocacy, and Acceptance were associated with intention to disclose. Implications for positive social change include helping organizations understand the extent of invisible disability stigma, which can help improve workplace performance and satisfaction.

Keywords: invisible disabilities, accommodations, acceptance, social change, workplace inclusion

Procedia PDF Downloads 63
895 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer

Authors: Mahya Naghipoor

Abstract:

Purpose: Non-small cell lung cancer (NSCLC) is the most common lung cancer type. Epidermal growth factor receptor (EGFR) mutation is the main cause of NSCLC. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimal invasiveness. Semantic analysis of qualitative CT features is based on visual evaluation by a radiologist; however, the naked eye may not capture all image features. Radiomics, on the other hand, provides the opportunity for quantitative analysis of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: For this purpose, the keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUC values of ROC analyses for semantic and radiomics features were compared. Results: The results showed that the reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD) and wavelet), the AUC values were 50%-86%. Conclusions: In conclusion, the accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
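
A minimal sketch of the kind of comparison performed in the reviewed papers: ROC AUC is computed for a semantic feature set and a radiomics feature set against the same mutation labels (all data simulated):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                               # 1 = EGFR mutation present
semantic = rng.normal(size=(200, 7)) + 0.3 * y[:, None]   # weaker simulated signal
radiomics = rng.normal(size=(200, 6)) + 0.5 * y[:, None]  # stronger simulated signal

for name, X in [("semantic", semantic), ("radiomics", radiomics)]:
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")[:, 1]
    print(name, "AUC:", round(roc_auc_score(y, probs), 3))
```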

Keywords: lung cancer, radiomics, computed tomography, mutation

Procedia PDF Downloads 165
894 Design of an Acoustic Imaging Sensor Array for Mobile Robots

Authors: Dibyendu Roy, V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta

Abstract:

Imaging of underwater objects is primarily conducted by acoustic imagery due to the severe attenuation of electro-magnetic waves in water. Acoustic imagery underwater has a wide range of significant applications, such as side-scan sonar and mine-hunting sonar. It also finds utility in other domains, such as imaging of body tissues via ultrasonography and non-destructive testing of objects. In this paper, we explore the feasibility of using active acoustic imagery in air and simulate phased-array beamforming techniques available in the literature for various array designs, to achieve a suitable acoustic sensor array design for a portable mobile robot that can detect the presence or absence of anomalous objects in a room. The multi-path reflection effects, especially in enclosed rooms, and environmental noise factors are currently not simulated and will be dealt with during the experimental phase. The related hardware is designed with the same feasibility criterion: the developed system needs to be deployable on a portable mobile robot. There is a trade-off between image resolution and range on the one hand and the array size, number of elements and imaging frequency on the other, which has to be iteratively simulated to achieve the desired acoustic sensor array design. The designed acoustic imaging array system is to be mounted on a portable mobile robot and targeted for use in surveillance missions for intruder alerts and for imaging objects in dark and smoky scenarios where conventional optics-based systems do not function well.
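
A minimal sketch of the kind of phased-array simulation described above: the far-field array factor of a uniform linear array under delay-and-sum beamforming, steered to a chosen angle (the element count, spacing, and frequency are illustrative assumptions):

```python
import numpy as np

c = 343.0                 # speed of sound in air, m/s
f = 40e3                  # imaging frequency, Hz (illustrative)
wavelength = c / f
n_elems = 16
d = wavelength / 2        # half-wavelength spacing avoids grating lobes
steer_deg = 20.0

k = 2 * np.pi / wavelength
n = np.arange(n_elems)
# phase weights that align contributions arriving from the steering direction
weights = np.exp(-1j * k * n * d * np.sin(np.radians(steer_deg)))

angles = np.radians(np.linspace(-90, 90, 721))
steering = np.exp(1j * k * np.outer(n, d * np.sin(angles)))
pattern = np.abs(weights @ steering) / n_elems      # normalized array factor
print("peak at", np.degrees(angles[pattern.argmax()]), "degrees")
```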

Keywords: acoustic sensor array, acoustic imagery, anomaly detection, phased array beamforming

Procedia PDF Downloads 402
893 Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique

Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong

Abstract:

Glaucoma is a disease that causes visual loss in adults. Glaucoma is associated with damage to the optic nerve, and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included. Participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. In our findings, the mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without (0.34 vs. 0.32, p < 0.05 for the inferotemporal vein; 0.33 vs. 0.30, p < 0.01 for the inferotemporal artery; 0.34 vs. 0.31, p < 0.01 for the superotemporal vein; and 0.33 vs. 0.30, p < 0.05 for the superotemporal artery). From these results, an increase in the CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma.
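
A minimal sketch of one way to extract CRR from a cross-vessel intensity profile using gradient-based edges; the synthetic profile and the wall-pixel margin are our assumptions, and the paper's own edge detection may differ:

```python
import numpy as np

def crr_from_profile(profile):
    """Return (vessel width, central reflex width) from one profile."""
    grad = np.gradient(profile.astype(float))
    left_vessel = grad.argmin()                 # background -> dark vessel
    right_vessel = grad.argmax()                # dark vessel -> background
    # search inside the vessel, skipping the wall pixels themselves
    inside = np.arange(left_vessel + 2, right_vessel - 1)
    left_reflex = inside[grad[inside].argmax()]   # dark lumen -> bright reflex
    right_reflex = inside[grad[inside].argmin()]  # bright reflex -> dark lumen
    return right_vessel - left_vessel, right_reflex - left_reflex

# synthetic profile: bright fundus background, dark vessel, bright central reflex
p = np.ones(100)
p[30:70] = 0.3      # vessel lumen
p[45:55] = 0.7      # central light reflex
vessel_w, reflex_w = crr_from_profile(p)
print("CRR =", reflex_w / vessel_w)   # central reflex width / vessel diameter
```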

Keywords: glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection

Procedia PDF Downloads 321
892 Enabling Oral Communication and Accelerating Recovery: The Creation of a Novel Low-Cost Electroencephalography-Based Brain-Computer Interface for the Differently Abled

Authors: Rishabh Ambavanekar

Abstract:

Expressive aphasia (EA) is an oral disability, common among stroke victims, in which the Broca’s area of the brain is damaged, interfering with verbal communication abilities. EA currently has no technological solution, and its only viable interventions are inefficient or available only to the affluent. This prompts the need for an affordable, innovative solution to facilitate recovery and assist in speech generation. This project proposes a novel concept: using a wearable, low-cost electroencephalography (EEG) device-based brain-computer interface (BCI) to translate a user’s inner dialogue into words. A low-cost EEG device was developed and found to be 10 to 100 times less expensive than any current EEG device on the market. As part of the BCI, a machine learning (ML) model was developed and trained using the EEG data. Two stages of testing were conducted to analyze the effectiveness of the device: a proof-of-concept test and a final solution test. The proof-of-concept test demonstrated an average accuracy above 90%, and the final solution test demonstrated an average accuracy above 75%. These two successful tests were used as a basis to demonstrate the viability of BCI research in developing lower-cost verbal communication devices. Additionally, the device proved not only to enable users to communicate verbally but also to have the potential to assist in accelerated recovery from the disorder.
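
A minimal sketch of the ML stage described above: band-power features are extracted from EEG epochs with Welch's method and a classifier is cross-validated on two imagined-word classes (the sampling rate, bands, model, and data are our assumptions; the abstract does not specify them):

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 250  # assumed sampling rate of the low-cost EEG device, Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpower_features(epoch):
    """Mean power in each band for one single-channel EEG epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    return [psd[(freqs >= lo) & (freqs < hi)].mean()
            for lo, hi in BANDS.values()]

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 120)                       # two imagined words
epochs = rng.normal(size=(120, FS * 2))           # 2 s epochs of noise
epochs[y == 1] += np.sin(2 * np.pi * 10 * np.arange(FS * 2) / FS)  # alpha bump

X = np.array([bandpower_features(e) for e in epochs])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```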

Keywords: neurotechnology, brain-computer interface, neuroscience, human-machine interface, BCI, HMI, aphasia, verbal disability, stroke, low-cost, machine learning, ML, image recognition, EEG, signal analysis

Procedia PDF Downloads 115
891 Physical Characteristics of Locally Composts Produced in Saudi Arabia and the Need for Regulations

Authors: Ahmad Al-Turki

Abstract:

Composting is a suitable way of recycling organic waste for agricultural application and environmental protection. In Saudi Arabia, several composting facilities are available and produce a high quantity of compost. The aim of this study is to evaluate the physical characteristics of composts manufactured in Saudi Arabia and to acquire a comprehensive picture of their quality through comparison with international compost quality standards such as CCQC and PAS-100. In the present study, different locally produced composts were identified, and most of the producing factories were visited during the manufacturing of the composts. Representative samples from different compost production stages were collected, and physical characteristics were determined, including moisture content, bulk density, percentage of sand, and the size distribution of the compost particles. Results showed wide variations in all parameters investigated. The initial moisture contents of the composts were generally low: less than 60% in most samples and insufficient for the microbial activity needed for biodegradation in 96% of the compost types, which will impede the decomposition of organic materials. The initial bulk density values ranged from 117 gL-1 to 1110.0 gL-1, while the final apparent bulk density ranged from 340.0 gL-1 to 1000 gL-1, and about 45.4% of composts did not meet the ideal bulk density value. Sand percentages in composts were between 3.3% and 12.5%. This study has confirmed the need for a standard specification for compost manufactured in Saudi Arabia for agricultural use, based on international compost standards and on the soil characteristics and climatic conditions of Saudi Arabia.

Keywords: compost, maturity, Saudi Arabia, organic material

Procedia PDF Downloads 340
890 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector

Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu

Abstract:

In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muon atomic orbit around the nucleus. Because muonic X-rays have higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by the material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample becomes possible using an X-ray imaging detector. In this work, we report a non-destructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex (J-PARC). The irradiated target consisted of polypropylene material, and a double-sided silicon strip detector, which was developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of the non-destructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging a smaller target, a new detector system will be developed and the statistical analysis will be improved in further research.

Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis

Procedia PDF Downloads 200
889 Design of Replication System for Computer-Generated Hologram in Optical Component Application

Authors: Chih-Hung Chen, Yih-Shyang Cheng, Yu-Hsin Tu

Abstract:

Holographic optical elements (HOEs) have recently become some of the most suitable components in optoelectronic technology owing to the requirement for product systems with compact size. Computer-generated holography (CGH) is a well-known technology for HOE production. In some cases, a well-designed diffractive optical element with multiple functions is also an important requirement for an advanced optoelectronic system. A spatial light modulator (SLM) is one of the key components with great capability to display CGH patterns and is widely used in various applications, such as image projection systems. As for multifunctional components, such as phase and amplitude modulation of light, a high-resolution hologram recorded with a multiple-exposure procedure is one of the suitable candidates. However, in holographic recording under multiple exposures, the diffraction efficiency of the final hologram is inevitably lower than with a single-exposure process. In this study, a two-step holographic recording method, comprising master hologram fabrication and replicated hologram production, is designed. Since there exists a reduction factor of M² in the diffraction efficiency of multiple-exposure holograms (for M exposures), single exposure is more efficient for hologram replication. In the second step of holographic replication, a stable optical system with one-shot copying is introduced. For commercial application, one may utilize this concept of holographic copying to obtain duplicates of HOEs with higher optical performance.
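
The efficiency argument above can be written compactly (a standard result for M-times multiplexed holograms; the notation is ours):

```latex
% diffraction efficiency of each grating in an M-exposure hologram
\eta_M \approx \frac{\eta_1}{M^2}
% e.g. M = 4 exposures leaves each grating at 1/16 of the single-exposure
% efficiency, which motivates the one-shot replication step
```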

Keywords: holographic replication, holography, one-shot copying, optical element

Procedia PDF Downloads 151
888 A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System

Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala

Abstract:

One of the principal challenges that the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motoring industry, is becoming impossible through our own strength alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification towards recovery efforts in cases where a vehicle is missing or attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face captured using an installed mobile phone device. The location identification function uses the Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner about the whereabouts of the vehicle. The mobile app was implemented using Python, as it is undoubtedly the best choice for machine learning: it allows easy access to machine learning algorithms through its widely developed library ecosystem. The graphical user interface was developed using Java, as it is better suited for mobile development. Google's online database (Firebase) was used as the means of storage for the application. The system integration test was performed using a simple percentage analysis. Sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used to establish the acceptability of the system developed. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor a vehicle at any given place, even if it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate and report missing vehicles to different security agencies.

Keywords: CNN, location identification, tracking, GPS, GSM

Procedia PDF Downloads 153
887 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic

Authors: Farahnaz Karami

Abstract:

Routing is an important, challenging task in mobile ad hoc networks due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in Mobile Ad Hoc Networks (MANETs). However, existing swarm-intelligence-based routing protocols find an optimal path by considering only one or two route selection metrics, without considering correlations among such parameters, which makes them unsuitable on their own for routing real-time applications. Fuzzy logic can combine multiple route selection parameters whose information is inherently uncertain or imprecise, but it does not naturally provide the multipath routing needed for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and ant colony optimization that can solve some routing problems in mobile ad hoc networks, such as optimizing node energy consumption to increase network lifetime, reducing the link failure rate to increase packet delivery reliability, and providing load balancing to optimize available bandwidth. In the proposed algorithm, path information is given to the fuzzy inference system by the ants. Based on the available path information and considering the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated, and the optimal paths are selected. NS-2.35 simulation tools are used for simulation, and the results are compared and evaluated against the newest QoS-based algorithms in MANETs according to the packet delivery ratio, end-to-end delay, and routing overhead ratio criteria. The simulation results show significant improvement in the performance of these networks in terms of decreased end-to-end delay and routing overhead ratio and increased packet delivery ratio.
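
A minimal sketch of the fuzzy path-cost idea described above: each metric reported by the ants is mapped through a simple membership function and the memberships are aggregated into a single cost (the membership shapes and weights are our assumptions, not the paper's rule base):

```python
def ramp(x, lo, hi):
    """Rising linear membership: 0 below lo, 1 above hi."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def fuzzy_path_cost(delay_ms, residual_energy, bandwidth_kbps):
    """Lower cost = better path; aggregates 'badness' memberships."""
    bad_delay = ramp(delay_ms, 50, 500)                  # slow paths are bad
    low_energy = 1.0 - ramp(residual_energy, 0.1, 0.9)   # drained nodes are bad
    low_bw = 1.0 - ramp(bandwidth_kbps, 200, 2000)       # thin links are bad
    # weighted sum stands in for a full rule base + defuzzification
    return 0.4 * bad_delay + 0.35 * low_energy + 0.25 * low_bw

# (delay, energy, bandwidth) per candidate path, as reported back by the ants
paths = {"A": (120, 0.9, 800), "B": (40, 0.3, 600), "C": (200, 0.8, 1500)}
best = min(paths, key=lambda p: fuzzy_path_cost(*paths[p]))
print("selected path:", best)
```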

Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic

Procedia PDF Downloads 56
886 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang

Abstract:

Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. In order to overcome these challenges and advance the detection and classification of normal and abnormal chest X-ray (CXR) images, this study introduces a technique known as DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic CXR images. The study evaluates the effectiveness of the DCWGAN technique using a ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved impressive performance metrics, with an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate promising potential for the early detection of diseases in CXR images using this approach.
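
A minimal sketch of the conditional Wasserstein objective underlying the DCWGAN idea, with a gradient penalty enforcing the critic's Lipschitz constraint; the toy critic, image size, and labels are illustrative assumptions, and the paper's "diverse" component is not reproduced here:

```python
import torch
import torch.nn as nn

class ToyCritic(nn.Module):
    """Stand-in conditional critic: embeds the label as an extra channel."""
    def __init__(self, n_classes=2, size=64):
        super().__init__()
        self.size = size
        self.embed = nn.Embedding(n_classes, size * size)
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x, labels):
        lab = self.embed(labels).view(-1, 1, self.size, self.size)
        return self.net(torch.cat([x, lab], dim=1))

def critic_loss(critic, real, fake, labels, gp_weight=10.0):
    """Conditional WGAN-GP critic loss: Wasserstein term + gradient penalty."""
    loss_w = critic(fake, labels).mean() - critic(real, labels).mean()
    # gradient penalty on random interpolates between real and fake images
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(mix, labels).sum(), mix,
                                create_graph=True)[0]
    gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return loss_w + gp_weight * gp

critic = ToyCritic()
real = torch.randn(8, 1, 64, 64)        # stand-in CXR batch
fake = torch.randn(8, 1, 64, 64)        # stand-in generator output
labels = torch.randint(0, 2, (8,))      # 0 = normal, 1 = abnormal
print(critic_loss(critic, real, fake, labels).item())
```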

Keywords: CNN, classification, deep learning, GAN, Resnet50

Procedia PDF Downloads 78