Search results for: intrinsic image representation
807 Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models
Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai
Abstract:
Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolutional neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. The ViT model achieved a top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models again perform better than the CNN-based models ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike in identifying plants more quickly and with improved accuracy.
Keywords: plant identification, CNN, image processing, vision transformer, classification
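A minimal PyTorch sketch of the kind of training setup described above, assuming a torchvision ViT backbone, an ImageFolder-style dataset layout, and illustrative augmentation choices; the dataset path, augmentations, and hyperparameters are assumptions for illustration, not the authors' configuration.

```python
# Hypothetical sketch: fine-tuning a torchvision ViT for genus-level plant classification.
# Dataset path, augmentations, and hyperparameters are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),          # augmentation: random crop/scale
    transforms.RandomHorizontalFlip(),          # augmentation: horizontal flip
    transforms.ColorJitter(0.2, 0.2, 0.2),      # augmentation: mild color jitter
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("rhs_dataset/train", transform=train_tf)  # assumed layout
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

model = models.vit_b_16(weights="IMAGENET1K_V1")          # pretrained ViT backbone
model.heads.head = nn.Linear(model.heads.head.in_features, len(train_ds.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                                   # illustrative epoch count
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```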
Procedia PDF Downloads 103
806 Comparative Study Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are the most common type of blood cell and are the most intensively studied in cell biology. A deficiency of RBCs, in which the hemoglobin level is lower than normal, is referred to as "anemia". Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell is spherical or non-spherical. The second set, consisting mainly of textural features, was used to recognize the types of spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for the Support Vector Machine, Radial Basis Function neural network, and K-Nearest Neighbors algorithm, respectively.
Keywords: K-nearest neighbors algorithm, radial basis function neural network, red blood cells, support vector machine
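The WEKA workflow itself is Java-based and interactive; the sketch below is only a scikit-learn analogue of the three classifiers compared above, applied to pre-extracted geometrical and textural feature vectors. The feature matrix is a random placeholder, and the MLP standing in for WEKA's RBF network is an assumption.

```python
# Hypothetical sketch: comparing classifiers on pre-extracted RBC feature vectors.
# X holds geometrical + textural features per cell image; y labels cells normal/abnormal.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier  # stand-in for WEKA's RBF network

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # placeholder feature matrix (12 features per cell)
y = rng.integers(0, 2, size=200)      # placeholder labels: 0 = normal, 1 = abnormal

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "MLP (RBF-network stand-in)": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2%}")
```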
Procedia PDF Downloads 409
805 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer
Authors: Mahya Naghipoor
Abstract:
Purpose: Non-small cell lung cancer (NSCLC) is the most common type of lung cancer. Epidermal growth factor receptor (EGFR) mutation is the main driver mutation in NSCLC. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimally invasive nature. Semantic analysis of qualitative CT features is based on visual evaluation by a radiologist. However, the naked eye may not capture all image features. On the other hand, radiomics provides the opportunity for quantitative analysis of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: For this purpose, the keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUCs of the ROC analyses for semantic and radiomics features were compared. Results: The results showed that the reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema, and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD), and wavelet), the AUC values were 50%-86%. Conclusions: In conclusion, the accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
Keywords: lung cancer, radiomics, computer tomography, mutation
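As a concrete illustration of the AUC comparison summarised above, the sketch below computes ROC AUC for a semantic-style score and a radiomics-style score against a mutation label; the scores and labels here are random placeholders, not values from the reviewed papers.

```python
# Hypothetical sketch: comparing AUC of a semantic score vs. a radiomics score
# for EGFR mutation status. All values below are random placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
egfr_mutation = rng.integers(0, 2, size=100)               # 1 = mutated, 0 = wild type
semantic_score = egfr_mutation + rng.normal(0, 1.5, 100)   # weaker separation
radiomics_score = egfr_mutation + rng.normal(0, 1.0, 100)  # stronger separation

print("Semantic AUC: ", roc_auc_score(egfr_mutation, semantic_score))
print("Radiomics AUC:", roc_auc_score(egfr_mutation, radiomics_score))
```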
Procedia PDF Downloads 167
804 Pushover Analysis of Masonry Infilled Reinforced Concrete Frames for Performance Based Design for near Field Earthquakes
Authors: Alok Madan, Ashok Gupta, Arshad K. Hashmi
Abstract:
Non-linear dynamic time history analysis is considered the most advanced and comprehensive analytical method for evaluating the seismic response and performance of multi-degree-of-freedom building structures under the influence of earthquake ground motions. However, effective and accurate application of the method requires the implementation of advanced hysteretic constitutive models of the various structural components, including masonry infill panels. Sophisticated computational research tools that incorporate realistic hysteresis models for non-linear dynamic time-history analysis are not popular among professional engineers, as they are not only difficult to access but also complex and time-consuming to use. Moreover, commercial computer programs for structural analysis and design that are acceptable to practicing engineers do not generally integrate advanced hysteretic models which can accurately simulate the hysteresis behavior of structural elements with a realistic representation of strength degradation, stiffness deterioration, energy dissipation and 'pinching' under cyclic load reversals in the inelastic range of behavior. In this scenario, push-over or non-linear static analysis methods have gained significant popularity, as they can be employed to assess the seismic performance of building structures while avoiding the complexities and difficulties associated with non-linear dynamic time-history analysis. "Push-over" or non-linear static analysis offers a practical and efficient alternative to non-linear dynamic time-history analysis for rationally evaluating seismic demands. The present paper is based on an analytical investigation of the effect of the distribution of masonry infill panels over the elevation of planar masonry infilled reinforced concrete (R/C) frames on the seismic demands, using capacity spectrum procedures implementing nonlinear static analysis (pushover analysis) in conjunction with the response spectrum concept. An important objective of the present study is to numerically evaluate the adequacy of the capacity spectrum method using pushover analysis for performance-based design of masonry infilled R/C frames for near-field earthquake ground motions.
Keywords: nonlinear analysis, capacity spectrum method, response spectrum, seismic demand, near-field earthquakes
Procedia PDF Downloads 403
803 Design of an Acoustic Imaging Sensor Array for Mobile Robots
Authors: Dibyendu Roy, V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta
Abstract:
Imaging of underwater objects is primarily conducted by acoustic imagery due to the severe attenuation of electro-magnetic waves in water. Acoustic imagery underwater has a wide range of significant applications, such as side-scan sonar and mine-hunting sonar. It also finds utility in other domains, such as imaging of body tissues via ultrasonography and non-destructive testing of objects. In this paper, we explore the feasibility of using active acoustic imagery in air and simulate phased array beamforming techniques available in the literature for various array designs, in order to achieve a suitable acoustic sensor array design for a portable mobile robot which can be applied to detect the presence or absence of anomalous objects in a room. The multi-path reflection effects, especially in enclosed rooms, and environmental noise factors are currently not simulated and will be dealt with during the experimental phase. The related hardware is designed with the same feasibility criterion: that the developed system needs to be deployed on a portable mobile robot. There is a trade-off between image resolution and range on the one hand and the array size, number of elements, and imaging frequency on the other, which has to be iteratively simulated to achieve the desired acoustic sensor array design. The designed acoustic imaging array system is to be mounted on a portable mobile robot and targeted for use in surveillance missions for intruder alerts and for imaging objects in dark and smoky scenarios where conventional optics-based systems do not function well.
Keywords: acoustic sensor array, acoustic imagery, anomaly detection, phased array beamforming
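A minimal sketch of the kind of phased-array simulation mentioned above, assuming a uniform linear array and conventional delay-and-sum beamforming in air; the element count, spacing, steering angle, and frequency are illustrative choices, and multipath and noise are ignored, as noted in the abstract.

```python
# Hypothetical sketch: far-field delay-and-sum beam pattern of a uniform linear array in air.
# Element count, spacing, and frequency are illustrative choices, not the paper's design.
import numpy as np

c = 343.0                 # speed of sound in air (m/s)
f = 40_000.0              # imaging frequency (Hz), e.g. an ultrasonic transducer
wavelength = c / f
n_elements = 16
d = wavelength / 2        # half-wavelength spacing to avoid grating lobes
steer_deg = 20.0          # beam steered towards +20 degrees

k = 2 * np.pi / wavelength
positions = np.arange(n_elements) * d
steer_phase = -k * positions * np.sin(np.deg2rad(steer_deg))  # per-element steering phases

angles = np.deg2rad(np.linspace(-90, 90, 721))
# Array factor: coherent sum of element contributions for each look angle
af = np.array([np.abs(np.sum(np.exp(1j * (k * positions * np.sin(a) + steer_phase))))
               for a in angles]) / n_elements
beam_db = 20 * np.log10(np.maximum(af, 1e-6))

peak = np.rad2deg(angles[np.argmax(beam_db)])
print(f"Main lobe at {peak:.1f} deg (steered to {steer_deg} deg)")
```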
Procedia PDF Downloads 409
802 Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique
Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong
Abstract:
Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve, and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included. Participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. From our findings, the mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without glaucoma (0.34 vs. 0.32, p < 0.05 for the inferotemporal vein; 0.33 vs. 0.30, p < 0.01 for the inferotemporal artery; 0.34 vs. 0.31, p < 0.01 for the superotemporal vein; and 0.33 vs. 0.30, p < 0.05 for the superotemporal artery). From these results, an increase in the CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma.
Keywords: glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection
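A simplified sketch of how a central-reflex-to-vessel-diameter ratio could be measured from a 1-D intensity profile taken perpendicular to a vessel, using gradient-based edge detection; the synthetic profile and the reflex threshold are illustrative assumptions, not the authors' exact technique.

```python
# Hypothetical sketch: estimating CRR from a 1-D intensity profile across a retinal vessel.
# The synthetic profile and thresholds below are illustrative, not the paper's algorithm.
import numpy as np

x = np.arange(100)
profile = np.full(100, 200.0)              # bright retinal background
profile[40:60] = 80.0                      # dark vessel, 20 px wide
profile[48:52] = 150.0                     # brighter central light reflex, 4 px wide

grad = np.gradient(profile)
vessel_left = int(np.argmin(grad))         # strongest falling edge (vessel border)
vessel_right = int(np.argmax(grad))        # strongest rising edge (vessel border)
vessel_width = vessel_right - vessel_left

inner = profile[vessel_left + 1:vessel_right]
reflex_mask = inner > (inner.min() + 40)   # pixels belonging to the central reflex
reflex_width = int(reflex_mask.sum())

crr = reflex_width / vessel_width
print(f"vessel width = {vessel_width}px, reflex width = {reflex_width}px, CRR = {crr:.2f}")
```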
Procedia PDF Downloads 325
801 Enabling Oral Communication and Accelerating Recovery: The Creation of a Novel Low-Cost Electroencephalography-Based Brain-Computer Interface for the Differently Abled
Authors: Rishabh Ambavanekar
Abstract:
Expressive aphasia (EA) is an oral disability, common among stroke victims, in which the Broca's area of the brain is damaged, interfering with verbal communication abilities. EA currently has no dedicated technological solution, and the only currently viable solutions are inefficient or available only to the affluent. This prompts the need for an affordable, innovative solution to facilitate recovery and assist in speech generation. This project proposes a novel concept: using a wearable, low-cost electroencephalography (EEG) device-based brain-computer interface (BCI) to translate a user's inner dialogue into words. A low-cost EEG device was developed and found to be 10 to 100 times less expensive than any current EEG device on the market. As part of the BCI, a machine learning (ML) model was developed and trained using the EEG data. Two stages of testing were conducted to analyze the effectiveness of the device: a proof-of-concept test and a final solution test. The proof-of-concept test demonstrated an average accuracy of above 90%, and the final solution test demonstrated an average accuracy of above 75%. These two successful tests were used as a basis to demonstrate the viability of BCI research in developing lower-cost verbal communication devices. Additionally, the device not only enables users to communicate verbally but also has the potential to assist in accelerated recovery from the disorder.
Keywords: neurotechnology, brain-computer interface, neuroscience, human-machine interface, BCI, HMI, aphasia, verbal disability, stroke, low-cost, machine learning, ML, image recognition, EEG, signal analysis
Procedia PDF Downloads 119
800 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It supports two main tasks: displaying results by coloring items according to the item class or feature value, and, in forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where not all neighbors in high-dimensional space can be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, and two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with the newly obtained embedding. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of the dynamics of high-dimensional datasets.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
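The sketch below illustrates the general idea of re-using a reference embedding so that cluster positions stay comparable across datasets. It is a simplified illustration only (nearest-neighbour initialisation of a new t-SNE run), not the two-cost optimisation proposed in the paper; the toy data and parameters are assumptions.

```python
# Hypothetical sketch: re-using a reference t-SNE embedding so that cluster positions stay
# comparable across datasets. Simplified illustration, not the paper's method.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

X_ref, _ = make_blobs(n_samples=300, centers=4, random_state=0)   # reference subset
X_new, _ = make_blobs(n_samples=300, centers=4, random_state=1)   # later snapshot

ref_emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_ref)

# Place each new point at the embedding position of its nearest reference point,
# then let t-SNE refine from that initialisation so the cluster layout stays coherent.
nn = NearestNeighbors(n_neighbors=1).fit(X_ref)
_, idx = nn.kneighbors(X_new)
init_positions = ref_emb[idx[:, 0]].astype(np.float64)

new_emb = TSNE(n_components=2, init=init_positions, random_state=0).fit_transform(X_new)
print(ref_emb.shape, new_emb.shape)
```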
Procedia PDF Downloads 143
799 Physical Characteristics of Locally Composts Produced in Saudi Arabia and the Need for Regulations
Authors: Ahmad Al-Turki
Abstract:
Composting is a suitable way of recycling organic waste for agricultural application and environmental protection. In Saudi Arabia, several composting facilities are available and produce large quantities of compost. The aim of this study is to evaluate the physical characteristics of composts manufactured in Saudi Arabia and to acquire a comprehensive picture of their quality through comparison with international standards of compost quality such as CCQC and PAS-100. In the present study, different locally produced composts were identified, and most of the producing factories were visited during the manufacturing of the composts. Representative samples from different compost production stages were collected, and physical characteristics were determined, including moisture content, bulk density, percentage of sand, and the size distribution of the compost particles. Results showed wide variations in all parameters investigated, indicating generally that there is a wide variation in the physical characteristics of the types of compost under study. The initial moisture contents of the composts were generally low; they were less than 60% in most samples and not sufficient for the microbial activity needed for biodegradation in 96% of the types of compost, which will impede the decomposition of organic materials. The initial bulk density values ranged from 117 g L-1 to 1110.0 g L-1, while the final apparent bulk density ranged from 340.0 g L-1 to 1000 g L-1, and about 45.4% did not meet the ideal bulk density value. Sand percentages in the composts were between 3.3% and 12.5%. This study has confirmed the need for a standard specification for compost manufactured in Saudi Arabia for agricultural use, based on international standards for compost and on the soil characteristics and climatic conditions of Saudi Arabia.
Keywords: compost, maturity, Saudi Arabia, organic material
Procedia PDF Downloads 348
798 Ethnic-Racial Breakdown in Psychological Research among Latinx Populations in the U.S.
Authors: Madeline Phillips, Luis Mendez
Abstract:
The 21st century has seen an increase in the amount and variety of psychological research on Latinx people, the largest minority group in the U.S., with great variability from the individual's cultural origin (e.g., ethnicity) to region (e.g., nationality). We were interested in exploring how scientists recruit, conduct, and report research on Latinx samples. Ethnicity and race are important components of individuals and should be addressed to capture a broader and deeper understanding of psychological research findings. In order to explore Latinx/Hispanic work, the Journal of Latinx Psychology (JLP) and the Hispanic Journal of Behavioral Sciences (HJBS) were analyzed for (1) measures of ethnicity and race in empirical studies, (2) nationalities represented, and (3) how researchers reported ethnic-racial demographics. The analysis included publications from 2013-2018 and revealed two common themes in the reporting of ethnicity and race: overrepresentation/underrepresentation and overgeneralization. There is currently no systematic way of reporting ethnicity and race in Latinx/Hispanic research, creating a vague sense of what role ethnicity/race plays in the lives of participants and how. Second, studies used the Hispanic/Latinx terms interchangeably and were not consistent across publications. For the purpose of this project, we were only interested in publications with Latinx samples in the U.S.; therefore, studies outside the U.S. and non-empirical studies were excluded. JLP went from N = 118 articles to N = 94, and HJBS went from N = 174 to N = 154. For this project, we developed a coding rubric for ethnicity/race that reflected the different ways researchers reported ethnicity and race and was compatible with the U.S. census. We coded which ethnicity/race was identified as the largest ethnic group in each sample. We used the ethnic-racial breakdown numbers or percentages if provided. There were also studies that simply did not report the ethnic composition beyond Hispanic or Latinx. We found that in 80% of the samples, Mexicans were overrepresented compared to the population statistics of Latinx people in the U.S. We observed all the ethnic-racial breakdowns, demonstrating the overrepresentation of Mexican samples and the underrepresentation and/or lack of representation of certain ethnicities (e.g., Chilean, Guatemalan). Our results showed an overgeneralization in studies that clustered their participants as Latinx/Hispanic: 23 for JLP and 63 for HJBS. The authors discuss the importance of transparency from researchers in reporting the context of the sample, including country, state, neighborhood, and demographic variables that are relevant to the goals of the project, except when there may be an issue of privacy and/or confidentiality involved. In addition, the authors discuss the importance of recognizing the variability within the Latinx population and how it is reflected in the scientific discourse.
Keywords: Latinx, Hispanic, race and ethnicity, diversity
Procedia PDF Downloads 114
797 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector
Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu
Abstract:
In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured in a muon atomic orbit around a nucleus. Because muonic X-rays have higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by the material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample becomes possible using an X-ray imaging detector. In this work, we report a non-destructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of polypropylene material, and a double-sided silicon strip detector, which was developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of the non-destructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed to improve the statistical analysis in further research.
Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis
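A schematic sketch of how a 2-D elemental image could be reconstructed from strip-detector events: select hits in an energy window around the 14 keV carbon muonic X-ray line and histogram their strip coordinates. The event structure, strip counts, and window width are assumptions for illustration, and the event data are simulated.

```python
# Hypothetical sketch: building a 2-D hit map from double-sided strip-detector events,
# keeping only hits near the 14 keV muonic X-ray line of carbon. Event data are simulated.
import numpy as np

rng = np.random.default_rng(42)
n_events = 50_000
x_strip = rng.integers(0, 128, n_events)          # p-side strip index (assumed 128 strips)
y_strip = rng.integers(0, 128, n_events)          # n-side strip index (assumed 128 strips)
energy_kev = rng.normal(14.0, 0.5, n_events)      # placeholder energies around the line

# Energy window around the carbon muonic X-ray line (window width is an assumption)
in_window = (energy_kev > 13.0) & (energy_kev < 15.0)

hit_map, _, _ = np.histogram2d(x_strip[in_window], y_strip[in_window],
                               bins=(128, 128), range=((0, 128), (0, 128)))
print("hits in window:", int(in_window.sum()), "map shape:", hit_map.shape)
```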
Procedia PDF Downloads 205
796 Comparative Study of Static and Dynamic Representations of the Family Structure and Its Clinical Utility
Authors: Marietta Kékes Szabó
Abstract:
The patterns of personality (mal)function and the individual's psychosocial environment collectively influence health status and may lie in the background of psychosomatic disorders. Although patients with their diversified symptoms usually do not have any organic problems, the experienced complaint, the fear of serious illness, and the lack of social support often lead to increased anxiety and further enigmatic symptoms. The role of the family system and its atmosphere seems to be very important in this process. Several studies have explored the characteristics of dysfunctional family organization: an inflexible family structure, hidden conflicts that are not spoken about by the family members during their daily interactions, undefined role boundaries, neglect or overprotection of the children by the parents, and coalitions between generations. However, questionnaires that are used to measure the properties of the family system are able to explore only its unit and cannot pay attention to dyadic interactions, while the representation of the family structure by a figure placing test gives us a new perspective to better understand the organization of the (sub)system(s). Furthermore, its dynamic form opens new perspectives for exploring the family members' joint representations, which gives us the opportunity to learn more about the flexibility of cohesion and hierarchy of the given family system. In this way, the communication among the family members can also be examined. The aim of my study was to collect a great deal of information about the organization of psychosomatic families. In our research, we used Gehring's Family System Test (FAST) in both static and dynamic forms to mobilize the family members' mental representations of their family and to obtain data on their individual representations as well as their cooperation. There were four families in our study, all of them with a young adult member. Two families with healthy participants and two families with asthmatic patient(s) were involved in our research. The family members' behavior observed during the dynamic situation was recorded on video for further data analysis with Noldus Observer XT 8.0 software. In accordance with previous studies, our results show that the family structure of families with at least one psychosomatic patient is more rigid than that found in the control group, and the certain (typical, ideal, and conflict) dynamic representations reflected mainly the most dominant family member's individual concept. The behavior analysis also confirmed the intensified role of the dominant person(s) in family life, thereby influencing the family decisions, the place of the other family members, and the atmosphere of the interactions, which could also be grasped well by the applied methods. However, further research is needed to learn more about the phenomenon, which can open the door for new therapeutic approaches.
Keywords: psychosomatic families, family structure, family system test (FAST), static and dynamic representations, behavior analysis
Procedia PDF Downloads 391
795 Design of Replication System for Computer-Generated Hologram in Optical Component Application
Authors: Chih-Hung Chen, Yih-Shyang Cheng, Yu-Hsin Tu
Abstract:
Holographic optical elements (HOEs) have recently become some of the most suitable components in optoelectronic technology, owing to the requirement for product systems with compact size. Computer-generated holography (CGH) is a well-known technology for HOE production. In some cases, a well-designed diffractive optical element with multifunctional components is also an important issue and is needed for an advanced optoelectronic system. The spatial light modulator (SLM) is one of the key components with great capability to display CGH patterns and is widely used in various applications, such as image projection systems. Regarding multifunctional components, such as phase and amplitude modulation of light, a high-resolution hologram recorded with a multiple-exposure procedure is also a suitable candidate. However, in holographic recording with multiple exposures, the diffraction efficiency of the final hologram is inevitably lower than with a single-exposure process. In this study, a two-step holographic recording method, including master hologram fabrication and replicated hologram production, is designed. Since there exists a reduction factor of M² in the diffraction efficiency of multiple-exposure holograms (for M exposures), single exposure would be more efficient for hologram replication. In the second step of holographic replication, a stable optical system with one-shot copying is introduced. For commercial applications, one may utilize this concept of holographic copying to obtain duplications of HOEs with higher optical performance.
Keywords: holographic replication, holography, one-shot copying, optical element
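Stated compactly, the scaling mentioned above (assuming η₁ denotes the diffraction efficiency attainable with a single exposure) can be written as:

```latex
% Diffraction efficiency of an M-exposure hologram relative to a single exposure
\eta_M \;=\; \frac{\eta_1}{M^{2}}, \qquad M = \text{number of exposures}
```

This makes clear why a single-exposure copying step is preferred for replication: the efficiency penalty grows quadratically with the number of exposures.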
Procedia PDF Downloads 156
794 Nigerian Media Coverage of the Chibok Girls Kidnap: A Qualitative News Framing Analysis of the Nation Newspaper
Authors: Samuel O. Oduyela
Abstract:
Over the last ten years, many studies have examined the media coverage of terrorism across the world. Nevertheless, most of these studies have been inclined towards the Western narrative, more so in relation to the international media. This study departs from that partiality to explore the Nigerian press and its coverage of Boko Haram. The study intends to illustrate how the Nigerian press has reported homegrown terrorism within its borders. On 14 April 2014, the Shekau-led Boko Haram kidnapped over 200 female students from Chibok in Borno State. This study analyses a structured sample of news stories, feature articles, editorial comments, and opinion pieces from the Nation newspaper. The study examined the representation of the Chibok girls kidnap by concentrating on four main viewpoints: the news framing of the kidnap under President Goodluck Jonathan (2014), the news framing under President Mohammadu Buhari (2016-2018), the sourcing model present in the news reporting of the kidnap, and the challenges Nation reporters face in reporting Boko Haram. The study adopted qualitative news framing analysis to provide further insights into significant developments established from the examination of news content. The study found that the news reportage mainly focused on the government response to the Chibok girls kidnap, the international press, and Boko Haram. Boko Haram was also framed as a political conspiracy, as prevailing, and as instilling fear. Political and economic influence appeared to be a significant determinant of the reportage. The study found that the Nation newspaper's portrayal of the crisis under President Jonathan differed significantly from that under President Buhari. While the newspaper framed the actions of President Jonathan as lacklustre, dismissive, and confusing, it was less critical of President Buhari's government's handling of the crisis. The Nation newspaper failed to promote or explore non-violent approaches. News reports of the kidnap, thus, were presented mainly from a political and ethnoreligious perspective. The study also raised the question of what roles journalists should play in covering conflicts: should they merely report on, comment on, and interpret them, or should they be actors in the resolution or, more importantly, the prevention of conflicts? The study underlined the need for the independence of the media and more training for journalists to advance more nuanced and conflict-sensitive news coverage in the Nigerian context.
Keywords: Boko Haram, Chibok girls kidnap, conflict in Nigeria, media framing
Procedia PDF Downloads 148
793 TARF: Web Toolkit for Annotating RNA-Related Genomic Features
Abstract:
Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts, and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based coordinates to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed the dedicated web app TARF, a web toolkit for annotating RNA-related genomic features. The TARF web tool is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfils its three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, the overlaps with RNA transcript components are identified, and the information is combined into one table which is available for copying and download. Summary statistics about ambiguous assignments are also reported. Second, the tool provides a convenient visualization method for the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated utilizing the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A), and RNA 5-methylcytosine (m5C), which were obtained from ChIP-Seq, MeRIP-Seq, and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A, and m5C are enriched near transcription start sites, stop codons, and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
Keywords: RNA-related genomic features, annotation, visualization, web server
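A minimal sketch of the core coordinate conversion described above: projecting a genome-based position onto a spliced transcript given its exon intervals. It handles the plus strand only, and the exon coordinates, function name, and example positions are illustrative assumptions, not TARF's implementation.

```python
# Hypothetical sketch: mapping a genomic coordinate to a transcript coordinate,
# given the transcript's exons on the plus strand. Coordinates below are illustrative.
from typing import List, Optional, Tuple

def genome_to_transcript(pos: int, exons: List[Tuple[int, int]]) -> Optional[int]:
    """Return the 0-based transcript coordinate of a genomic position, or None if intronic."""
    offset = 0
    for start, end in exons:                 # exons ordered 5' -> 3', half-open [start, end)
        if start <= pos < end:
            return offset + (pos - start)
        offset += end - start
    return None

exons = [(100, 200), (300, 380), (500, 650)]   # three exons of a hypothetical transcript
print(genome_to_transcript(150, exons))        # 50  (inside exon 1)
print(genome_to_transcript(310, exons))        # 110 (100 nt of exon 1 + 10 nt into exon 2)
print(genome_to_transcript(250, exons))        # None (intronic)
```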
Procedia PDF Downloads 207
792 Purchasing Decision-Making in Supply Chain Management: A Bibliometric Analysis
Authors: Ahlem Dhahri, Waleed Omri, Audrey Becuwe, Abdelwahed Omri
Abstract:
In industrial processes, decision-making ranges across different scales, from process control to supply chain management. The purchasing decision-making process in the supply chain is presently gaining more attention as a critical contributor to a company's strategic success. Given the scarcity of thorough summaries in prior studies, this bibliometric analysis adopts a meticulous approach to achieve quantitative knowledge on the constantly evolving subject of purchasing decision-making in supply chain management. Through bibliometric analysis, we examine a sample of 358 peer-reviewed articles from the Scopus database. VOSviewer and Gephi software were employed to analyze, combine, and visualize the data. Data analytic techniques, including citation networks, PageRank analysis, co-citation, and publication trends, were used to identify influential works and outline the discipline's intellectual structure. The outcomes of this descriptive analysis highlight the most prominent articles, authors, journals, and countries based on their citations and publications. The findings illustrate an increase in the number of publications, exhibiting a slightly growing trend in this field. Co-citation analysis coupled with content analysis of the most cited articles identified five research themes: integrating sustainability into the supplier selection process; supplier selection under disruption risks, with assessment and mitigation strategies; fuzzy MCDM approaches for supplier evaluation and selection; purchasing decisions in vendor problems; and decision-making techniques in supplier selection and order lot-sizing problems. With the help of a graphic timeline, this exhaustive map of the field provides a visual representation of the evolution of publications, demonstrating a gradual shift of research interest from vendor selection problems towards integrating sustainability into the supplier selection process. These clusters offer insights into the wide variety of purchasing methods and conceptual frameworks that have emerged; however, they have not been validated empirically. The findings suggest that future research should offer a greater depth of practical and empirical analysis to enrich the theories. These outcomes provide a powerful road map for further study in this area.
Keywords: bibliometric analysis, citation analysis, co-citation, Gephi, network analysis, purchasing, SCM, VOSviewer
Procedia PDF Downloads 85
791 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well-structured and available in numerical or categorical formats which can be used for experiments directly. At the opposite end of the spectrum, however, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it is found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data, using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics
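A toy sketch of the dictionary-indexed string matching idea behind this kind of system: greedy longest-match lookup of clinical phrases against a curated vocabulary. The vocabulary, concept labels, and matching policy below are illustrative assumptions, not Q-Map's implementation or UMLS content.

```python
# Hypothetical sketch: greedy longest-match extraction of clinical concepts from free text
# using a small curated dictionary. Terms and concept labels below are illustrative only.
concept_index = {
    ("type", "2", "diabetes"): "CONCEPT:diabetes_mellitus_type_2",
    ("diabetes",): "CONCEPT:diabetes_mellitus",
    ("chest", "pain"): "CONCEPT:chest_pain",
    ("metformin",): "CONCEPT:metformin",
}
max_len = max(len(k) for k in concept_index)

def extract_concepts(text: str):
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    found, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):   # try longest match first
            key = tuple(tokens[i:i + n])
            if key in concept_index:
                found.append((" ".join(key), concept_index[key]))
                i += n
                break
        else:
            i += 1
    return found

note = "Patient with type 2 diabetes, on metformin, denies chest pain."
print(extract_concepts(note))
```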
Procedia PDF Downloads 133
790 Artificial Intelligence Based Abnormality Detection System and Real Valu™ Product Design
Authors: Junbeom Lee, Jaehyuck Cho, Wookyeong Jeong, Jonghan Won, Jungmin Hwang, Youngseok Song, Taikyeong Jeong
Abstract:
This paper investigates and analyzes meta-learning technologies that use multiple cameras to monitor and check abnormal behavior in people in real time in the healthcare field. Advances in artificial intelligence and computer vision technologies have confirmed that cameras can be useful for individual health monitoring and abnormal behavior detection. Through this, it is possible to establish a system that can respond early by automatically detecting abnormal behavior in vulnerable people, such as patients and the elderly. In this paper, we use a technique called meta-learning to analyze image data collected from cameras and develop a commercial product to determine abnormal behavior. Meta-learning applies machine learning algorithms to help systems learn and adapt quickly to new real data. Through this, the accuracy and reliability of the abnormal behavior discrimination system can be improved. In addition, this study proposes a meta-learning-based abnormal behavior detection system that includes steps such as data collection and preprocessing, feature extraction and selection, and classification model development. Experiments in various healthcare scenarios analyze the performance of the proposed system and demonstrate its superiority over other existing methods. Through this study, we present the possibility that camera-based meta-learning technology can be useful for monitoring and testing for abnormal behavior in the healthcare area.
Keywords: artificial intelligence, abnormal behavior, early detection, health monitoring
Procedia PDF Downloads 86
789 Need for Policy and Legal Framework for Caste Based Atrocities as Violation of International Human Rights in View of Indian Diaspora
Authors: Vijayalaxmi Khopade
Abstract:
Prima facie, the caste system is intrinsic to Indian society. It is an ancient system of intense social stratification based upon birth and enjoying religious sanction. The uppermost strata and privileges are ascribed to and enjoyed by Brahmins (the priestly class), while the lowest strata are occupied by Dalits, who are not ascribed any privileges. The caste system is inherently hierarchical, patriarchal, and systematic, and thrives solely on exploitation justified through the Brahminical system of hegemony based singularly on birth. The caste system has extended its tentacles to other religions like Christianity, Buddhism, Jainism, and Islam in South Asia. The term Dalit is colloquially used to categorize persons belonging to the lower strata of the caste hierarchy. However, this category is heterogeneous and highly stratified, following practices like untouchability and exclusion amongst themselves. The modern Indian legal system acknowledges the existence of caste and its perils. Therefore, by virtue of the Indian Constitution, provisions for affirmative action for the protection and development of Dalits are made. Courts in India have liberally interpreted laws to benefit Dalits. However, the modern system of governance is not immune from caste-based biases. These biases are reflected in the implementation of governance, including the dispensation of justice. The economic reforms of the 1990s gave a huge boost to the Indian diaspora. Persons of Indian origin are now seen making great strides in almost every sector and enjoying positions of power globally. As one peels off the layer of ethnic Indian origin, a deep-seated layer of caste and caste-based patriarchy is clearly visible. Members of the Indian diaspora enjoying positions of power essentially belong to the upper castes and carry caste-based biases with them. These castes have long enjoyed the benefits of education; therefore, they were the first to benefit from the LPG (Liberalization, Privatization, Globalization) model adopted in the 1990s. Dalits, however, had little formal education until recently. The Western legal system, to the best of our knowledge, does not recognize caste and therefore cannot afford protection for Dalits where discrimination and exploitation take place solely on the basis of caste. Therefore, Dalits are left with no legal remedy outside domestic jurisdiction. Countries like the UK have made an attempt to include caste in their Equality Bill 2010. This has met with tough resistance from upper-caste Hindus who shy away from recognizing their caste privileges and, therefore, the existence of caste. In this paper, an attempt at comparative analysis is made between the various legal protections accorded to Dalits in India and international human rights as protected by the United Nations under the Universal Declaration of Human Rights. An attempt has been made to mark a distinction between race and caste and to establish the position of women in the caste-based hierarchy. The paper also makes an argument for the inclusion of atrocities committed against Dalits as violations of international human rights, for their protection by the United Nations, and for the trial of their violations by international courts. The paper puts into perspective the need for external agencies like the United Nations and international courts to intervene in the protection of rights guaranteed by the Indian Constitution, even with the existence of a modern legal system in a sovereign democratic country.
Keywords: atrocity, caste, diaspora, legal framework
Procedia PDF Downloads 215
788 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics
Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane
Abstract:
Agriculture is fundamental and remains an important sector of the Algerian economy; based on traditional techniques and structures, it generally serves consumption purposes. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field visits. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of extracting information. Indeed, this methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land types. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, has been studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images. This method, non-negative matrix factorization (NMF), does not consider the pixel as a single entity but looks for the components within the pixel itself. The results obtained by the application of NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed close agreement between the most important results of the NMF and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing
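A minimal sketch of the sub-pixel unmixing idea described above, assuming a hyperspectral cube reshaped into a pixels × bands matrix and factorised with scikit-learn's NMF into per-pixel abundances and component spectra; the cube dimensions, component count, and data are placeholders, not the study's imagery or processing chain.

```python
# Hypothetical sketch: sub-pixel unmixing of a hyperspectral image with NMF.
# The cube dimensions, number of components, and data are placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
rows, cols, bands = 50, 50, 100
cube = rng.random((rows, cols, bands))            # placeholder hyperspectral cube

X = cube.reshape(-1, bands)                       # pixels x bands (non-negative)
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
abundances = model.fit_transform(X)               # per-pixel component weights
endmembers = model.components_                    # component spectra (4 x bands)

# Label each pixel with its dominant component, e.g. as a proxy for a land-use class
dominant = abundances.argmax(axis=1).reshape(rows, cols)
areas = np.bincount(dominant.ravel(), minlength=4)
print("pixels per dominant component:", areas)
```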
Procedia PDF Downloads 423
787 Analysis of Coloring Styles of Brazilian Urban Heritage
Authors: Natalia Naoumova
Abstract:
Facing the changes and continuous growth of contemporary cities, along with the effects of globalization that accelerate cultural dissolution, the maintenance of cultural authenticity, which is implicit in historical areas as a part of cultural diversity, can be considered one of the key elements of a sustainable society. This article focuses on the polychromy of buildings in a historical context as an important feature of urban settings. It analyses the coloring of Brazilian urban heritage through the study of the historical districts of Pelotas and Piratini, located in the State of Rio Grande do Sul, Brazil. The objective is to reveal the coloring characteristics of different historical periods, determine the chromatic typologies of the corresponding building styles, and clarify the connection between the historical chromatic aspects and their relationship with contemporary urban identity. Architectural style data were collected by different techniques, such as stratigraphic prospection of buildings, a survey of historical records and descriptions, analysis of images, and a study of projects with colored facades kept in historical archives. Three groups of characteristics were considered in searching for working criteria for the formation of chromatic model typologies: 1) the coloring palette; 2) the morphology of the facade; and 3) their relationship. The analysis shows that the formation of the urban chromatic image of the historical center is a continuous and dynamic process with the development of constant chromatic resources. It establishes that changes in the formal language of subsequent historical periods lead to changes in the chromatic schemes, providing a different reading of the facades both in terms of formal interpretation and symbolic meaning.
Keywords: building style, historic colors, urban heritage, urban polychromy
Procedia PDF Downloads 142
786 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications
Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque
Abstract:
Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR), despite related advances in architecture in other areas such as BIM and real-time working programs. This research studies to what extent VR software can help stakeholders better understand energy efficiency parameters in order to obtain reliable ratings assigned to the parts of a building. To evaluate this hypothesis, the methodology included the construction of a software prototype. Current energy certification systems do not follow an intuitive data entry process; neither do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. This software, by means of real-time visualization and a graphical user interface, proposes different improvements to current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, the difficulty of using current interfaces, which are not friendly or intuitive for the user, means that untrained users usually get a poor idea of the grounds for certification and how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building, through a new visual mode. The software also helps the user evaluate whether or not an investment to improve the materials of an installation is worthwhile in terms of the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing the energy rating of the different elements of the building in a more intuitive and simple manner. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters. Working in real time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that must be entered to calculate the final certification. The prototype also allows for visualizing the building in efficiency mode, which lets users move through the building to analyze thermal bridges or other energy efficiency data. This research also finds that the visual representation of energy efficiency certifications makes it easy for stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.
Keywords: energetic certification, virtual reality, augmented reality, sustainability
Procedia PDF Downloads 186
785 Development of Risk Index and Corporate Governance Index: An Application on Indian PSUs
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Public Sector Undertakings (PSUs), being government-owned organizations, have commitments to the economic and social wellbeing of society; this commitment needs to be reflected in their risk-taking, decision-making, and governance structures. Therefore, the primary objective of the study is to suggest measures that may lead to improvement in the performance of PSUs. To achieve this objective, two normative frameworks (one relating to risk levels and the other relating to governance structure) are put forth. The risk index is based on nine risks, such as solvency risk, liquidity risk, and accounting risk, and each of the risks has been scored on a scale of 1 to 5. The governance index is based on eleven variables, such as board independence, diversity, and the existence of a risk management committee; each of them is scored on a scale of 1 to 5. The sample consists of 39 PSUs that featured in the Nifty 500 index, and the study covers a 10-year period from April 1, 2005 to March 31, 2015. Return on assets (ROA) and return on equity (ROE) have been used as proxies of firm performance. The control variables used in the model include the age of the firm, the growth rate of the firm, and the size of the firm. A dummy variable has also been used to factor in the effects of recession. Given the panel nature of the data and the possibility of endogeneity, dynamic panel data generalized method of moments (Diff-GMM) regression has been used. It is worth noting that the corporate governance index is positively related to both ROA and ROE, indicating that with improvement in governance structure, PSUs tend to perform better. Considering the components of the CGI, it may be suggested that PSUs (i) ensure adequate representation of women on the board, (ii) appoint a Chief Risk Officer, and (iii) constitute a risk management committee. The results also indicate that there is a negative association between the risk index and returns. These results not only validate the framework used to develop the risk index but also provide a yardstick for PSUs to benchmark their risk-taking if they want to maximize their ROA and ROE. While constructing the CGI, certain non-compliances were observed, even in terms of mandatory requirements such as the proportion of independent directors. Such infringements call for stringent penal provisions and better monitoring of PSUs. Further, if the Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) bring about such reforms in the PSUs and make adherence to the normative frameworks put forth in the study mandatory, PSUs may have more effective and efficient decision-making, lower risks, and hassle-free management, all ultimately leading to better ROA and ROE.
Keywords: PSU, risk governance, diff-GMM, firm performance, risk index
Procedia PDF Downloads 157
784 Time to CT in Major Trauma in Coffs Harbour Health Campus - The Australian Rural Centre Experience
Authors: Thampi Rawther, Jack Cecire, Andrew Sutherland
Abstract:
Introduction: CT facilitates the diagnosis of potentially life-threatening injuries and supports early management. There is evidence that reduced CT acquisition time reduces mortality and length of hospital stay. Currently, there are variable recommendations for ideal timing. Indeed, the NHS standard contract for a major trauma service and STAG both recommend immediate access to CT within a maximum time of 60 min and appropriate reporting within 60 min of the scan. At Coffs Harbour Health Campus (CHHC), a CT radiographer is on site between 8am and 11pm. Aim: To investigate the average time to CT at CHHC and assess for any significant relationship between time to CT and injury severity score (ISS) or time of triage. Method: All major trauma calls between Jan 2021 and Oct 2021 were audited (N=87). Patients were excluded if they went from the ED to theatre. Time to CT is defined as the time between triage and the timestamp on the first CT image. Median and interquartile range were used as measures of central tendency and spread, as the data were not normally distributed, and the chi-square test was used to determine association. Results: The median time to CT was 51.5 min (IQR 40-74). We found no relationship between time to CT and ISS (P=0.18) or between time of triage and time to CT (P=0.35). We compared this to other centres, such as John Hunter Hospital and Gold Coast Hospital, where the median CT acquisition times were 76 min (IQR 52-115) and 43 min, respectively. Conclusion: This shows an avenue for improvement, given 35% of CTs were >30 min. Furthermore, being proactive and aware of time to CT as an important factor in trauma management can be another avenue for improvement. Based on this, we will re-audit in 12-24 months to assess if any improvement has been made.
Keywords: imaging, rural surgery, trauma surgery, improvement
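For illustration, the sketch below computes the median and IQR of CT acquisition times and runs a chi-square test of association after categorising time of triage (in-hours vs. out-of-hours) and time to CT (within vs. beyond the 60-min target); the example times and the categorisation thresholds are assumptions, not the audit data.

```python
# Hypothetical sketch: summary statistics and a chi-square test of association for
# time-to-CT data. The times and category cut-offs below are placeholders, not audit data.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)
time_to_ct = rng.gamma(shape=4, scale=14, size=87)          # minutes, placeholder
triage_hour = rng.integers(0, 24, size=87)                  # hour of triage, placeholder

median = np.median(time_to_ct)
q1, q3 = np.percentile(time_to_ct, [25, 75])
print(f"median {median:.1f} min (IQR {q1:.0f}-{q3:.0f})")

# Categorise both variables, then test for association
in_hours = (triage_hour >= 8) & (triage_hour < 23)          # CT radiographer on site
delayed = time_to_ct > 60                                    # beyond the 60-min target
table = np.array([[np.sum(in_hours & delayed), np.sum(in_hours & ~delayed)],
                  [np.sum(~in_hours & delayed), np.sum(~in_hours & ~delayed)]])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```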
Procedia PDF Downloads 102
783 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals, extracted from ICA components corresponding to 15 well-known networks, to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRIs were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128s), (5) spatial smoothing with a Gaussian kernel of FWHM 8mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
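The study's analysis was carried out in R; the sketch below reproduces the general workflow (75/25 split, a random forest with Gini-based importances, and SVM feature ranking by recursive feature elimination) in Python/scikit-learn on randomly generated stand-in data. Note that scikit-learn's RFE requires an estimator that exposes coefficients, so a linear-kernel SVM is used for the ranking step before an RBF-kernel SVM is retrained on the selected feature; this is an adaptation, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-in data: 37 subjects x 15 network mean signals, binary label (control / early MS).
X = rng.normal(size=(37, 15))
y = rng.integers(0, 2, size=37)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0, stratify=y)

# Random forest: Gini-based importances are available after fitting.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rf_rank = np.argsort(rf.feature_importances_)[::-1]
print("RF test accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("RF most important feature index:", rf_rank[0])

# RFE ranks features with a linear-kernel SVM, then an RBF-kernel SVM is
# retrained on the single selected feature (mirroring the one-network result).
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
best = np.flatnonzero(rfe.support_)
svm = SVC(kernel="rbf").fit(X_tr[:, best], y_tr)
print("SVM (selected feature) test accuracy:", accuracy_score(y_te, svm.predict(X_te[:, best])))
```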
Procedia PDF Downloads 240782 From Plate to Self-Perception: Unravelling the Interplay Between Food Security and Self-Esteem Among Malaysian University Students
Authors: Amiraa Ali Mansor, Haslinda Abdullah, Angela Chan Nguk Fong, Norhaida Hanim Binti Ahmad Tajudin, Asnarulkhadi Abu Samah
Abstract:
Obesity has risen sharply over the past three decades, posing a grave public health concern globally. In Malaysia, it has also emerged as a significant health threat. While the second Sustainable Development Goal, "Zero Hunger", aims to ensure equitable access to nutritious food for all, a key challenge lies in addressing food insecurity. Food insecurity pertains not only to the quantity but also to the quality of food, with both dimensions playing a pivotal role in health outcomes. To date, much of the research on food security has focused on the household level. There remains a research gap concerning university students, a population transitioning to independence from parental support and grappling with limited resources. This study seeks to bridge this gap by extending Food Security Theory to incorporate the psychological dimension of self-esteem. Using a quantitative approach, data were collected from 452 public university students in Malaysia through a cross-sectional research design and a multi-stage cluster sampling technique. The anticipated findings will provide novel insights by linking food security with self-esteem. Such insights have implications for healthcare policy and the framing of preventive strategies against obesity. It is hoped that this research will not only contribute to the academic discourse on Food Security Theory but also serve as a foundation for refining national health policies and programs aimed at fostering a healthier lifestyle.Keywords: obesity, food security, body image, self-esteem
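A minimal sketch of a multi-stage cluster sample of the kind described, assuming a hypothetical sampling frame of students nested within faculties and universities; the stage sizes and frame are illustrative, not those used in the study:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical sampling frame: students nested in faculties within universities.
frame = pd.DataFrame({
    "university": rng.choice([f"U{i}" for i in range(1, 6)], size=5000),
    "faculty":    rng.choice([f"F{i}" for i in range(1, 9)], size=5000),
    "student_id": np.arange(5000),
})

# Stage 1: randomly select universities (primary sampling units).
universities = rng.choice(frame["university"].unique(), size=3, replace=False)
stage1 = frame[frame["university"].isin(universities)]

# Stage 2: select faculties within each chosen university;
# Stage 3: draw the final student sample within each chosen faculty.
parts = []
for uni in universities:
    uni_rows = stage1[stage1["university"] == uni]
    faculties = rng.choice(uni_rows["faculty"].unique(), size=2, replace=False)
    for fac in faculties:
        cluster = uni_rows[uni_rows["faculty"] == fac]
        parts.append(cluster.sample(n=min(40, len(cluster)), random_state=0))

sample = pd.concat(parts)
print(len(sample), "students sampled")
```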
Procedia PDF Downloads 76781 Society and Cinema in Iran
Authors: Seyedeh Rozhano Azimi Hashemi
Abstract:
There is no doubt that 'Art' is a social phenomenon, and cinema is the most social kind of art. Hence, it is clear that the relations between cinema and society can be analyzed from different aspects. In this paper, sociological cinema, which is a subdivision of sociological art, will be investigated. This term will be discussed through two main approaches. One approach focuses on the effects of cinema on society and is known as "Effects Theory"; the second, which deals with the reflection of social issues in cinema, is called "Reflection Theory". The "Reflection Theory" approach, unlike "Effects Theory", considers movies as documents in which social life is reflected; by analyzing them, the changes and tendencies of a society can be understood. Criticizing these approaches to cinema and society does not mean that they are not real. Rather, it shows that a better understanding of the relation between cinema and society requires more complex models, which should consider two aspects. First, they should be bi-linear, providing a dynamic and active relation between cinema and society; in this conception, social life and cinema have bi-linear effects on each other and thus fit into a dialectic and dynamic process. Second, they should pay attention to the role of mediating elements such as small social institutions, marketing, advertisements, cultural patterns, art genres and popular cinema in society. In the current study, the image of the middle class in Iranian cinema and the changing role of women in cinema and society, two prominent issues that cinema and society have faced from the 1979 revolution through the 1980s, are analyzed. Films, as artworks, are on the one hand reflections of social changes and, on the other hand, through their effects on society, attempt to speed up the trend of these changes. By illustrating changes in ideologies and approaches in exaggerated ways, and through its normalizing function, cinema prepares audiences and public opinion to accept these changes. Consequently, the audience is affected by this process, which is bi-linear and interactive.Keywords: Iranian Cinema, Cinema and Society, Middle Class, Woman’s Role
Procedia PDF Downloads 340780 IoT-Based Early Identification of Guava (Psidium guajava) Leaves and Fruits Diseases
Authors: Daudi S. Simbeye, Mbazingwa E. Mkiramweni
Abstract:
Plant diseases have the potential to drastically diminish the quantity and quality of agricultural products. Guava (Psidium guajava), sometimes known as the apple of the tropics, is one of the most widely cultivated fruits in tropical regions. Monitoring plant health and diagnosing illnesses is essential for sustainable agriculture and requires the inspection of visually evident patterns on plant leaves and fruits. Because of the minor variations in the symptoms of different guava illnesses, a professional opinion is required for disease diagnosis, and erroneous diagnoses may lead farmers to apply pesticides improperly, resulting in economic losses. This study proposes a method that uses artificial intelligence (AI) to detect and classify the most widespread guava plant diseases by comparing images of leaves and fruits against datasets. An ESP32-CAM module is responsible for data collection, capturing images of guava leaves and fruits. These images are compared against the datasets to diagnose plant diseases from the leaves and fruits, which is vital for the development of an effective automated agricultural system. System testing yielded highly accurate identification results (99 percent accuracy in differentiating four guava fruit diseases (Canker, Mummification, Dot, and Rust) from healthy fruit). The proposed model has been interfaced with a mobile application so that smartphones can make a quick and reliable judgment, helping farmers instantly detect diseases and prevent future production losses by taking precautions beforehand.Keywords: early identification, guava plants, fruit diseases, deep learning
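The abstract does not specify the deep learning architecture; the sketch below shows one plausible shape for the classification step, using a pretrained MobileNetV3 backbone in PyTorch with its head replaced for the four named fruit diseases plus healthy fruit. The backbone choice, class list and file path are assumptions, and the network would still need to be trained on the guava image dataset before its predictions are meaningful.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Illustrative classes: the four fruit diseases named in the abstract plus healthy fruit.
CLASSES = ["canker", "mummification", "dot", "rust", "healthy"]

# Pretrained backbone with its classification head replaced; the new head is
# randomly initialised, so the model must be fine-tuned on the guava dataset.
model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, len(CLASSES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str) -> str:
    """Return the predicted class label for one leaf or fruit image."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)          # add batch dimension
    with torch.no_grad():
        logits = model(x)
    return CLASSES[int(logits.argmax(dim=1))]

# Example (hypothetical path for an image captured by the ESP32-CAM):
# print(classify("uploads/guava_fruit_001.jpg"))
```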
Procedia PDF Downloads 76779 Impact of Self-Concept on Performance and Mental Wellbeing of Preservice Teachers
Authors: José María Agugusto-landa, Inmaculada García-Martínez, Lara Checa Domene, Óscar Gavín Chocano
Abstract:
Self-concept is the perception that a person has of themselves, of their abilities, skills, traits, and values. Self-concept is composed of different dimensions, such as academic self-concept, physical self-concept, social self-concept, emotional self-concept, and family self-concept. The relationship between the dimensions of self-concept, mental health and academic performance among future teachers is a topic of interest for educational psychology. Some studies have found that: (i) there is a positive relationship between general self-concept, academic self-concept and academic performance, that is, students who have a more positive image of themselves tend to get better grades and be more motivated to learn; (ii) there is a positive relationship between emotional intelligence, physical self-concept and healthy habits, that is, students who regulate and understand their emotions better report higher satisfaction with their physical appearance, follow a more balanced diet and engage in more physical activity. As for gender differences in the dimensions of self-concept among future teachers, some studies have found that: (i) girls tend to have a higher self-concept in the social, family and verbal dimensions, that is, they perceive themselves as more capable of relating to others, communicating effectively and receiving support from their family; (ii) boys tend to have a higher self-concept in the physical, emotional and mathematical dimensions, that is, they perceive themselves as more capable of performing physical activities, controlling their emotions and solving mathematical problems; (iii) there are no significant differences in general self-concept or academic self-concept according to gender, that is, both girls and boys have a similar perception of their global worth and academic competence.Keywords: preservice teachers, self-concept, academic performance, mental wellbeing
Procedia PDF Downloads 79778 Psychosocial Determinants of Quality of Life After Treatment For Colorectal Cancer - A Systematic Review
Authors: Lakmali Anthony, Madeline Gillies
Abstract:
Purpose: Long-term survivorship in colorectal cancer (CRC) is increasing as mortality decreases, leading to an increased focus on patient-reported outcomes such as quality of life (QoL). CRC patients often have decreased QoL even after treatment is complete. This systematic review of the literature aims to identify psychosocial factors associated with decreased QoL in post-treatment CRC patients. Methodology: This systematic review was performed in accordance with the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations. The search was conducted in MEDLINE, EMBASE, and PsycINFO using MeSH headings. The two authors screened studies for relevance and extracted data. Results: Seventeen studies were identified, including 6,272 total participants (mean = 392, 58% male) with a mean age of 60.6 years. The European Organisation for Research and Treatment of Cancer QLQ-C30 was the most common measure of QoL (n=14, 82.3%). Most studies (n=15, 88.2%) found that emotional distress correlated with poor global QoL; this was most commonly measured with the Hospital Anxiety & Depression Scale (n=11, 64.7%). Other psychosocial factors associated with QoL were lack of social support, body image, and financial difficulties. Clinicopathologic determinants included the presence of a stoma and metastasis. Conclusion: This systematic review provides a summary of the psychosocial determinants of poor QoL in post-treatment CRC patients, as well as the most commonly reported measures of these. An understanding of these potentially modifiable determinants of poor outcome is pivotal to the provision of quality, patient-centred care in surgical oncology.Keywords: colorectal cancer, cancer surgery, quality of life, oncology, social determinants
Procedia PDF Downloads 89