Search results for: Digital Image Correlation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8601

7131 Correlation between the Ratios of House Dust Mite-Specific IgE/Total IgE and Asthma Control Test Score as a Biomarker of Immunotherapy Response Effectiveness in Pediatric Allergic Asthma Patients

Authors: Bela Siska Afrida, Wisnu Barlianto, Desy Wulandari, Ery Olivianto

Abstract:

Background: Allergic asthma, caused by IgE-mediated allergic reactions, remains a global health issue with high morbidity and mortality rates. Immunotherapy is the only etiology-based approach to treating asthma, but no standard biomarkers have been established to evaluate the therapy's effectiveness. This study aims to determine the correlation between the ratio of serum house dust mite (HDM)-specific IgE to total IgE and the Asthma Control Test (ACT) score as a biomarker of the response to immunotherapy in pediatric allergic asthma patients. Patients and Methods: This retrospective cohort study involved 26 pediatric allergic asthma patients who underwent HDM-specific subcutaneous immunotherapy for 14 weeks at the Pediatric Allergy Immunology Outpatient Clinic at Saiful Anwar General Hospital, Malang. Serum levels of HDM-specific IgE and total IgE were measured before and after immunotherapy using chemiluminescence immunoassay and enzyme-linked immunosorbent assay (ELISA) methods. Changes in asthma control were assessed using the ACT score. The Wilcoxon signed-rank test and Spearman correlation test were used for data analysis. Results: There were 14 boys and 12 girls with a mean age of 6.48 ± 2.54 years. Serum HDM-specific IgE levels decreased significantly from before immunotherapy (9.88 ± 5.74 kUA/L) to 14 weeks after immunotherapy (4.51 ± 3.98 kUA/L), p < 0.001. Serum total IgE levels also decreased significantly, from 207.6 ± 120.8 IU/mL before immunotherapy to 109.83 ± 189.39 IU/mL 14 weeks after immunotherapy, p < 0.001. The ratio of serum HDM-specific IgE to total IgE decreased significantly from 0.063 ± 0.05 before immunotherapy to 0.041 ± 0.039 after 14 weeks, p = 0.012. There was also a significant increase in ACT scores after immunotherapy (15.5 ± 1.79 before vs. 20.96 ± 2.049 after, p < 0.001). The correlation test showed a weak negative correlation between the HDM-specific IgE/total IgE ratio and the ACT score (p = 0.034, r = -0.29). Conclusion: This study showed that a decrease in HDM-specific IgE levels, total IgE levels, and the HDM-specific IgE/total IgE ratio, together with an increase in ACT score, was observed after 14 weeks of HDM-specific subcutaneous immunotherapy. The weak negative correlation between the HDM-specific IgE/total IgE ratio and the ACT score suggests that this ratio can serve as a potential biomarker of the effectiveness of immunotherapy in pediatric allergic asthma patients.
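The Spearman rank correlation used in the analysis can be sketched in a few lines of Python; the IgE-ratio/ACT pairs below are hypothetical values for illustration only, not the study's data.

```python
def rank(xs):
    # Ranks starting at 1; assumes no ties, which keeps the sketch short.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for pos, i in enumerate(order, start=1):
        ranks[i] = float(pos)
    return ranks

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical IgE-ratio / ACT-score pairs (not the study's data):
ratios = [0.08, 0.06, 0.05, 0.03]
act = [14, 17, 19, 22]
print(spearman(ratios, act))  # -1.0 for this perfectly monotonic sample
```

On these hypothetical, perfectly monotonic pairs the coefficient is exactly -1; the study's real data yielded the much weaker r = -0.29.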

Keywords: HDM-specific IgE/total IgE ratio, ACT score, immunotherapy, allergic asthma

Procedia PDF Downloads 48
7130 The Study of Rapid Entire Body Assessment and Quick Exposure Check Correlation in an Engine Oil Company

Authors: Mohammadreza Ashouri, Majid Motamedzade

Abstract:

Rapid Entire Body Assessment (REBA) and Quick Exposure Check (QEC) are two general methods for assessing the risk factors of work-related musculoskeletal disorders (WMSDs). This study aimed to compare the ergonomic risk assessment outputs from QEC and REBA in terms of agreement in the distribution of postural loading scores based on an analysis of working postures. This cross-sectional study was conducted in an engine oil company in which 40 jobs were studied. A trained occupational health practitioner observed all jobs. Job information was collected to ensure the completion of the ergonomic risk assessment tools, including QEC and REBA. The results revealed a significant correlation between the final scores (r = 0.731) and the action levels (r = 0.893) of the two applied methods. Comparison of the action levels and final scores of the two methods showed no significant difference among working departments. Most of the studied postures acquired low and moderate risk levels in the QEC assessment (low risk = 20%, moderate risk = 50%, high risk = 30%) and in the REBA assessment (low risk = 15%, moderate risk = 60%, high risk = 25%). There is a significant correlation between the two methods: they agree strongly in identifying risky jobs and determining the potential risk for the incidence of WMSDs. Therefore, researchers may apply both methods interchangeably for postural risk assessment in appropriate working environments.

Keywords: observational method, QEC, REBA, musculoskeletal disorders

Procedia PDF Downloads 347
7129 Implementation of Edge Detection Based on Autofluorescence Endoscopic Image on Field Programmable Gate Array

Authors: Hao Cheng, Zhiwu Wang, Guozheng Yan, Pingping Jiang, Shijia Qin, Shuai Kuang

Abstract:

Autofluorescence Imaging (AFI) is a technology that has emerged in recent years for detecting early carcinogenesis of the gastrointestinal tract. Compared with traditional white light endoscopy (WLE), this technology greatly improves the detection accuracy of early carcinogenesis because the colors of normal tissues differ from those of cancerous tissues; thus, edge detection can distinguish them in grayscale images. In this paper, the traditional Sobel edge detection method is optimized for the gastrointestinal environment by adding adaptive thresholding and morphological processing. All of the processing is implemented on our self-designed system based on the OV6930 image sensor and a Field Programmable Gate Array (FPGA). The system can capture the gastrointestinal image taken by the lens in real time and detect edges. The final experiments verified the feasibility of our system and the effectiveness and accuracy of the edge detection algorithm.
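A software sketch of the Sobel gradient with an image-adaptive threshold might look as follows. The threshold rule (mean plus k standard deviations of the gradient magnitude) is an assumed form, since the abstract does not specify one, and the FPGA pipeline and morphological post-processing are omitted.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv3x3(img, kernel):
    # "Valid" 3x3 sliding-window correlation; no padding, so the
    # output is two pixels smaller in each dimension.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * kernel).sum()
    return out

def sobel_edges(img, k=1.0):
    img = np.asarray(img, dtype=float)
    gx = conv3x3(img, SOBEL_X)
    gy = conv3x3(img, SOBEL_Y)
    mag = np.hypot(gx, gy)
    # Adaptive threshold derived from this image's own gradient
    # statistics rather than a fixed constant (assumed form).
    thresh = mag.mean() + k * mag.std()
    return mag > thresh
```

On hardware, the same 3x3 window logic maps naturally onto line buffers and a small multiply-accumulate array, which is why Sobel is a common FPGA edge detector.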

Keywords: AFI, edge detection, adaptive threshold, morphological processing, OV6930, FPGA

Procedia PDF Downloads 208
7128 Self-Serving Anchoring of Self-Judgments

Authors: Elitza Z. Ambrus, Bjoern Hartig, Ryan McKay

Abstract:

Individuals’ self-judgments might be malleable and influenced by comparison with a random value. On the one hand, self-judgments reflect our self-image, which is typically considered stable in adulthood. Indeed, people strive hard to maintain a fixed, positive moral image of themselves. On the other hand, research has shown the robustness of the so-called anchoring effect on judgments and decisions. The anchoring effect refers to the influence of a previously considered comparative value (anchor) on a subsequent absolute judgment and reveals that individuals’ estimates of various quantities are flexible and can be influenced by a salient random value. The present study extends the anchoring paradigm to the domain of the self. We also investigate whether participants are more susceptible to self-serving anchors, i.e., anchors that enhance participants’ self-image, especially their moral self-image. In a pre-registered study via the online platform Prolific, 249 participants (156 females, 89 males, 3 other, and 1 who preferred not to specify their gender; M = 35.88, SD = 13.91) ranked themselves on eight personality characteristics. In the anchoring conditions, respondents were asked to first indicate whether they thought they would rank higher or lower than a given anchor value before providing their estimated rank relative to 100 other anonymous participants. A high and a low anchor value were employed to differentiate between anchors in a desirable (self-serving) direction and anchors in an undesirable (self-diminishing) direction. In the control treatment, there was no comparison question. Subsequently, participants provided their self-rankings on the eight personality traits, with two personal characteristics for each combination of the factors desirable/undesirable and moral/non-moral. We found evidence of an anchoring effect for self-judgments. Moreover, anchoring was more effective when people were anchored in a self-serving direction: the anchoring effect was enhanced when supporting a more favorable self-view and mitigated (even reversed) when implying a deterioration of the self-image. The self-serving anchoring was more pronounced for moral than for non-moral traits. The data also provided evidence in support of a better-than-average effect in general, as well as a magnified better-than-average effect for moral traits. Taken together, these results suggest that self-judgments might not be as stable in adulthood as previously thought. In addition, considerations of constructing and maintaining a positive self-image might interact with the anchoring effect on self-judgments. Potential implications of our results concern the construction and malleability of self-judgments as well as the psychological mechanisms shaping anchoring.

Keywords: anchoring, better-than-average effect, self-judgments, self-serving anchoring

Procedia PDF Downloads 160
7127 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. 
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
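The intensity pre-processing described above (clipping to the (-1000, 400) HU window, normalization, zero-centering) can be sketched as follows; resampling, cropping, and the U-net lung segmentation are omitted from this minimal sketch.

```python
import numpy as np

HU_MIN, HU_MAX = -1000.0, 400.0  # intensity window from the abstract

def preprocess_ct(volume_hu):
    # Clip to the Hounsfield window, scale to [0, 1], then zero-center.
    v = np.clip(np.asarray(volume_hu, dtype=float), HU_MIN, HU_MAX)
    v = (v - HU_MIN) / (HU_MAX - HU_MIN)
    return v - v.mean()
```

Clipping first means that extreme values (metal artifacts, air outside the body) cannot dominate the normalization range, a common choice for lung-window CT pipelines.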

Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format

Procedia PDF Downloads 60
7126 Digital Transformation and Digitalization of Public Administration

Authors: Govind Kumar

Abstract:

The concept of ‘e-governance’, brought about by the new wave of reforms, namely ‘LPG’, in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation provides governments with qualitative decisions, optimization in the rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based application interfaces. ICT-based applications and technologies have enormous potential for effecting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments with the detection of illegal mining, tackling deforestation, and managing freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving the five Es of e-governance (effective, efficient, easy, empower, and equity) and the six Rs of sustainable development (reduce, reuse, recycle, recover, redesign, and remanufacture). If such digital transformation gains traction within the government framework, it will replace traditional administration with the digitalization of public administration. On the other hand, it has brought a new set of challenges for governments, such as the digital divide, e-illiteracy, and the technological divide, and problems such as handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, and phishing. Therefore, it is essential to bring in a judicious mixture of technological and humanistic interventions to address these issues.
This is because technology lacks an emotional quotient, and the administration does not work like technology; both are self-effacing unless a blend of technology and a humane face is brought into the administration. The paper will empirically analyze the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration, on the basis of a synthesis of two case studies undertaken from two diverse fields of administration, and present a future framework for the study.

Keywords: digital transformation, electronic governance, public administration, knowledge framework

Procedia PDF Downloads 83
7125 Instructional Design Strategy Based on Stories with Interactive Resources for Learning English in Preschool

Authors: Vicario Marina, Ruiz Elena, Peredo Ruben, Bustos Eduardo

Abstract:

The development group of Educational Computing of the National Polytechnic Institute (IPN) in Mexico has been developing interactive resources at the preschool level in an effort to improve learning in the Child Development Centers (CENDI). This work describes both a didactic architecture and a strategy for teaching English with digital stories using interactive resources available through a Web repository designed to be used on mobile platforms. It will be accessible initially to 500 children and worldwide by the end of 2015.

Keywords: instructional design, interactive resources, digital educational resources, story based English teaching, preschool education

Procedia PDF Downloads 452
7124 Evaluation of the Digitalization in Graphic Design in Turkey

Authors: Veysel Seker

Abstract:

Graphic design and virtual reality have been shaped by digital and technological development over the last decades. This study aims to compare and evaluate digitalization and virtual reality against the traditional and classical methods of the graphic design sector in Turkey. Qualitative and quantitative studies and research were discussed and identified according to the evaluated results of the literature surveys. Moreover, the study showed that the competency gap between graphic design schools and the field should be determined and well studied. The competencies of traditional graphic designers will face a major challenge in the transition into the evolving digital graphic design world.

Keywords: digitalization, evaluation, graphic designing, virtual reality

Procedia PDF Downloads 122
7123 The Need for Career Education Based on Self-Esteem in Japanese Youths

Authors: Kumiko Inagaki

Abstract:

Because of the rapidly changing social and industrial world, career education in Japan has recently gained popularity with the government’s support. However, it has not fostered proactive mindsets and attitudes in youths. This paper first provides a background of career education in Japan. Next, based on the International Survey of Youth Attitude, Japanese youths’ views of themselves and their future were identified and then compared to the views of youths in six other countries. Assessments of Japanese youths’ feelings of self-satisfaction and hopes for the future returned very low scores. Suggestions are offered on career education in order to promote a positive self-image.

Keywords: career education, self-esteem, self-image, youth attitude

Procedia PDF Downloads 465
7122 Feeling Sorry for Some Creditors

Authors: Hans Tjio, Wee Meng Seng

Abstract:

The interaction of contract and property has always been a concern in corporate and commercial law, where there are internal structures created that may not match the externally perceived image generated by the labels attached to those structures. We will focus, in particular, on the priority structures created by affirmative asset partitioning, which have increasingly come under challenge by those attempting to negotiate around them. The most prominent has been the AT1 bonds issued by Credit Suisse which were wiped out before its equity when the troubled bank was acquired by UBS. However, this should not have come as a surprise to those whose “bonds” had similarly been “redeemed” upon the occurrence of certain reference events in countries like Singapore, Hong Kong and Taiwan during their Minibond crisis linked to US sub-prime defaults. These were derivatives classified as debentures and sold as such. At the same time, we are again witnessing “liabilities” seemingly ranking higher up the balance sheet ladder, finding themselves lowered in events of default. We will examine the mechanisms holders of perpetual securities or preference shares have tried to use to protect themselves. This is happening against a backdrop that sees a rise in the strength of private credit and inter-creditor conflicts. The restructuring regime of the hybrid scheme in Singapore now, while adopting the absolute priority rule in Chapter 11 as the quid pro quo for creditor cramdown, does not apply to shareholders and so exempts them from cramdown. Complicating the picture further, shareholders are not exempted from cramdown in the Dutch scheme, but it adopts a relative priority rule. At the same time, the important UK Supreme Court decision in BTI 2014 LLC v Sequana [2022] UKSC 25 has held that directors’ duties to take account of creditor interests are activated only when a company is almost insolvent. All this has been complicated by digital assets created by businesses. 
Investors are quite happy to have them classified as property (like a thing) when it comes to their transferability, but then, when the issuer defaults, to have them seen as a claim on the business (as a chose in action), which puts them at the level of a creditor. But these hidden interests will not show themselves on an issuer’s balance sheet until it is too late to be considered, and yet, if accepted, they may also prevent any meaningful restructuring.

Keywords: asset partitioning, creditor priority, restructuring, BTI v Sequana, digital assets

Procedia PDF Downloads 62
7121 A Challenge of the 3rd Millennium: The Emotional Intelligence Development

Authors: Florentina Hahaianu, Mihaela Negrescu

Abstract:

The analysis of the positive and negative effects of technology use and abuse in Generation Z comes as a necessity in order to understand their ever-changing emotional development needs. The article quantitatively analyzes the findings of a sociological questionnaire administered to a group of students in the social sciences. It aimed to identify the changes generated by the use of digital resources in the development of emotional intelligence. Among the outcomes of our study, we note a predilection for IT-related activities (be they social, learning, or entertainment) that undermines the manifestation of emotional intelligence, especially a reluctance to engage in face-to-face interaction. In this context, the issue of emotional intelligence development comes into focus as a solution to compensate for the undesirable effects that contact with technology has on this generation.

Keywords: digital resources, emotional intelligence, generation Z, students

Procedia PDF Downloads 177
7120 Oil-Oil Correlation Using Polar and Non-Polar Fractions of Crude Oil: A Case Study in Iranian Oil Fields

Authors: Morteza Taherinezhad, Ahmad Reza Rabbani, Morteza Asemani, Rudy Swennen

Abstract:

Oil-oil correlation is one of the most important issues in geochemical studies, enabling oils to be classified genetically. Oil-oil correlation is generally based on the non-polar fractions of crude oil (e.g., saturate and aromatic compounds). Despite several advantages, the drawback of using these compounds is their susceptibility to being affected by secondary processes. The polar fraction of crude oil (e.g., asphaltenes) has characteristics similar to kerogen, and this structural similarity is preserved during migration, thermal maturation, biodegradation, and water washing. These structural characteristics can therefore be considered a useful correlation parameter, and it can be concluded that asphaltenes from different reservoirs with the same genetic signatures have a similar origin. Hence, in this contribution, an integrated study using both the non-polar and polar fractions of oil was performed to exploit the merits of both fractions. Five oil samples from oil fields in the Persian Gulf were studied. The structural characteristics of the extracted asphaltenes were investigated by Fourier transform infrared (FTIR) spectroscopy. Graphs based on aliphatic and aromatic compounds (the predominant compounds in the asphaltene structure) and on sulphoxide and carbonyl functional groups (representative of sulphur and oxygen abundance in asphaltenes) were used to compare asphaltene structures in different samples. The non-polar fractions were analyzed by GC-MS. The study of asphaltenes showed that the oil samples comprise two oil families with distinct genetic characteristics. The first oil family consists of the Salman and Reshadat oil samples, and the second oil family consists of the Resalat, Siri E, and Siri D oil samples. To validate these results, biomarker parameters were employed, and this approach fully confirmed the asphaltene-based grouping. Based on biomarker analyses, both oil families have a marine source rock, with marl and carbonate source rocks as the source rocks for the first and second oil families, respectively.

Keywords: biomarker, non-polar fraction, oil-oil correlation, petroleum geochemistry, polar fraction

Procedia PDF Downloads 114
7119 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, as well as a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each of the variations was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models at different specified conditions were fitted, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, which establishes that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
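The paper derives its joint and conditional LM tests analytically. As a generic illustration only, and not the authors' exact statistics, a Breusch-Pagan-style LM statistic for heteroscedasticity (n·R² from an auxiliary regression of squared residuals on candidate variance drivers) can be sketched as:

```python
import numpy as np

def lm_heteroscedasticity(resid, z):
    # Breusch-Pagan-style LM statistic: n * R^2 from regressing the
    # squared residuals on the candidate variance drivers z.
    resid = np.asarray(resid, dtype=float)
    z = np.asarray(z, dtype=float)
    n = resid.shape[0]
    u2 = resid ** 2
    X = np.column_stack([np.ones(n), z])
    beta, *_ = np.linalg.lstsq(X, u2, rcond=None)
    fitted = X @ beta
    ss_res = float(((u2 - fitted) ** 2).sum())
    ss_tot = float(((u2 - u2.mean()) ** 2).sum())
    if ss_tot == 0.0:  # squared residuals are constant: no evidence
        return 0.0
    r2 = 1.0 - ss_res / ss_tot
    return n * r2  # asymptotically chi-square under homoscedasticity
```

Under the null of homoscedasticity the statistic is asymptotically chi-square with degrees of freedom equal to the number of variance drivers, which is how size and power comparisons like those in the abstract are set up.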

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 128
7118 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation, and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity, and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given this variability, manual surveys have shown inconsistencies in distress classification and measurement, which can impact the planning of pavement maintenance, rehabilitation, and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods refer to systems that identify, classify, and quantify pavement distresses through processes that require no or very minimal human intervention; this principally involves the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment. Variations in the types and severity of distresses, differing pavement surface textures and colors, and the presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves the collection, interpretation, and processing of surface images to identify the type, quantity, and severity of surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress surveys using digital images reviewed in this paper can assist the airport industry in developing a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index, which can be used as a means of assessing pavement condition and predicting pavement performance. Airport owners can use it to determine the type of pavement maintenance and rehabilitation in a more consistent way.
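As the paper notes, no standard protocol exists for computing crack density. One simple assumed definition, the cracked-area fraction of a binary crack map produced by an automated survey, can be sketched as:

```python
import numpy as np

def crack_density(crack_mask):
    # Fraction of the surveyed surface flagged as cracked; the pixel
    # ground size cancels out of this area ratio.
    crack_mask = np.asarray(crack_mask, dtype=bool)
    return crack_mask.sum() / crack_mask.size
```

Agencies may instead define crack density as crack length per unit area or weight it by severity; the sketch only illustrates the area-ratio idea.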

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 174
7117 A Pragmatic Approach of Memes Created in Relation to the COVID-19 Pandemic

Authors: Alexandra-Monica Toma

Abstract:

Internet memes are an element of computer-mediated communication and an important part of online culture that combines text and image in order to generate meaning. This term, coined by Richard Dawkins, refers to more than a mere way to briefly communicate ideas or emotions, naming instead a complex and intensely perpetuated phenomenon in the virtual environment. This paper approaches memes as a cultural artefact and a virtual trope that mirrors societal concerns and issues, and analyses the pragmatics of their use. Memes have to be analysed in series, usually relating to some image macros, which is proof of the interplay between imitation and creativity in the meme-writing process. We believe that their potential to become viral relates to three key elements: adaptation to context, reference to a successful meme series, and humour (jokes, irony, sarcasm), with various pragmatic functions. The study also uses the concept of multimodality and stresses how the memes' text interacts with the image, discussing three types of relations: symmetry, amplification, and contradiction. Moreover, the paper shows that memes can be employed as speech acts with illocutionary force when the interaction between text and image is enriched through the connection to a specific situation. The features mentioned above are analysed in a corpus that consists of memes related to the COVID-19 pandemic. This corpus shows them to be highly adaptable to context, which helps build a feeling of connection and belonging in an otherwise tremendously fragmented world. Some of them are created from well-known image macros, and their humour results from an intricate dialogue between texts and contexts. Memes created in relation to the COVID-19 pandemic can be considered speech acts and are often used as such, as demonstrated in the paper. Consequently, this paper tackles the key features of memes, makes a thorough analysis of the memes' sociocultural, linguistic, and situational contexts, and emphasizes their intertextuality, with special emphasis on their illocutionary potential.

Keywords: context, memes, multimodality, speech acts

Procedia PDF Downloads 178
7116 Image Segmentation: New Methods

Authors: Flaurence Benjamain, Michel Casperance

Abstract:

We present in this paper, first, a comparative study of three mathematical theories for achieving the fusion of information sources. This study aims to identify the characteristics inherent in the theory of possibilities, the theory of belief functions (DST), and the theory of plausible and paradoxical reasoning, in order to establish a choice strategy that allows us to adopt the most appropriate theory for a given fusion problem, taking into account the acquired information and the imperfections that accompany it. Using the new theory of plausible and paradoxical reasoning, also called Dezert-Smarandache Theory (DSmT), to fuse multi-source information requires, as a first step, the generation of composite events, which is, in general, difficult. Thus, we present in this paper a new approach to constructing pertinent paradoxical classes based on gray-level histograms, which also allows the cardinality of the hyper-power set to be reduced. Secondly, we developed a new technique for ordering and coding generalized focal elements. This method is exploited, in particular, to calculate the Dezert-Smarandache cardinality. Then, we describe a classification experiment on a remote sensing image that illustrates the given methods, and we compare the result obtained by the DSmT with those resulting from the use of the DST and the theory of possibilities.
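For the DST baseline against which the DSmT is compared, Dempster's rule of combination over a small frame of discernment can be sketched as follows; the land-cover classes and mass values are hypothetical.

```python
def dempster_combine(m1, m2):
    # Dempster's rule: multiply masses of all focal-element pairs,
    # keep non-empty intersections, and renormalize by 1 - conflict.
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical land-cover evidence from two sources:
m1 = {frozenset({"urban"}): 0.6, frozenset({"urban", "water"}): 0.4}
m2 = {frozenset({"urban"}): 0.5, frozenset({"urban", "water"}): 0.5}
fused = dempster_combine(m1, m2)
print(round(fused[frozenset({"urban"})], 10))  # 0.8
```

DSmT departs from this rule precisely by retaining (rather than renormalizing away) the conflicting, "paradoxical" combinations, which is why generating composite events is central to the approach described in the abstract.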

Keywords: segmentation, image, approach, vision computing

Procedia PDF Downloads 257
7115 Digital Subsistence of Cultural Heritage: Digital Media as a New Dimension of Cultural Ecology

Authors: Dan Luo

Abstract:

As climate change can exacerbate the exposure of cultural heritage to climatic stressors, scholars pin their hopes on digital technology helping sites avoid surprises. The virtual museum has been regarded as a highly effective technology that enables people to gain an enjoyable visiting experience and immersive information about cultural heritage. The technology clearly reproduces images of tangible cultural heritage, and the aesthetic experience created by new media helps consumers escape from a real environment full of uncertainty. A new cultural anchor has appeared outside the cultural sites. This article synthesizes the international literature on the virtual museum by developing CiteSpace diagrams focusing on tangible cultural heritage, and identifies an alarming situation that has emerged in the process of addressing climate change: (1) digital collections are distinct cultural assets for the public; (2) the media ecology changes the way people think about and encounter cultural heritage; (3) cultural heritage may live forever in the digital world. This article provides information on a typical practice for managing cultural heritage in a changing climate: the Dunhuang Mogao Grottoes in the far northwest of China, a cultural heritage site famous for its remarkable and sumptuous murals. This monument is a typical synthesis of art containing 735 Buddhist temples, and was listed by UNESCO as one of the World Cultural Heritage sites. The caves contain some extraordinary examples of Buddhist art spanning a period of 1,000 years: the architectural forms, the sculptures in the caves, and the murals on the walls all together constitute a wonderful aesthetic experience. Unfortunately, this magnificent treasure cave has been threatened by increasingly frequent dust storms and precipitation.
The Dunhuang Academy has been using digital technology since the last century to preserve this immovable cultural heritage, especially the murals in the caves. Dunhuang culture has since become a new media culture, after the art was introduced to a world audience through exhibitions, VR, video, etc. The paper adopts a qualitative research method, using NVivo software to code the collected material in order to answer this question. The author paid close attention to fieldwork in Dunhuang City, including participation in 10 exhibitions and 20 Dunhuang-themed online salons. In addition, 308 visitors (aged 6-75) who are fans of the art and have experienced Dunhuang culture online were interviewed. These interviewees have been exposed to Dunhuang culture through different media, and they are acutely aware of the threat to this cultural heritage. The conclusion is that the unique halo of the cultural heritage is always emphasized, and that digital media breeds twin brothers of the cultural heritage. In addition, digital media make it possible for cultural heritage to reintegrate into the daily life of the masses. Visitors gain the opportunity to imitate the mural figures through enlarged or emphasized images, but also lose the perspective of understanding the whole cultural life. New media construct a new life aesthetics apart from the Authorized Heritage Discourse.

Keywords: cultural ecology, digital twins, life aesthetics, media

Procedia PDF Downloads 63
7114 Epileptic Seizure Prediction by Exploiting Signal Transitions Phenomena

Authors: Mohammad Zavid Parvez, Manoranjan Paul

Abstract:

A seizure prediction method is proposed that extracts global features, using phase correlation between adjacent epochs to detect relative changes, and local features, using fluctuation/deviation within an epoch to determine fine changes in different EEG signals. A classifier and a regularization technique are applied to reduce false alarms and improve the overall prediction accuracy. The experiments show that the proposed method outperforms state-of-the-art methods, providing high prediction accuracy (i.e., 97.70%) with a low false alarm rate on EEG signals from different brain locations in a benchmark data set.
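
The global feature extraction described above rests on phase correlation between adjacent epochs. A minimal sketch of FFT-based phase correlation, using synthetic toy signals rather than the paper's EEG data:

```python
import numpy as np

def phase_correlation(epoch_a, epoch_b):
    """Normalised cross-power spectrum of two equal-length epochs;
    the peak of its inverse FFT locates the relative (circular) shift,
    and its sharpness reflects how similar the epochs are."""
    fa, fb = np.fft.fft(epoch_a), np.fft.fft(epoch_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12      # keep phase information only
    corr = np.real(np.fft.ifft(cross))
    return int(np.argmax(corr)), corr

# Toy check: a noise epoch and a circularly shifted copy of it;
# the correlation peak recovers the shift.
rng = np.random.default_rng(0)
a = rng.standard_normal(256)
b = np.roll(a, 10)
shift, _ = phase_correlation(b, a)
```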

Keywords: epilepsy, seizure, phase correlation, fluctuation, deviation

Procedia PDF Downloads 451
7113 Propagation of DEM Varying Accuracy into Terrain-Based Analysis

Authors: Wassim Katerji, Mercedes Farjas, Carmen Morillo

Abstract:

Terrain-based analysis produces derived products from an input DEM, and these products are needed to perform various analyses. To use these products efficiently in decision-making, their accuracies must be estimated systematically. This paper proposes a procedure to assess the accuracy of these derived products by calculating the accuracy of the slope dataset and its significance, taking the accuracy of the DEM as an input. Based on the output of previously published research on modeling the relative accuracy of a DEM, specifically the ASTER and SRTM DEMs with Lebanon coverage as the study area, the analyses showed that ASTER has low significance over the majority of the area, where only 2% of the modeled terrain has 50% or more significance. On the other hand, SRTM showed better significance, with 37% of the modeled terrain having 50% or more significance. Statistical analysis showed that the accuracy of the slope dataset, calculated on a cell-by-cell basis, is highly correlated with the accuracy of the input DEM. However, this correlation becomes lower between the slope accuracy and the slope significance, whereas it becomes much higher between the modeled slope and the slope significance.
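
The core of such a procedure is propagating the DEM's vertical accuracy into the slope dataset. A hedged sketch using central differences and first-order error propagation; the cell size and accuracy values are illustrative, not the ASTER/SRTM figures from the study:

```python
import numpy as np

def slope_and_error(dem, cell, sigma_z):
    """Slope (degrees) per cell from central differences, plus a
    first-order estimate of its standard error propagated from the
    DEM vertical accuracy sigma_z (same length unit as cell)."""
    dzdx = np.gradient(dem, cell, axis=1)
    dzdy = np.gradient(dem, cell, axis=0)
    g = np.hypot(dzdx, dzdy)                 # gradient magnitude
    slope = np.degrees(np.arctan(g))
    # Variance of a central difference (z[i+1] - z[i-1]) / (2*cell):
    var_d = sigma_z**2 / (2 * cell**2)
    # First-order propagation through slope = arctan(sqrt(p^2 + q^2)):
    sigma_slope = np.degrees(np.sqrt(var_d) / (1 + g**2))
    return slope, sigma_slope

# Illustrative values: a plane rising 15 m per 30 m cell, DEM accuracy 2 m
dem = np.tile(15.0 * np.arange(6), (4, 1))
slope, err = slope_and_error(dem, cell=30.0, sigma_z=2.0)
```

Comparing `err` against a significance threshold cell by cell is one simple way to map where the derived slope can be trusted.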

Keywords: terrain-based analysis, slope, accuracy assessment, Digital Elevation Model (DEM)

Procedia PDF Downloads 429
7112 Randomness in Cybertext: A Study on Computer-Generated Poetry from the Perspective of Semiotics

Authors: Hongliang Zhang

Abstract:

The use of chance procedures and randomizers in poetry writing can be traced back to surrealist works, which, in appealing to Sigmund Freud's theories, were still logocentric. In the 1960s, random permutation and combination were used extensively by the Oulipo, John Cage, and Jackson Mac Low, further deconstructing the metaphysical presence of writing. Today, randomly generated digital poetry has emerged as a genre of cybertext that is co-authored by its readers. At the same time, the classical theories have been updated by cybernetics and media theories. N. Katherine Hayles reworked Jacques Lacan's concept of 'floating signifiers' into 'flickering signifiers', arguing that the technology per se has become a part of textual production. This paper makes a historical review of computer-generated poetry from the perspective of semiotics, emphasizing that randomly generated digital poetry, which hands the dual tasks of interpretation and writing over to the readers, demonstrates the intervention of media technology in literature. With the participation of computerized algorithms and programming languages, poems randomly generated by computers have not only blurred the boundary between encoder and decoder but also raised the issue of the human-machine relationship. It is also a significant feature of cybertext that the productive process of the text is full of randomness.
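
The random permutation and substitution at the heart of such generators can be sketched in a few lines; the template and lexicon below are invented for illustration and are not drawn from any of the cited works:

```python
import random

# A toy cybertext: each seed is one 'reading' that permutes the lines
# and fills the slots, so the reader co-authors the realised poem.
TEMPLATE = [
    "the {noun} {verb} in {place}",
    "no {noun} remembers the {noun2}",
    "{place} is a {noun2} of signs",
]
LEXICON = {
    "noun": ["signal", "reader", "machine"],
    "noun2": ["meaning", "code", "silence"],
    "verb": ["flickers", "drifts", "loops"],
    "place": ["the network", "the screen", "the archive"],
}

def generate_poem(seed):
    rng = random.Random(seed)          # deterministic per 'reading'
    lines = TEMPLATE[:]
    rng.shuffle(lines)                 # random permutation of lines
    return [
        line.format(**{slot: rng.choice(words)
                       for slot, words in LEXICON.items()})
        for line in lines
    ]
```

The same seed always reproduces the same poem, while different seeds yield different permutations and combinations, which is the sense in which the text's productive process is random yet repeatable.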

Keywords: cybertext, digital poetry, poetry generator, semiotics

Procedia PDF Downloads 160
7111 Leveraging Mobile Apps for Citizen-Centric Urban Planning: Insights from Tajawob Implementation

Authors: Alae El Fahsi

Abstract:

This study explores the ‘Tajawob’ app's role in urban development, demonstrating how mobile applications can empower citizens and facilitate urban planning. Tajawob serves as a digital platform for community feedback, engagement, and participatory governance, addressing urban challenges through innovative tech solutions. This research synthesizes data from a variety of sources, including user feedback, engagement metrics, and interviews with city officials, to assess the app’s impact on citizen participation in urban development in Morocco. By integrating advanced data analytics and user experience design, Tajawob has bridged the communication gap between citizens and government officials, fostering a more collaborative and transparent urban planning process. The findings reveal a significant increase in civic engagement, with users actively contributing to urban management decisions, thereby enhancing the responsiveness and inclusivity of urban governance. Challenges such as digital literacy, infrastructure limitations, and privacy concerns are also discussed, providing a comprehensive overview of the obstacles and opportunities presented by mobile app-based citizen engagement platforms. The study concludes with strategic recommendations for scaling the Tajawob model to other contexts, emphasizing the importance of adaptive technology solutions in meeting the evolving needs of urban populations. This research contributes to the burgeoning field of smart city innovations, offering key insights into the role of digital tools in facilitating more democratic and participatory urban environments.

Keywords: smart cities, digital governance, urban planning, strategic design

Procedia PDF Downloads 39
7110 Image Processing Approach for Detection of Three-Dimensional Tree-Rings from X-Ray Computed Tomography

Authors: Jorge Martinez-Garcia, Ingrid Stelzner, Joerg Stelzner, Damian Gwerder, Philipp Schuetz

Abstract:

Tree-ring analysis is an important part of the quality assessment and dating of (archaeological) wood samples. It provides quantitative data about the whole anatomical ring structure, which can be used, for example, to measure the impact of a fluctuating environment on tree growth, for the dendrochronological analysis of archaeological wooden artefacts, and to estimate the wood's mechanical properties. Despite advances in computer vision and edge recognition algorithms, detection and counting of annual rings are still limited to 2D datasets and in most cases performed manually, which is a time-consuming, tedious task that depends strongly on the operator's experience. This work presents an image processing approach to detect the whole 3D tree-ring structure directly from X-ray computed tomography imaging data. The approach relies on a modified Canny edge detection algorithm, which captures fully connected tree-ring edges throughout the measured image stack, and is validated on X-ray computed tomography data taken from six wood species.
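
One stage of the Canny algorithm that matters for capturing fully connected ring edges is hysteresis thresholding, which keeps weak edge pixels only when they connect to strong ones. A minimal 2D sketch of that stage (the thresholds and gradient values are illustrative, not the authors' modification):

```python
import numpy as np
from collections import deque

def hysteresis(grad_mag, low, high):
    """Double-threshold hysteresis (the final Canny stage): keep weak
    edge pixels only if they are 8-connected to a strong edge, so ring
    edges stay connected instead of fragmenting."""
    strong = grad_mag >= high
    weak = grad_mag >= low
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    h, w = grad_mag.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and weak[ny, nx] and not edges[ny, nx]):
                    edges[ny, nx] = True
                    q.append((ny, nx))
    return edges

# Toy gradient row: strong pixel, attached weak pixel, isolated weak pixel
grad = np.array([[0.9, 0.4, 0.0, 0.4]])
edges = hysteresis(grad, low=0.3, high=0.8)
```

Extending the connectivity check to 26 neighbours in a 3D stack is the natural route to edges that are connected through the volume rather than slice by slice.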

Keywords: ring recognition, edge detection, X-ray computed tomography, dendrochronology

Procedia PDF Downloads 200
7109 Investigating Best Strategies Towards Creating Alternative Assessment in Literature

Authors: Sandhya Rao Mehta

Abstract:

As ChatGPT and other forms of Artificial Intelligence (AI) are becoming part of our regular academic world, the consequences are gradually being discussed. The extent to which an essay written by a student is itself of any value if it has been produced by some form of AI is perhaps central to this discourse. A larger question is whether writing should be taught as an academic skill at all. In literature classrooms, this has major consequences, as writing a traditional paper is still the single most preferred form of assessment. This study suggests that it is imperative to investigate alternative forms of assessment in literature, not only because the existing forms can be written by AI but, in a larger sense, because students are increasingly skeptical of the purpose of such work. The extent to which an essay actually helps students professionally is a question that academia has not yet answered. This paper suggests that using real-world tasks like creating podcasts, video tutorials, and websites is a far better way to evaluate students' critical thinking and application of ideas, as well as to develop digital skills that are important to their future careers. Using the example of a course in literature, this study examines the possibilities and challenges of creating digital projects as a way of confronting the complexities of student evaluation in the future. The study is based on a specific university English as a Foreign Language (EFL) context.

Keywords: assessment, literature, digital humanities, ChatGPT

Procedia PDF Downloads 67
7108 A Correlation Between Perceived Usage of Project Management Methodologies and Project Success in Horizon 2020 Projects

Authors: Aurelio Palacardo, Giulio Mangano, Alberto De Marco

Abstract:

Nowadays, the global economic framework is extremely competitive, and it consequently requires an efficient deployment of the resources provided by the EU. In this context, project management practices are intended to be one of the levers for increasing such efficiency. The objective of this work is to explore the usage of project management methodologies and good practices in the Europe-wide research program Horizon 2020 and to establish whether their maturity might impact project success. This makes it possible to identify strengths in the application of PM methodologies and good practices and, in turn, to provide feedback and opportunities for improvement to be implemented in future programs. To achieve this objective, the present research uses survey-based data retrieval and correlation analysis to investigate the level of perceived PM maturity in H2020 projects and the correlation of maturity with project success. The results show that the project managers involved in H2020 hold a high level of PM maturity, confirming that the PM standards imposed by the EU Commission as a binding process are effectively enforced.
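
Since survey-based maturity scores are ordinal, the correlation analysis described above is typically computed with a rank correlation such as Spearman's rho. A minimal sketch; the scores below are invented for illustration, not the H2020 survey responses:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the
    ranks, with average ranks assigned over ties."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):       # average ranks over ties
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    rx = ranks(np.asarray(x, float))
    ry = ranks(np.asarray(y, float))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical maturity scores vs. a project-success index
maturity = [3, 1, 4, 2, 5]
success = [30, 10, 45, 22, 50]
rho = spearman_rho(maturity, success)
```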

Keywords: project management, project management maturity, maturity models, project success

Procedia PDF Downloads 144
7107 Liver and Liver Lesion Segmentation From Abdominal CT Scans

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver and liver lesions is regarded as a major primary step in the computer-aided diagnosis of liver diseases, and precise liver segmentation in abdominal CT images is one of the most important steps in the computer-aided diagnosis of liver pathology. In this paper, a semi-automated method for liver and liver lesion segmentation in medical image data using mathematical morphology is presented. Our algorithm proceeds in two parts. In the first, we determine the region of interest by applying morphological filters to extract the liver. The second step detects the liver lesions. For this task, we propose a new method developed for the semi-automatic segmentation of the liver and hepatic lesions, based on anatomical information and the mathematical morphology tools used in the image processing field. First, we improve the quality of the original image and of the image gradient by applying a spatial filter followed by morphological filters. We then calculate the internal and external markers of the liver and hepatic lesions. Thereafter, we segment the liver and hepatic lesions with the watershed transform controlled by the markers. The developed algorithm is validated on several images, and the results obtained show the good performance of our proposed algorithm.
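
The morphological filtering used to build the gradient image fed to the marker-controlled watershed can be sketched with a flat structuring element; this is a simplified illustration on a synthetic array, not the authors' exact filter chain:

```python
import numpy as np

def dilate(img, k=3):
    """Grayscale dilation with a k x k flat structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].max()
    return out

def erode(img, k=3):
    """Grayscale erosion with a k x k flat structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].min()
    return out

def morphological_gradient(img, k=3):
    """Dilation minus erosion: highlights boundaries, the kind of
    edge image a marker-controlled watershed floods from."""
    return dilate(img, k) - erode(img, k)

# Synthetic 5x5 'slice' with a vertical intensity step
ct_slice = np.zeros((5, 5))
ct_slice[:, 3:] = 10.0
grad = morphological_gradient(ct_slice)
```

In practice one would use vectorised morphology (e.g., from an image processing library) rather than explicit loops; the loop form is kept here only to make the operation transparent.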

Keywords: anisotropic diffusion filter, CT images, hepatic lesion segmentation, liver segmentation, morphological filter, watershed algorithm

Procedia PDF Downloads 433
7106 Comparison of Classical Computer Vision vs. Convolutional Neural Networks Approaches for Weed Mapping in Aerial Images

Authors: Paulo Cesar Pereira Junior, Alexandre Monteiro, Rafael da Luz Ribeiro, Antonio Carlos Sobieranski, Aldo von Wangenheim

Abstract:

In this paper, we present a comparison between convolutional neural networks and classical computer vision approaches for the specific precision agriculture problem of weed mapping in aerial images of sugarcane fields. A systematic literature review was conducted to find which computer vision methods are being used for this specific problem. The most cited methods were implemented, as well as four models of convolutional neural networks. All implemented approaches were tested on the same dataset, and their results were analyzed quantitatively and qualitatively. The obtained results were compared against a ground truth produced by a human expert for validation. The results indicate that the convolutional neural networks achieve better precision and generalize better than the classical models.
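
The quantitative comparison against the expert ground truth reduces to pixel-wise metrics such as precision and recall over binary weed masks. A minimal sketch; the tiny masks below are invented for illustration:

```python
import numpy as np

def precision_recall(pred, truth):
    """Pixel-wise precision and recall of a binary weed mask
    against a ground-truth mask of the same shape."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy 2x2 masks: one true positive, one false positive, one false negative
pred = np.array([[1, 1], [0, 0]], dtype=bool)
truth = np.array([[1, 0], [1, 0]], dtype=bool)
precision, recall = precision_recall(pred, truth)
```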

Keywords: convolutional neural networks, deep learning, digital image processing, precision agriculture, semantic segmentation, unmanned aerial vehicles

Procedia PDF Downloads 228
7105 Young People, the Internet and Inequality: What are the Causes and Consequences of Exclusion?

Authors: Albin Wallace

Abstract:

Part of the provision within educational institutions is the design, commissioning and implementation of ICT facilities to improve teaching and learning. Inevitably, these facilities focus largely on Internet Protocol (IP) based provisions including access to the World Wide Web, email, interactive software and hardware tools. Educators should be committed to the use of ICT to improve learning and teaching as well as to issues relating to the Internet and educational disadvantage, especially with respect to access and exclusion concerns. In this paper I examine some recent research into the issue of inequality and use of the Internet during which I discuss the causes and consequences of exclusion in the context of social inequality, digital literacy and digital inequality, also touching on issues of global inequality.

Keywords: inequality, internet, education, design

Procedia PDF Downloads 471
7104 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami

Authors: Alejandra G. Barbón, José Vila, Dania Vazquez

Abstract:

This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographical description through the incorporation of visual interactive designs aimed to enrich the library users’ experience. In an era of heightened awareness about the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an exciting initiative to revitalize the study of Artists’ Books collections by employing digital visual representations to decolonize bibliographic records of some of the most unique materials and foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, trending bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases a groundbreaking endeavor at the University of Miami Libraries that seeks to not only enhance bibliographic practices but also confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description. 
By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University’s dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.

Keywords: decolonizing bibliographic cataloging frameworks, digital visualizations information architecture platforms, collaborative curation and inclusivity for records management, engagement and accessibility increasing interaction design and user experience

Procedia PDF Downloads 55
7103 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve building management by using the collected data to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use a GAM (Generalised Additive Model) for anomaly detection in Air Handling Unit (AHU) power consumption patterns. There is ample research on the use of GAMs for the prediction of power consumption at the office-building and nationwide levels. However, there is limited illustration of their anomaly detection capabilities, prescriptive analytics case studies, and their integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical data on AHU power consumption and building cooling load from Jan 2018 to Aug 2019 at an education campus in Singapore, training prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward-predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data and without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager.
The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
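
The interval-based flagging can be sketched as follows; as an assumption, a polynomial trend stands in for the GAM smooth terms (a real deployment would fit a GAM with a library such as pygam or R's mgcv), and the power trace is entirely synthetic:

```python
import numpy as np

# Hypothetical hourly AHU power trace: drifting baseline plus a
# daily cycle, with one injected fault (all values are synthetic).
x = np.arange(100, dtype=float)
y = 50 + 0.1 * x + 2 * np.sin(2 * np.pi * x / 24)
y[50] += 20.0                      # anomalous spike

def fit_bounds(x, y, degree=3, width=3.0):
    """Fit a smooth trend (polynomial stand-in for a GAM smooth)
    and derive bounds from the residual spread, mimicking the
    forward-predicted ranges used for flagging."""
    trend = np.polyval(np.polyfit(x, y, degree), x)
    sigma = np.std(y - trend)
    return trend - width * sigma, trend + width * sigma

lower, upper = fit_bounds(x, y)
anomalies = (y < lower) | (y > upper)
```

A reading outside the band would then trigger the rule-based conditions mentioned above; the magnitude of the excursion beyond the bound can rank how urgent the alert is.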

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 133
7102 Femoropatellar Groove: An Anatomical Study

Authors: Mamatha Hosapatna, Anne D. Souza, Vrinda Hari Ankolekar, Antony Sylvan D. Souza

Abstract:

Introduction: The lower extremity of the femur is characterized by an anterior groove in which the patella is held during motion. This groove separates the two lips of the trochlea (medial and lateral), prolongations of the two condyles. In humans, the lateral trochlear lip is more developed than the medial one, creating an asymmetric groove that is also specific to the human body. Because of femoral obliquity, contraction of the quadriceps exerts a lateral dislocation stress on the patella, and the more elevated lateral side of the patellar groove helps the patella stay in its correct place, acting as a wall against lateral dislocation. This specific shape fits an oblique femur. It is known that femoral obliquity is not genetically determined but comes with orthostatism and bipedal walking. Material and Methodology: The aim was to measure the various dimensions of the femoropatellar groove (FPG) and femoral condyles using a digital image analyser. 37 dried adult femora (22 right, 15 left) were used for the study. End-on images of the lower end of the femur were taken, and the dimensions of the femoropatellar groove and the FP angle were measured using ImageJ software. Results were analyzed statistically. Results: The maximum altitude of the medial condyle is 4.98 ± 0.35 cm for the right femur and 5.20 ± 0.16 cm for the left. The maximum altitude of the lateral condyle is 5.44 ± 0.40 cm and 5.50 ± 0.14 cm on the right and left sides, respectively. The medial length of the groove is 1.30 ± 0.38 cm on the right side and 1.88 ± 0.16 cm on the left side. The lateral length of the groove is 1.90 ± 0.16 cm on the right side and 1.88 ± 0.16 cm on the left side. The femoropatellar angle is 136.38° ± 2.59° on the right side and 142.38° ± 7.0° on the left. The angle and dimensions of the femoropatellar groove on the medial and lateral sides were measured, and asymmetry in the patellar groove was observed. The lateral lip was found to be wider and bigger, which correlates with previous studies.
An asymmetrical patellar groove with a protruding lateral side, associated with an oblique femur, is a specific mark of bipedal locomotion. Conclusion: The dimensions of the FPG are important for maintaining the stability of the patella and also in knee replacement surgeries. The implants used to replace the patellofemoral compartment consist of a metal groove fitted to the femoral end and a plastic disc that attaches to the undersurface of the patella. The location and configuration of the patellofemoral groove of the distal femur are clinically significant in the mechanics and pathomechanics of the patellofemoral articulation.

Keywords: femoral patellar groove, femoro patellar angle, lateral condyle, medial condyle

Procedia PDF Downloads 378