Search results for: digital image watermarking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5199

1209 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection

Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew

Abstract:

Detecting email spam is an important task in the era of digital technology, one that needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable, and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific classifications of emails, helping users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out the key elements that drive classification results and thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that improves users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all the baselines, achieving an accuracy of 96.59% and a precision of 99.12%.
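
As a rough illustration of the LIME idea described above, the following Python sketch fits a toy spam classifier and then builds a local surrogate explanation by masking random word subsets and fitting a weighted linear model on the mask indicators. The corpus, kernel width, and helper names are hypothetical, for illustration only, and are not the authors' actual pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.pipeline import make_pipeline

# Toy training corpus (hypothetical data, for illustration only).
emails = ["win a free prize now", "meeting at noon tomorrow",
          "free money claim prize", "project review meeting notes",
          "claim your free prize", "lunch meeting with the team"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = ham

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(emails, labels)

def lime_explain(text, predict_proba, n_samples=500, seed=0):
    """LIME-style local surrogate: mask random word subsets of one
    email and fit a weighted linear model on the mask indicators."""
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0, :] = 1  # keep the original instance in the sample
    perturbed = [" ".join(w for w, keep in zip(words, m) if keep) or " "
                 for m in masks]
    probs = predict_proba(perturbed)[:, 1]  # P(spam) per perturbation
    # Proximity kernel: perturbations closer to the original weigh more.
    distances = 1 - masks.mean(axis=1)
    weights = np.exp(-(distances ** 2) / 0.25)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, probs, sample_weight=weights)
    # Each coefficient is the local influence of one term.
    return dict(zip(words, surrogate.coef_))

explanation = lime_explain("claim your free prize now", clf.predict_proba)
influential = max(explanation, key=explanation.get)
```

The per-word coefficients returned here are what a visualization scheme such as the one the paper proposes would display as influential terms.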

Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression.

Procedia PDF Downloads 47
1208 Thermal Hysteresis Activity of Ice Binding Proteins during Ice Crystal Growth in Sucrose Solution

Authors: Bercem Kiran-Yildirim, Volker Gaukel

Abstract:

Ice recrystallization (IR), which occurs especially during frozen storage, is an undesired process due to its possible influence on product quality. As a result of recrystallization, the total volume of ice remains constant, but the size, number, and shape of the ice crystals change. For instance, as indicated in the literature, the size of ice crystals in ice cream increases due to recrystallization, which results in texture deterioration. Therefore, the inhibition of ice recrystallization is of great importance, not only for the food industry but also for several other areas where sensitive products are stored frozen, such as pharmaceutical products or organs and blood in medicine. Ice-binding proteins (IBPs) have the unique ability to inhibit ice growth and, in consequence, to inhibit recrystallization. This effect is based on their ice-binding affinity. In the presence of an IBP in a solution, ice crystal growth is inhibited during temperature decrease until a certain temperature is reached; the melting during temperature increase is not influenced. The gap between the melting and freezing points is known as thermal hysteresis (TH). In the literature, TH activity is usually investigated under laboratory conditions in IBP buffer solutions, but in product applications (e.g., food) many other solutes are present which may influence the TH activity. In this study, a subset of IBPs, the so-called antifreeze proteins (AFPs), is used to investigate the influence of sucrose solution concentration on TH activity. A polarization microscope (Nikon Eclipse LV100ND) equipped with a digital camera (Nikon DS-Ri1) and a cold stage (Linkam LTS420) was used. In a first step, the equipment was set up and validated with respect to the accuracy of TH measurements, based on literature data.
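
The TH quantity defined above is simply the gap between the melting point and the depressed freezing point. A minimal sketch, with purely hypothetical temperature readings standing in for cold-stage measurements:

```python
def thermal_hysteresis(melting_point_c, freezing_point_c):
    """TH activity: the gap (in deg C) between the melting point and
    the depressed freezing point measured in the presence of an IBP."""
    return melting_point_c - freezing_point_c

# Hypothetical readings from a cold-stage experiment (illustrative only).
th = thermal_hysteresis(melting_point_c=-0.10, freezing_point_c=-0.55)
```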

Keywords: ice binding proteins, ice crystals, sucrose solution, thermal hysteresis

Procedia PDF Downloads 183
1207 Cybersecurity Challenges and Solutions in ICT Management at the Federal Polytechnic, Ado-Ekiti: A Quantitative Study

Authors: Innocent Uzougbo Onwuegbuzie, Siene Elizabeth Eke

Abstract:

This study investigates cybersecurity challenges and solutions in managing Information and Communication Technology (ICT) at the Federal Polytechnic, Ado-Ekiti, South-West Nigeria. The rapid evolution of ICT has revolutionized organizational operations and impacted various sectors, including education, healthcare, and finance. While ICT advancements facilitate seamless communication, complex data analytics, and strategic decision-making, they also introduce significant cybersecurity risks such as data breaches, ransomware, and other malicious attacks. These threats jeopardize the confidentiality, integrity, and availability of information systems, necessitating robust cybersecurity measures. The primary aim of this research is to identify prevalent cybersecurity challenges in ICT management, evaluate their impact on the institution's operations, and assess the effectiveness of current cybersecurity solutions. Adopting a quantitative research approach, data was collected through surveys and structured questionnaires from students, staff, and IT professionals at the Federal Polytechnic, Ado-Ekiti. The findings underscore the critical need for continuous investment in cybersecurity technologies, employee and student training, and regulatory compliance to mitigate evolving cyber threats. This research contributes to bridging the knowledge gap in cybersecurity management and provides valuable insights into effective strategies and technologies for safeguarding ICT systems in educational institutions. The study's objectives are to enhance the security posture of the Federal Polytechnic, Ado-Ekiti, in an increasingly digital world by identifying and addressing the cybersecurity challenges faced by its ICT management.

Keywords: cybersecurity challenges, cyber threat mitigation, federal polytechnic Ado-Ekiti, ICT management

Procedia PDF Downloads 40
1206 Food Safety and Quality Assurance and Skills Development among Farmers in Georgia

Authors: Kakha Nadiardze, Nana Phirosmanashvili

Abstract:

The goal of this paper is to present the problems caused by a lack of food safety information among farmers. Global food supply chains are becoming more and more diverse, making traceability systems much harder to implement across different food markets. We present our work analyzing the key developments in the Georgian food market, from regulatory controls to administrative procedures to traceability technologies. Food safety and quality assurance are among the most problematic issues in Georgia: as food trade networks become more and more complex, food businesses are under increasing pressure to ensure that their products are safe and authentic. The principle of traceability from farm to table must be top-of-mind for all food manufacturers, farmers, and retailers. Following the E. coli outbreak last year, as well as more recent cases of food mislabeling, development of food traceability systems is essential if food businesses are to present a credible brand image. Alongside this are the ever-developing technologies in food traceability networks, which manufacturers and retailers need to be aware of if they are to keep up with food safety regulations and avoid recalls. How to identify best practice in food management is the main question if company brands are to be protected through safe and authenticated food. We encourage our farmers to work with our food safety experts and technology developers throughout the food supply chain. We provide periodic food analyses for heavy metals, pesticide residues, and other pollutants, and we disseminate information among farmers on how the latest food safety regulations will impact the methods they use to identify risks within their products.

Keywords: food safety, GMO, LMO, E. coli, quality

Procedia PDF Downloads 514
1205 FRATSAN: A New Software for Fractal Analysis of Signals

Authors: Hamidreza Namazi

Abstract:

Fractal analysis assesses the fractal characteristics of data. It comprises several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, and networks. Fractal analysis is now widely used in all areas of science. An important limitation is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; other essential characteristics have to be considered as well. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed, which extracts information from signals through three measures: fractal dimension, Jeffrey's measure, and the Hurst exponent. After computing these measures, the software plots a graph for each one. Besides computing the three measures, the software can also classify whether or not a signal is fractal. The software uses a dynamic method of analysis for all measures: a sliding window is selected with a length equal to 10% of the total number of data entries and is moved one data entry at a time to obtain each measure. This makes the computation very sensitive to slight changes in the data, giving the user an acute analysis. To test the performance of the software, a set of EEG signals was given as input, and the results were computed and plotted. The software is useful not only for fundamental fractal analysis of signals but also for other purposes. For instance, by analyzing the Hurst exponent plot of an EEG signal from a patient with epilepsy, the onset of a seizure can be predicted by noticing sudden changes in the plot.
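
The sliding-window analysis described above can be sketched as follows. This is a minimal rescaled-range (R/S) Hurst estimator moved over a 10% window one sample at a time; it is not FRATSAN's actual Visual C++ implementation, and the segment sizes are illustrative.

```python
import numpy as np

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = [s for s in (8, 16, 32, 64) if s <= n // 2]
    log_rs, log_s = [], []
    for s in sizes:
        rs = []
        for start in range(0, n - s + 1, s):
            seg = x[start:start + s]
            dev = np.cumsum(seg - seg.mean())  # cumulative deviations
            r = dev.max() - dev.min()          # range of the deviations
            sd = seg.std()
            if sd > 0:
                rs.append(r / sd)
        if rs:
            log_rs.append(np.log(np.mean(rs)))
            log_s.append(np.log(s))
    slope, _ = np.polyfit(log_s, log_rs, 1)  # H is the log-log slope
    return slope

def sliding_hurst(signal, window_frac=0.1):
    """Move a window of ~10% of the series one sample at a time, as
    described for FRATSAN, and estimate H at each position."""
    w = max(16, int(len(signal) * window_frac))
    return [hurst_rs(signal[i:i + w]) for i in range(len(signal) - w + 1)]
```

Sudden changes in the resulting H sequence are the kind of feature the abstract suggests watching for in EEG seizure prediction.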

Keywords: EEG signals, fractal analysis, fractal dimension, hurst exponent, Jeffrey’s measure

Procedia PDF Downloads 467
1204 Landslide Vulnerability Assessment in the Context of the Indian Himalaya

Authors: Neha Gupta

Abstract:

Landslide vulnerability is considered a crucial parameter for the assessment of landslide risk. Vulnerability is defined as the damage to, or degree of exposure of, elements at risk along different dimensions: physical, social, economic, and environmental. The Himalaya region is very prone to multiple hazards such as floods, forest fires, earthquakes, and landslides, and increasing fatality rates and losses of infrastructure and economic assets due to landslides motivate vulnerability assessment there. This study presents a methodology that measures three vulnerability dimensions, social, physical, and environmental, in one framework; such a combined assessment has rarely been carried out, and no such approach had previously been applied in the Indian scenario. The methodology was applied to an area of the east Sikkim Himalaya, India. Physical vulnerability was derived from a building footprint layer extracted from remote sensing data and Google Earth imagery. Social vulnerability was assessed using population density based on land use; the land use map was derived from a high-resolution satellite image. For environmental vulnerability, NDVI, forest, agricultural land, and distance from the river were assessed from remote sensing data and a DEM. The classes of social, physical, and environmental vulnerability were normalized on a scale of 0 (no loss) to 1 (loss) to obtain a homogeneous dataset. Multi-Criteria Analysis (MCA) was then used to assign individual weights to each dimension and integrate them into one frame. The final vulnerability was further classified into four classes, from very low to very high.
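
The normalize-weight-combine-classify workflow above can be sketched as a simple weighted raster overlay. The 3x3 rasters and the MCA weights below are hypothetical placeholders, not the study's data.

```python
import numpy as np

def normalize(layer):
    """Scale a raster layer to 0 (no loss) .. 1 (loss)."""
    layer = np.asarray(layer, dtype=float)
    return (layer - layer.min()) / (layer.max() - layer.min())

# Hypothetical 3x3 rasters for the three vulnerability dimensions.
physical = normalize([[10, 40, 80], [5, 60, 90], [0, 30, 70]])
social   = normalize([[200, 50, 10], [300, 80, 20], [100, 40, 5]])
environ  = normalize([[0.2, 0.5, 0.9], [0.1, 0.4, 0.8], [0.3, 0.6, 0.7]])

# MCA: assign a weight to each dimension and integrate into one frame.
weights = {"physical": 0.4, "social": 0.35, "environmental": 0.25}
combined = (weights["physical"] * physical
            + weights["social"] * social
            + weights["environmental"] * environ)

# Classify into four classes, very low (0) .. very high (3).
classes = np.digitize(combined, bins=[0.25, 0.5, 0.75])
```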

Keywords: landslide, multi-criteria analysis, MCA, physical vulnerability, social vulnerability

Procedia PDF Downloads 301
1203 Population Dynamics and Land Use/Land Cover Change on the Chilalo-Galama Mountain Range, Ethiopia

Authors: Yusuf Jundi Sado

Abstract:

Changes in land use are mostly credited to human actions that result in negative impacts on biodiversity and ecosystem functions. This study analyzes the dynamics of land use and land cover change on the Chilalo-Galama Mountain Range, Ethiopia, for sustainable natural resource planning and management. It used Landsat 5 Thematic Mapper (TM) data for 1986 and 2001 and Landsat 8 (OLI) data for 2017. Additionally, data from the Central Statistics Agency on human population growth were analyzed. The Semi-Automatic Classification Plugin (SCP) in QGIS 3.2.3 was used for image classification, and GPS measurements, field observations, and focus group discussions were used for ground verification. Land use/land cover (LU/LC) change analysis was performed using maximum likelihood supervised classification, and changes were calculated for the 1986–2001, 2001–2017, and 1986–2017 periods. The results show that agricultural land increased from 27.85% (1986) to 44.43% and 51.32% in 2001 and 2017, respectively, with overall accuracies of 92% (1986), 90.36% (2001), and 88% (2017). On the other hand, forest decreased from 8.51% (1986) to 7.64% (2001) and 4.46% (2017), and grassland decreased from 37.47% (1986) to 15.22% and 15.01% in 2001 and 2017, respectively. Over 1986–2017, the largest gain in agricultural land came from grassland. The transition matrix also shows that shrubland gained land from agricultural land, afro-alpine, and forest land. Population dynamics is found to be one of the major driving forces behind the LU/LC changes in the study area.
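
The class-to-class gains discussed above come from a transition (change) matrix between the two classified dates. A minimal sketch over ten hypothetical pixels, not the study's rasters:

```python
import numpy as np

# Hypothetical per-pixel class maps for two dates
# (0 = agriculture, 1 = forest, 2 = grassland, 3 = shrubland).
lc_1986 = np.array([2, 2, 2, 1, 1, 0, 0, 3, 2, 2])
lc_2017 = np.array([0, 0, 2, 1, 0, 0, 0, 3, 0, 3])

n_classes = 4
transition = np.zeros((n_classes, n_classes), dtype=int)
for a, b in zip(lc_1986, lc_2017):
    transition[a, b] += 1  # rows: 1986 class, cols: 2017 class

# transition[2, 0] counts grassland pixels converted to agriculture,
# the dominant conversion reported for 1986-2017.
grass_to_agric = transition[2, 0]
```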

Keywords: Landsat, LU/LC change, Semi-Automatic classification plugin, population dynamics, Ethiopia

Procedia PDF Downloads 85
1202 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms

Authors: Francisco M. Silva

Abstract:

Technology is evolving, creating an impact on our everyday lives and on the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits to using web-based methods to provide healthcare help; nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform through which users receive self-care help and information using wearable sensors, and to give researchers developing similar projects a solid foundation for reference. The study provides descriptions and analyses of the software and hardware architecture; exhibits and explains a dynamic and efficient heart rate algorithm that continuously calculates the desired sensor values; and presents diagrams that illustrate the website deployment process and how the web server handles the sensor data. The goal is to create a working project using Arduino-compatible hardware: heart rate sensors send their data to an online platform, and a microcontroller board uses an algorithm to calculate the heart rate values and outputs them to a web server. The platform visualizes the sensor data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and communication between the hardware and software: the web server displays the conveyed heart rate sensor data on the online platform, presenting observations and evaluations.
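
A server-side sketch of the two platform steps described above, computing BPM from inter-beat intervals and raising an alert. This is an illustration in Python, not the paper's Arduino firmware, and the interval values and alert thresholds are hypothetical.

```python
from statistics import mean

def bpm_from_intervals(intervals_ms):
    """Heart rate in BPM from inter-beat intervals in milliseconds."""
    return 60000 / mean(intervals_ms)

def check_alert(bpm, low=50, high=120):
    """Platform-side rule: flag readings outside a resting range.
    Thresholds are illustrative placeholders."""
    if bpm < low:
        return "low heart rate alert"
    if bpm > high:
        return "high heart rate alert"
    return "ok"

# Hypothetical intervals streamed from the wearable sensor (~75 BPM).
reading = bpm_from_intervals([800, 810, 790, 805])
status = check_alert(reading)
```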

Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare

Procedia PDF Downloads 126
1201 Land Suitability Prediction Modelling for Agricultural Crops Using Machine Learning Approach: A Case Study of Khuzestan Province, Iran

Authors: Saba Gachpaz, Hamid Reza Heidari

Abstract:

The sharp increase in population growth puts more pressure on agricultural areas to satisfy the food supply. Achieving this requires consuming more resources and, alongside other environmental concerns, highlights the need for sustainable agricultural development. Land-use management is a crucial factor in obtaining optimum productivity. Machine learning is a widely used technique in the agricultural sector, from yield prediction to customer behavior; it focuses on learning and provides patterns and correlations from a dataset. In this study, nine physical control factors, namely soil classification, electrical conductivity, normalized difference water index (NDWI), groundwater level, elevation, annual precipitation, pH of water, annual mean temperature, and slope, in the alluvial plain in Khuzestan (an agricultural hotspot in Iran) are used to decide the best agricultural land use for both rainfed and irrigated agriculture for ten different crops. For this purpose, each variable was imported into ArcGIS, and a raster layer was obtained. In the next step, using training samples, all layers were imported into the Python environment. A random forest model was applied, and the weight of each variable was specified. In the final step, results were visualized using a digital elevation model, and the importance of all factors for each of the crops was obtained. Our results show that despite 62% of the study area being allocated to agricultural purposes, only 42.9% of these areas can be classed as suitable for cultivation.
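
The "weight of each variable" step above corresponds to random forest feature importances. A minimal sketch on synthetic stand-ins for the nine control factors; the data, the suitability rule, and the factor labels are hypothetical, not the study's rasters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
factors = ["soil_class", "EC", "NDWI", "groundwater", "elevation",
           "precipitation", "pH", "mean_temp", "slope"]

# Hypothetical samples: one row per pixel, one column per factor.
X = rng.random((300, len(factors)))
# Hypothetical suitability label driven mainly by EC and precipitation.
y = ((X[:, 1] < 0.6) & (X[:, 5] > 0.3)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Importance (weight) of each control factor, as in the study's workflow.
importance = dict(zip(factors, model.feature_importances_))
top_factor = max(importance, key=importance.get)
```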

Keywords: land suitability, machine learning, random forest, sustainable agriculture

Procedia PDF Downloads 84
1200 Attracting European Youths to STEM Education and Careers: A Pedagogical Approach to a Hybrid Learning Environment

Authors: M. Assaad, J. Mäkiö, T. Mäkelä, M. Kankaanranta, N. Fachantidis, V. Dagdilelis, A. Reid, C. R. del Rio, E. V. Pavlysh, S. V. Piashkun

Abstract:

To bring science and society together in Europe, thus increasing the continent’s international competitiveness, STEM (science, technology, engineering and mathematics) education must be more relatable to European youths in their everyday life. STIMEY (Science, Technology, Innovation, Mathematics, Engineering for the Young) project researches and develops a hybrid educational environment with multi-level components that is being designed and developed based on a well-researched pedagogical framework, aiming to make STEM education more attractive to young people aged 10 to 18 years in this digital era. This environment combines social media components, robotic artefacts, and radio to educate, engage and increase students’ interest in STEM education and careers from a young age. Additionally, it offers educators the necessary modern tools to deliver STEM education in an attractive and engaging manner in or out of class. Moreover, it enables parents to keep track of their children’s education, and collaborate with their teachers on their development. Finally, the open platform allows businesses to invest in the growth of the youths’ talents and skills in line with the economic and labour market needs through entrepreneurial tools. Thus, universities, schools, teachers, students, parents, and businesses come together to complete a circle in which STEM becomes part of the daily life of youths through a hybrid educational environment that also prepares them for future careers.

Keywords: e-learning, entrepreneurship, pedagogy, robotics, serious gaming, social media, STEM education

Procedia PDF Downloads 373
1199 Use of Quasi-3D Inversion of VES Data Based on Lateral Constraints to Characterize the Aquifer and Mining Sites of an Area Located in the North-East of Figuil, North Cameroon

Authors: Fofie Kokea Ariane Darolle, Gouet Daniel Hervé, Koumetio Fidèle, Yemele David

Abstract:

The electrical resistivity method is successfully used in this paper to obtain a clearer picture of the subsurface of the North-East of Figuil in northern Cameroon. This method is most often used when the objective of the study is to image shallow subsoils by considering them as a set of stratified ground layers. The problem to be solved is very often environmental, and in this case it is necessary to perform an inversion of the data in order to obtain a complete and accurate picture of the parameters of the said layers. In this work, thirty-three (33) Schlumberger VES were carried out on an irregular grid to investigate the subsurface of the study area. The 1D inversion, applied as a preliminary modeling tool and correlated with mechanical drilling results, indicates a complex subsurface lithology distribution consisting mainly of marbles and schists. Moreover, the quasi-3D inversion with lateral constraints shows that the misfit between the observed field data and the model response is good and acceptable, with a value lower than 10%. The method also reveals the existence of two water-bearing layers in the considered area: the schist, or weathering, aquifer (unsuitable) and the marble, or fracturing, aquifer (suitable). The final quasi-3D inversion results and geological models indicate proper sites for groundwater prospecting and for mining exploitation, thus supporting the economic development of the study area.
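
The sub-10% misfit criterion above can be sketched as a relative RMS misfit between observed apparent resistivities and the model response. The resistivity values below are hypothetical, not the Figuil field data.

```python
import numpy as np

def relative_misfit(observed, predicted):
    """Relative RMS misfit (%) between observed apparent resistivities
    and the model response, as used to judge the inversion fit."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100 * np.sqrt(np.mean(((observed - predicted) / observed) ** 2))

# Hypothetical apparent resistivities (ohm.m) for one sounding.
rho_obs = np.array([120.0, 95.0, 60.0, 45.0, 30.0])
rho_mod = np.array([115.0, 99.0, 57.0, 47.0, 29.0])

misfit = relative_misfit(rho_obs, rho_mod)
accepted = misfit < 10  # the study's acceptance threshold
```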

Keywords: electrical resistivity method, 1D inversion, quasi 3D inversion, groundwaters, mining

Procedia PDF Downloads 155
1198 The Clustering of Multiple Sclerosis Subgroups through L2 Norm Multifractal Denoising Technique

Authors: Yeliz Karaca, Rana Karabudak

Abstract:

Multifractal denoising techniques are used to identify significant attributes by removing noise from a dataset. Magnetic resonance (MR) imaging is the most sensitive method for identifying chronic disorders of the nervous system such as multiple sclerosis (MS). MRI and Expanded Disability Status Scale (EDSS) data belonging to 120 individuals who have one of the subgroups of MS (relapsing-remitting MS (RRMS), secondary progressive MS (SPMS), primary progressive MS (PPMS)), as well as 19 healthy individuals in the control group, were used in this study. The study comprises the following stages: (i) the L2 norm multifractal denoising technique, one of the multifractal techniques, was applied to the MS data (MRI and EDSS), yielding a new dataset; (ii) this new dataset was passed to the K-Means and Fuzzy C-Means (FCM) clustering algorithms, which are among the unsupervised methods, and the clustering performances were compared; (iii) in identifying significant attributes in the MS dataset through L2 norm multifractal denoising with K-Means and FCM on the MS subgroups and the healthy control group, excellent performance was obtained. According to the clustering results based on the MS subgroups, successful clustering was achieved by both K-Means and FCM when the L2 norm multifractal denoising technique was applied, and clustering performance was more successful on the denoised dataset (the L2_Norm MS dataset), in which the significant attributes are retained.
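
Stage (ii) above, unsupervised clustering of the denoised features, can be sketched with K-Means. The four synthetic groups below stand in for the three MS subgroups plus the control group; the features and separations are hypothetical, and the FCM half of the comparison is omitted (scikit-learn does not ship FCM).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical denoised MRI/EDSS features for four groups
# (RRMS, SPMS, PPMS, control), two attributes each.
group_centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]])
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2))
               for c in group_centers])

# K-Means with k = 4, one cluster per expected group.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = km.labels_
```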

Keywords: clinical decision support, clustering algorithms, multiple sclerosis, multifractal techniques

Procedia PDF Downloads 168
1197 The Influence of Surface Roughness on the Flow Fields Generated by an Oscillating Cantilever

Authors: Ciaran Conway, Nick Jeffers, Jeff Punch

Abstract:

With the current trend of miniaturisation of electronic devices, piezoelectric fans have attracted increasing interest as an alternative means of forced convection over traditional rotary solutions. Whilst there exists an abundance of research on various piezo-actuated flapping fans in the literature, the geometries of these fans all consist of a smooth rectangular cross section with thicknesses typically of the order of 100 μm. The focus of these studies is primarily on variables such as frequency, amplitude, and in some cases resonance mode. As a result, the induced flow dynamics are a direct consequence of the pressure differential at the fan tip as well as the pressure-driven ‘over the top’ vortices generated at the upper and lower edges of the fan. Rough surfaces such as golf ball dimples or vortex generators on an aircraft wing have proven to be beneficial by tripping the boundary layer and energising the adjacent air flow. This paper aims to examine the influence of surface roughness on the airflow generation of a flapping fan and determine whether the induced wake can be manipulated or enhanced by energising the airflow around the fan tip. Particle Image Velocimetry (PIV) is carried out on mechanically oscillated rigid fans with various surfaces consisting of pillars, perforations and cell-like grids derived from the wing topology of natural fliers. The results of this paper may be used to inform the design of piezoelectric fans and possibly aid in understanding the complex aerodynamics inherent in flapping wing flight.

Keywords: aerodynamics, oscillating cantilevers, PIV, vortices

Procedia PDF Downloads 217
1196 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facility

Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos

Abstract:

The presence of sulphur compounds such as hydrogen sulphide and mercaptans is one of the reasons why wastewater treatment and waste management are associated with odour emissions. In this context, a method for quantifying these compounds helps in optimizing treatment, namely biofiltration processes, with the goal of their elimination. The aim of this study was the development of a method for quantifying odorous gases in air samples from waste treatment plants. A method based on headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography with flame photometric detection (GC-FPD) was used to analyse hydrogen sulphide (H2S) and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen-polydimethylsiloxane fiber coating at 22 ºC for 20 min, and samples were analysed on a Shimadzu GC 2010 Plus A with a sulphur filter detector: splitless mode (0.3 min); column temperature programmed from 60 ºC, increased by 15 ºC/min to 100 ºC (2 min); injector held at 250 ºC and detector at 260 ºC. For the calibration curve, a gas diluter (digital Hovagas G2 Multi Component Gas Mixer) was used to prepare the standards. This unit had two input connections, one for a stream of the dilute gas and another for a stream of nitrogen, and an output connected to a glass bulb. A 40 ppm H2S cylinder and a 50 ppm MM cylinder were used. The equipment was programmed to the selected concentration and automatically carried out the dilution into the glass bulb. The mixture was left flowing through the glass bulb for 5 min, and then the extremities were closed. This method allowed calibration between 1–20 ppm for H2S and between 0.02–0.1 ppm and 1–3.5 ppm for MM. Several quantifications of air samples from the inlet and outlet of a biofilter operating at a waste management facility in the north of Portugal allowed evaluation of the biofilter's performance.
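
The calibration step above reduces to a linear fit of detector response against standard concentration, then inverting it for unknowns. A minimal sketch; the peak-area values below are hypothetical, not the paper's chromatograms.

```python
import numpy as np

# Hypothetical calibration data: standard H2S concentrations (ppm) from
# the gas diluter vs. GC-FPD peak areas (arbitrary units).
conc_h2s = np.array([1, 5, 10, 15, 20], dtype=float)
area_h2s = np.array([210, 1040, 2110, 3150, 4180], dtype=float)

# Least-squares calibration line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc_h2s, area_h2s, 1)

def quantify(peak_area):
    """Concentration (ppm) of an unknown sample from its peak area."""
    return (peak_area - intercept) / slope

# Hypothetical biofilter inlet sample.
sample_ppm = quantify(1500.0)
```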

Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification

Procedia PDF Downloads 476
1195 Artificial Intelligence in Melanoma Prognosis: A Narrative Review

Authors: Shohreh Ghasemi

Abstract:

Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown comparable or even superior performance compared to traditional methods.

Keywords: artificial intelligence, melanoma, accuracy, prognosis prediction, image analysis, personalized medicine

Procedia PDF Downloads 81
1194 One Nature under God, and Divisible: Augustine’s “Duality of Man” Applied to the Creation Stories of Genesis

Authors: Elizabeth Latham

Abstract:

The notion that women were created as innately inferior to men has yet to be expelled completely from the theological system of humankind. This question and the biblical exegesis it requires are of paramount importance to feminist philosophy—after all, the study can bear little fruit if we cannot even agree on equality within the theological roots of humanity. Augustine’s “Duality of Man” gives new context to the two creation stories in Genesis, texts especially relevant given the billions of people worldwide that ascribe to them as philosophical realities. Each creation story describes the origin of human beings and is matched with one of Augustine’s two orders of mankind. The first story describes the absolute origin of the human soul and is paired with Augustine’s notion of the “spiritual order” of a human being: divine and eternal, fulfilling the biblical idea that human beings were created in the image and likeness of God. The second creation story, in contrast, depicts those aspects of humanity that distinguish and separate us from God: doubt, fear, and sin. It also introduces gender as a concept for the first time in the Bible. This story is better matched with Augustine’s idea of the “natural order” of humanity, that by which he believes women, in fact, are inferior. In the synthesis of the two sources, one can see that the natural order and any inferiority that it implies are incidental and not intended in our creation. Gender inequality is introduced with and belongs in the category of human imperfection and to cite the Bible as encouraging it constitutes a gross misunderstanding of scripture. This is easy to see when we divide human nature into “spiritual” and “natural” and look carefully at where scripture falls.

Keywords: augustine, bible, duality of man, feminism, genesis

Procedia PDF Downloads 135
1193 Interoperability Standard for Data Exchange in Educational Documents in Professional and Technological Education: A Comparative Study and Feasibility Analysis for the Brazilian Context

Authors: Giovana Nunes Inocêncio

Abstract:

Professional and technological education (EPT) plays a pivotal role in equipping students for specialized careers, and it is imperative to establish a framework for efficient data exchange among educational institutions. The primary focus of this article is to address the pressing need for document interoperability within the context of EPT. The challenges, motivations, and benefits of implementing interoperability standards for digital educational documents, including EPT completion certificates, academic records, and curricula, are thoroughly explored. The intersection of IT governance and interoperability standards holds the key to transforming the landscape of technical education in Brazil. IT governance provides the strategic framework for effective data management, aligning with educational objectives, ensuring compliance, and managing risks. By adopting interoperability standards, the technical education sector in Brazil can facilitate data exchange, enhance data security, and promote international recognition of qualifications. The use of the XML (Extensible Markup Language) standard further strengthens the foundation for structured data exchange, fostering efficient communication, standardization of curricula, and enhancement of educational materials. IT governance, interoperability standards, and data management play a critical role in driving the quality, efficiency, and security of technical education. The adoption of these standards fosters transparency, stakeholder coordination, and regulatory compliance, ultimately empowering the technical education sector to meet the dynamic demands of the 21st century.
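
The XML-based exchange described above can be sketched as one institution serializing a completion certificate and another parsing the same structured payload. The element names and sample values below are hypothetical illustrations, not a published Brazilian schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML layout for an EPT completion certificate.
cert = ET.Element("certificate", attrib={"type": "EPT-completion"})
ET.SubElement(cert, "student").text = "Maria Silva"
ET.SubElement(cert, "course").text = "Técnico em Informática"
ET.SubElement(cert, "institution").text = "IF Example"
ET.SubElement(cert, "year").text = "2023"

# The issuing institution serializes the document for exchange.
payload = ET.tostring(cert, encoding="unicode")

# A receiving institution parses the same structured payload.
parsed = ET.fromstring(payload)
course = parsed.findtext("course")
```

Because both sides agree on the structure, the receiving system can validate and route the document automatically instead of re-keying it.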

Keywords: interoperability, education, standards, governance

Procedia PDF Downloads 70
1192 The Imminent Other in Anna Deavere Smith’s Performance

Authors: Joy Shihyi Huang

Abstract:

This paper discusses the concept of community in Anna Deavere Smith’s performance, one that challenges and explores existing notions of justice and the other. In contrast to unwavering assumptions of essentialism that have helped to propel a discourse on moral agency within the black community, Smith employs postmodern ideas in which the theatrical attributes of doubling and repetition are conceptualized as part of what Marvin Carlson termed a ‘memory machine.’ Her dismissal of the need for linear time, such as that regulated by Aristotle’s The Poetics and its concomitant ethics, values, and emotions as a primary ontological and epistemological construct produced by the existing African American historiography, demonstrates an urgency to produce an alternative communal self to override metanarratives in which African Americans’ lives are contained and sublated by specific historical confines. Drawing on Emmanuel Levinas’ theories in ethics, specifically his notions of ‘proximity’ and ‘the third,’ the paper argues that Smith enacts a new model of ethics by launching an acting method that eliminates the boundary of self and other. Defying psychological realism, Smith conceptualizes an approach to acting that surpasses the mere mimetic value of invoking a ‘likeness’ of an actor to a character, which, as such, resembles the mere attribution of various racial or sexual attributes in identity politics. Such acting, she contends, reduces the other to a representation of, at best, an ultimate rendering of me/my experience. She instead appreciates ‘unlikeness,’ recognizing the unavoidable actor/character gap as a power that humbles the self, whose irreversible journey to the other carves out its own image.

Keywords: Anna Deavere Smith, Emmanuel Levinas, other, performance

Procedia PDF Downloads 155
1191 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a condition of irreversible blindness; early diagnosis and appropriate interventions enable patients to retain their sight for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure around the eyes damages the optic nerves and causes deterioration of vision. The disease progresses through different levels of vision loss, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP), and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. Owing to its better accuracy and faster imaging, OCT has become the most common method used by experts. Despite the precision and speed of OCT and HRT imaging, difficulties remain and mistakes still occur in the diagnosis of glaucoma, especially in the early stages. It is difficult to obtain objective diagnostic results, as they depend on the doctor's judgment. It therefore seems very important to develop an objective decision support system for diagnosing and grading glaucoma. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for doctors' use. Computer software based on a pattern recognition system would help doctors make an objective evaluation of their patients. After the development and evaluation of the software, the system is planned to serve doctors in different hospitals.
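As a loose illustration of how a pattern-recognition system might support such decisions, the toy sketch below classifies hypothetical OCT-derived features (mean retinal nerve fiber layer thickness and cup-to-disc ratio) with a nearest-centroid rule. All feature names and values are illustrative assumptions, not clinical data or the authors' actual method.

```python
import math

# Toy training data: (RNFL thickness in micrometers, cup-to-disc ratio).
# Values are made up for illustration only.
TRAIN = {
    "healthy":  [(100.0, 0.30), (95.0, 0.35), (105.0, 0.28)],
    "glaucoma": [(70.0, 0.70), (65.0, 0.75), (75.0, 0.65)],
}

def centroid(points):
    """Mean of a list of 2-D feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def classify(rnfl, cdr):
    """Assign the label whose class centroid is nearest in scaled feature space."""
    def dist(c):
        # Scale each axis so neither feature dominates the distance.
        return math.hypot((rnfl - c[0]) / 100.0, cdr - c[1])
    return min(CENTROIDS, key=lambda lbl: dist(CENTROIDS[lbl]))

print(classify(68.0, 0.72))  # glaucoma
```

A real system would replace these two hand-picked features with features extracted from OCT images and a properly validated classifier.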

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 302
1190 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine EPB TBM: A Case Study L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops, and keeping the optimum stock of spare parts during the evolution of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years makes it possible to analyze the key and most critical parameters related to the machinery in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara, Mexico as a case study, this work develops the feasibility of using Specific Energy versus data science applied to parameters such as Torque, Penetration, and Contact Force, among others, to predict the behavior and status of cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed in the excavation, in order to determine their effectiveness regarding predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for the analysis of cutting-head wear, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
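As a rough illustration of the Specific Energy approach mentioned above, the sketch below computes specific energy from thrust, torque, rotation speed, and penetration using Teale's classic formulation, SE = F/A + 2πNT/(A·u). The parameter names, units, and numerical values are assumptions for illustration, not the paper's actual machine data.

```python
import math

def specific_energy(thrust_kn, torque_knm, rpm, penetration_mm_per_rev, diameter_m):
    """
    Specific energy of excavation (MJ/m^3) following Teale's formulation:
    SE = F/A + 2*pi*N*T / (A*u), where u is the advance rate.
    All inputs are illustrative assumptions, not real TBM readings.
    """
    area_m2 = math.pi * diameter_m**2 / 4.0                    # excavated face area
    advance_m_per_min = (penetration_mm_per_rev / 1000.0) * rpm
    thrust_term = thrust_kn / area_m2                          # kJ/m^3
    rotary_term = 2.0 * math.pi * rpm * torque_knm / (area_m2 * advance_m_per_min)
    return (thrust_term + rotary_term) / 1000.0                # kJ/m^3 -> MJ/m^3

# A rising SE trend under constant ground conditions can flag cutter wear.
se = specific_energy(thrust_kn=15000, torque_knm=4000, rpm=2.0,
                     penetration_mm_per_rev=10.0, diameter_m=6.5)
print(round(se, 1))
```

In a wear-prediction workflow, this value would be computed per excavation ring and monitored for anomalous increases alongside the data-science model.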

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 49
1189 Terrorism: Impact on Nigeria’s Foreign Policy, 1999-2015

Authors: Omolaja Akolade Oluwaseunfunmi

Abstract:

This study seeks to ascertain the origin and history of terrorism in Nigeria, determine the causes of terrorism in Nigeria, examine Nigeria’s foreign policies from 1999 to 2015, evaluate how terrorist groups like Boko Haram and the Indigenous People of Biafra (IPOB) have affected Nigeria’s foreign policies in the international arena, ascertain the measures taken by the government in tackling terrorist acts in Nigeria, and give recommendations on how to tackle this menace. The methodology used in this research is the analytical method. The study derives its data from both primary and secondary sources. Findings from fieldwork showed that terrorism has become one of the most important fundamentals of Nigeria’s foreign policies and relations; respondents interviewed indicated that terrorism is a menace and must be adequately tackled in order to achieve Nigeria’s foreign policy objectives. Furthermore, results revealed that the fight against the scourge has increasingly gained legitimacy and justification among the international community, particularly as many countries consider it their international obligation to support the global movement to ameliorate or eliminate the menace. In conclusion, this research recommends, among other things, that the Nigerian government should ensure the provision of a good life for its citizens, that the inter-connectivity of terrorist organizations must be defeated, that the government should undertake a foreign policy drive designed to rebuild its image in the international environment, and that the promotion of peace education among government, religious institutions, the private sector, and civil society groups should be encouraged.

Keywords: foreign policy, Boko Haram, movement for the emancipation of Niger delta (MEND), terrorism

Procedia PDF Downloads 26
1188 Flood Hazard Impact Based on Simulation Model of Potential Flood Inundation in Lamong River, Gresik Regency

Authors: Yunita Ratih Wijayanti, Dwi Rahmawati, Turniningtyas Ayu Rahmawati

Abstract:

Gresik is one of the districts in East Java Province, Indonesia. Gresik Regency has three major rivers, namely the Bengawan Solo, Brantas, and Lamong Rivers. The Lamong River is a tributary of the Bengawan Solo River. Flood disasters in Gresik Regency are often caused by the overflow of the Lamong River. The losses caused by these floods are very large and certainly detrimental to the affected people. Therefore, to minimize the impact of flooding, it is necessary to take preventive action. Before taking preventive action, however, information is needed regarding potential inundation areas and water levels at various points. For this reason, a flood simulation model is needed. In this study, the simulation was carried out using the Geographic Information System (GIS) method with the help of Global Mapper software. The approach used in this simulation is topographical, based on Digital Elevation Model (DEM) data. DEM data have been widely used in hydrological research. The results obtained from this flood simulation are the distribution of flood inundation and the water level. The location of the inundation serves to determine the extent of the flooding with reference to the 50-100 year design flood, while the water level serves to provide early warning information. Both will be very useful for estimating the losses that future flooding would cause in Gresik Regency, so that the Gresik Regency Regional Disaster Management Agency can take precautions before a flood disaster strikes.
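The topographical approach described above can be illustrated with a minimal "bathtub" model: any cell of a DEM grid whose ground elevation lies below the design flood level is flagged as inundated, and the local water depth is the difference. The elevations and flood stage below are made-up values; a real study would use DEM rasters in GIS software such as Global Mapper.

```python
# Toy DEM grid of ground elevations in meters (illustrative values only).
DEM = [
    [12.0, 11.5, 11.0, 10.2],
    [11.8, 10.9, 10.1,  9.7],
    [11.2, 10.3,  9.8,  9.4],
]

FLOOD_LEVEL = 10.5  # hypothetical 50-100 year flood stage (m)

def inundation_map(dem, level):
    """Return water depth per cell (0.0 where the cell stays dry)."""
    return [[max(0.0, level - z) for z in row] for row in dem]

depths = inundation_map(DEM, FLOOD_LEVEL)
flooded_cells = sum(d > 0 for row in depths for d in row)
print(flooded_cells)  # number of inundated cells in the toy grid
```

The inundated-cell extent maps to the flood distribution output, and the per-cell depths map to the water-level output used for early warning.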

Keywords: flood hazard, simulation model, potential inundation, global mapper, Gresik Regency

Procedia PDF Downloads 84
1187 Evolving Mango Metaphor In Diaspora Literature: Maintaining Immigrant Identity Through Foodways

Authors: Constance Kirker

Abstract:

This paper examines examples of the shared use of mango references as a culinary metaphor powerful in maintaining immigrant identity in the works of diaspora authors from a variety of regions of the world, including South Asia, the Caribbean, and Africa, and across a variety of genres, including novels, culinary memoirs, and children’s books. There has been past criticism of so-called sari-mango literature, suggesting that use of the image of mango is a cliché, even “lazy,” attempt to “exoticize” and sentimentalize South Asia in particular. A broader review across national boundaries reveals that diaspora authors, including those beyond South Asia, write nostalgically about mango, as much about the messy “full body” tactile experience of eating a mango as about the “exotic” quality of mango representing the “otherness” of their home country. Many of the narratives detail universal childhood food experiences that are more shared than exotic, such as a desire to subvert the adult societal rules of neatness and get very messy, or memories of small but memorable childhood transgressions such as stealing mangoes from a neighbor’s tree. In recent years, food technology has evolved, and mangoes have become more familiar and readily available in Europe and America, from smoothies and baby food to dried fruit snacks. The meaning associated with the imagery of mangoes for both writers and readers in diaspora literature evolves as well, and authors do not have to heed Salman Rushdie’s command, “There must be no tropical fruits in the title. No mangoes.”

Keywords: identity, immigrant diaspora, culinary metaphor, food studies

Procedia PDF Downloads 111
1186 Teachers' Technological Pedagogical and Content Knowledge and Technology Integration in Teaching and Learning in a Small Island Developing State: A Concept Paper

Authors: Aminath Waseela, Vinesh Chandra, Shaun Nykvist

Abstract:

The success of technology integration initiatives hinges on the knowledge and skills of teachers to effectively integrate technology in classroom teaching. Consequently, gaining an understanding of teachers' technology knowledge and its integration can provide useful insights on strategies that can be adopted to enhance teaching and learning, especially in developing-country contexts where research is scant. This paper extends existing knowledge on teachers' use of technology by developing a conceptual framework that recognises how three key types of knowledge (content, pedagogy, and technology) and their integration are at the crux of teachers' technology use, while at the same time being amenable to empirical studies. Although the aforementioned knowledge is important for effective use of technology that can result in enhanced student engagement, literature on how this knowledge leads to effective technology use and enhanced student engagement is limited. Thus, this theoretical paper proposes a framework to explore teachers' knowledge through the lens of Technological Pedagogical and Content Knowledge (TPACK); the integration of technology in classroom teaching through the Substitution Augmentation Modification and Redefinition (SAMR) model; and how this affects students' learning through the lens of Bloom's Digital Taxonomy (BDT). Studies using this framework could inform the design of professional development to support teachers in developing skills for effective use of available technology that can enhance student learning engagement.

Keywords: information and communication technology, ICT, in-service training, small island developing states, SIDS, student engagement, technology integration, technology professional development training, technological pedagogical and content knowledge, TPACK

Procedia PDF Downloads 147
1185 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service

Authors: Lai Wenfang

Abstract:

This study describes how to use artificial intelligence (AI) technology to build a user-oriented platform for integrated archival service. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progression of information and communication technology (ICT), the NAA has built many systems to provide archival services. In order to cope with new challenges such as new ICT, artificial intelligence, and blockchain, the NAA will use natural language processing (NLP) and machine learning (ML) techniques to build a training model and propose suggestions based on the data sent to the platform. The NAA expects that the platform will not only automatically inform sending agencies' staff which record catalogues violate the transfer or destruction rules, but will also use the model to find details hidden in the catalogues and suggest to NAA staff whether the records should be kept, shortening the auditing time. The platform keeps all users' browsing trails, so that it can predict what kinds of archives a user could be interested in, recommend search terms through visualization, and inform users of newly arrived archives. In addition, according to the Archives Act, NAA staff must spend a lot of time marking or removing personal data, classified data, etc. before archives are provided. To upgrade the archives access service process, the platform will use text recognition patterns to black out such data automatically; staff only need to correct errors and upload the revised version, and accuracy will improve as the platform learns. In short, the purpose of the platform is to advance the government's digital transformation and implement the vision of a service-oriented smart government.
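The automatic blackout step can be sketched with simple regular expressions standing in for the platform's text-recognition patterns. The ID and phone-number formats below are hypothetical placeholders, not the actual patterns or data the NAA would use.

```python
import re

# Illustrative personal-data patterns; real formats would differ and a
# production system would use learned recognizers rather than fixed regexes.
PATTERNS = [
    re.compile(r"\b[A-Z]\d{9}\b"),   # hypothetical national ID format
    re.compile(r"\b\d{4}-\d{6}\b"),  # hypothetical phone number format
]

def redact(text, mask="[REDACTED]"):
    """Replace every match of the personal-data patterns with a mask."""
    for pattern in PATTERNS:
        text = pattern.sub(mask, text)
    return text

catalogue_entry = "Applicant A123456789, contact 0912-345678, file 44."
print(redact(catalogue_entry))
```

Staff review would then correct any over- or under-redaction, and those corrections would serve as feedback to improve the recognizers over time, as the abstract describes.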

Keywords: artificial intelligence, natural language processing, machine learning, visualization

Procedia PDF Downloads 174
1184 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, networks are becoming increasingly large due to the demands of practical applications, which poses a significant challenge to constructing high-performance implementations of deep neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. It is therefore particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various FPGA-based accelerator designs and implementations across different devices and network models are reviewed, and counterparts on Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs), and Digital Signal Processors (DSPs) are compared to present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods for FPGA platforms to explore the opportunities and challenges for future research, and we give a prospect for the future development of FPGA-based accelerators.

Keywords: deep learning, field programmable gate array, FPGA, hardware accelerator, convolutional neural networks, CNN

Procedia PDF Downloads 128
1183 Portable Cardiac Monitoring System Based on Real-Time Microcontroller and Multiple Communication Interfaces

Authors: Ionel Zagan, Vasile Gheorghita Gaitan, Adrian Brezulianu

Abstract:

This paper presents the contributions in designing a mobile system named Tele-ECG implemented for remote monitoring of cardiac patients. For better flexibility of this application, the authors chose to implement a local memory and multiple communication interfaces. The project described in this presentation is based on the ARM Cortex M0+ microcontroller and the dedicated ADAS1000 chip necessary for the collection and transmission of electrocardiogram (ECG) signals from the patient to the microcontroller, without altering the performance and stability of the system. The novelty brought by this paper is the implementation of a remote monitoring system for cardiac patients with real-time behavior and multiple interfaces. The microcontroller is responsible for processing the digital signals corresponding to the ECG and also for implementing the communication interface with the main server, using the GSM/Bluetooth SIMCOM SIM800C module. This paper presents all the characteristics of the Tele-ECG project, representing a feasible implementation in the biomedical field. Acknowledgment: This paper was supported by the project 'Development and integration of a mobile tele-electrocardiograph in the GreenCARDIO© system for patients monitoring and diagnosis - m-GreenCARDIO', Contract no. BG58/30.09.2016, PNCDI III, Bridge Grant 2016, using the infrastructure from the project 'Integrated Center for research, development and innovation in Advanced Materials, Nanotechnologies, and Distributed Systems for fabrication and control', Contract No. 671/09.04.2015, Sectoral Operational Program for Increase of the Economic Competitiveness co-funded from the European Regional Development Fund.

Keywords: Tele-ECG, real-time cardiac monitoring, electrocardiogram, microcontroller

Procedia PDF Downloads 272
1182 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic assessments.
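The feature-fusion step described above can be sketched as follows: term counts extracted from a report's text are concatenated with image-derived measurements into one vector, which would then feed a Random Forest classifier (for instance, scikit-learn's RandomForestClassifier, omitted here). The vocabulary, report text, and image feature values are illustrative assumptions, not the study's actual pipeline.

```python
import re

# Tiny illustrative vocabulary of findings terms; a real system would use
# richer NLP features (TF-IDF, embeddings, or LLM-derived features).
VOCAB = ["opacity", "effusion", "normal", "cardiomegaly"]

def text_features(report):
    """Bag-of-words counts of vocabulary terms in the report text."""
    words = re.findall(r"[a-z]+", report.lower())
    return [words.count(term) for term in VOCAB]

def fuse(report, image_features):
    """Concatenate NLP term counts with image-derived features into one vector."""
    return text_features(report) + list(image_features)

report = "Patchy opacity in the right lung base; small pleural effusion."
image_feats = [0.73, 0.12]  # hypothetical lesion-area and asymmetry scores
vector = fuse(report, image_feats)
print(vector)
```

Each fused vector corresponds to one case; stacking them over the dataset yields the design matrix the classifier is trained on.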

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 44
1181 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon

Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn

Abstract:

The dynamic changes of Land Use and Land Cover (LULC) in Yangon have generally resulted in improvements in human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to determine exactly how environmental factors influence the LULC situation at various scales, because the natural environment is composed of non-homogeneous surface features and the satellite data therefore contain mixed pixels. The main objective of this study is to calculate the accuracy of change detection of LULC changes using Support Vector Machines (SVMs). The main data for this research were satellite images from 1996, 2006, and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and SVMs were applied with a soft approach at the allocation as well as the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated areas decreased (by an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (an average total of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping.
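The change-detection tabulation described above can be sketched as a per-pixel transition matrix between two classification maps: each entry counts pixels moving from one class in the earlier image to another class in the later image. The tiny label arrays below are illustrative, not the Yangon data.

```python
from collections import Counter

CLASSES = ["vegetation", "built_up", "water"]

# Toy per-pixel class labels for two dates (illustrative, six pixels).
map_1996 = ["vegetation", "vegetation", "water", "vegetation", "built_up", "water"]
map_2015 = ["built_up",   "vegetation", "water", "built_up",   "built_up", "water"]

def transition_matrix(before, after):
    """Count pixels moving from each 'before' class to each 'after' class."""
    counts = Counter(zip(before, after))
    return {a: {b: counts.get((a, b), 0) for b in CLASSES} for a in CLASSES}

tm = transition_matrix(map_1996, map_2015)
print(tm["vegetation"]["built_up"])  # vegetation pixels converted to built-up
```

Summing a row gives a class's original extent and the off-diagonal entries give its losses, which is how statements like "vegetation decreased through conversion to built-up area" are quantified.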

Keywords: land use and land cover change, change detection, image processing, support vector machines

Procedia PDF Downloads 139
1180 Electrospun Alginate Nanofibers Containing Spirulina Extract Double-Layered with Polycaprolactone Nanofibers

Authors: Seon Yeong Byeon, Hwa Sung Shin

Abstract:

Nanofibrous sheets are of interest in the beauty industry due to their moisturizing properties, adhesion to skin, and ability to deliver nutrient materials. The benefits and functions of cosmetic products should not be considered apart from safety; thus, a non-toxic manufacturing process is ideal when fabricating such products. In this study, we developed cosmetic patches consisting of alginate and Spirulina extract, a marine resource with antibacterial and antioxidant effects, without the addition of harmful cross-linkers. The patches obtained their structural stability by layer-upon-layer electrospinning of an alginate layer on a previously spread polycaprolactone (PCL) layer instead of a crosslinking method. The morphological characteristics, release of Spirulina extract, water absorption, skin adhesiveness, and cytotoxicity of the double-layered patches were assessed. Scanning electron microscopy (SEM) images showed that the addition of Spirulina extract made the fibers of the alginate layers thinner. Impregnation with Spirulina extract increased the patches' hydrophilicity, moisture absorption ability, and skin adhesiveness. In addition, wetting the pre-dried patches released the Spirulina extract within 30 min. The patches showed no cytotoxicity in a human keratinocyte cell-based MTT assay, but rather increased cell viability. All the results indicate that the bioactive and hydro-adhesive double-layered patches are well suited to bioproducts for personal skin care in the trend of ‘a mask pack a day’.

Keywords: alginate, cosmetic patch, electrospun nanofiber, polycaprolactone, Spirulina extract

Procedia PDF Downloads 347