Search results for: artificial recharge of groundwater
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2686


286 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentration with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as the database to apply the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. A backpropagation learning strategy was developed to train the deep learning model. The established method was tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model performed better at estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
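As an illustration of the pipeline described above, the sketch below (not the authors' code) wires a small CNN feature extractor for three-band GOCI patches to a dense ANN head that also takes the weather observations; the patch size, layer sizes, and training call are assumptions.

```python
# Illustrative sketch: CNN features from 3-band GOCI patches (443, 490, 660 nm)
# are concatenated with weather inputs and regressed to algae concentration.
import tensorflow as tf
from tensorflow.keras import layers, Model

patch = layers.Input(shape=(32, 32, 3), name="goci_patch")        # hypothetical patch size
weather = layers.Input(shape=(3,), name="weather")                # humidity, temperature, pressure

x = layers.Conv2D(16, 3, activation="relu")(patch)                # CNN feature extractor
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

h = layers.Concatenate()([x, weather])                            # fuse image and weather features
h = layers.Dense(64, activation="relu")(h)
out = layers.Dense(1, activation="relu", name="algae_mg_m3")(h)   # non-negative concentration

model = Model([patch, weather], out)
model.compile(optimizer="adam", loss="mse")                       # backpropagation-based training
# model.fit([patches, weather_obs], concentrations, epochs=50)    # with matched in-situ labels
```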

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 183
285 Accessible Mobile Augmented Reality App for Art Social Learning Based on Technology Acceptance Model

Authors: Covadonga Rodrigo, Felipe Alvarez Arrieta, Ana Garcia Serrano

Abstract:

Mobile augmented reality technologies have become very popular in recent years in the educational field. Researchers have studied how these technologies improve student engagement and lead to a better understanding of the learning process. However, few studies have addressed the accessibility of these new technologies applied to the digital humanities. The goal of our research is to develop an accessible mobile application with embedded augmented reality featuring the main characters of the artwork, together with gamification events accompanied by multi-sensorial activities. The mobile app conducts a learning itinerary around the artistic work, driving the user experience in and out of the museum. The learning design follows an inquiry-based methodology and social learning conducted through interaction with social networks. The software application is being designed following a user-centered approach and the universal design for learning (UDL) principles to ensure the best level of accessibility for all. The mobile augmented reality application starts by recognizing a marker on a masterpiece of a museum using the camera of the mobile device. The augmented reality information (history, author, 3D images, audio, quizzes) is shown through virtual main characters that emerge from the artwork. To comply with the UDL principles, we use a version of the technology acceptance model (TAM) to study ease of use and perceived usefulness, extended by the authors with specific indicators for measuring accessibility issues. Following a rapid prototyping method for development, the first app has recently been produced, fulfilling the EN 301 549 standard and the W3C accessibility guidelines for mobile development. A TAM-based web questionnaire with 214 participants with different kinds of disabilities was previously conducted to gather information and feedback on user preferences regarding an artistic work in the Museo del Prado, the level of acceptance of technology innovations, and the ease of use of mobile elements. Preliminary results show that people with disabilities felt very comfortable using mobile apps and internet connections. The augmented reality elements seem to offer an added value that is highly engaging and motivating for students.

Keywords: H.5.1 (multimedia information systems), artificial, augmented and virtual realities, evaluation/methodology

Procedia PDF Downloads 135
284 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning

Authors: Madhawa Basnayaka, Jouni Paltakari

Abstract:

Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, longer reading distance, and fast automatic identification without human interference, unlike already available technologies such as optical barcodes. The design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by the constraints of data encoding capacity and the ability to design accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the design parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are limited in how efficiently they can optimize design parameters because of their speed and resource consumption. In this work, a deep neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal ones. The proposed bi-directional DNN has two simultaneously running neural networks, namely spectrum prediction and design parameter prediction. First, the spectrum prediction DNN was trained to minimize the mean square error (MSE). After the training process was completed, the spectrum prediction DNN was able to accurately predict the EM spectrum for given input design parameters within a few seconds. Then, the trained spectrum prediction DNN was connected to the design parameter prediction DNN, and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN. The model was evaluated using a randomly generated spectrum, and the tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. The number of iterative computer simulations has been significantly decreased by this approach. Therefore, highly efficient and ultrafast bi-directional DNN models enable rapid and complicated chipless RFID tag designs.
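The bi-directional (tandem) arrangement described above can be sketched as follows; the number of design parameters, spectrum samples, and layer sizes are assumptions, not the authors' configuration. The forward spectrum predictor is trained first and then frozen while the design-parameter predictor is trained through it on a spectrum-reconstruction loss.

```python
# Minimal tandem-network sketch for inverse chipless-tag design (illustrative sizes).
import tensorflow as tf
from tensorflow.keras import layers, Model

N_P, N_F = 8, 128   # assumed number of geometric parameters / sampled spectrum points

# 1) Forward model: design parameters -> EM spectrum (trained first with MSE loss).
p_in = layers.Input(shape=(N_P,))
f = layers.Dense(128, activation="relu")(p_in)
f = layers.Dense(128, activation="relu")(f)
spec_out = layers.Dense(N_F)(f)
forward = Model(p_in, spec_out, name="spectrum_predictor")
forward.compile(optimizer="adam", loss="mse")
# forward.fit(params_train, spectra_train, epochs=200)

# 2) Inverse model trained through the frozen forward model: a target spectrum goes in,
#    predicted parameters come out, and the loss compares the reconstructed spectrum
#    (forward model applied to the predicted parameters) with the target spectrum.
forward.trainable = False
s_in = layers.Input(shape=(N_F,))
g = layers.Dense(128, activation="relu")(s_in)
g = layers.Dense(128, activation="relu")(g)
p_pred = layers.Dense(N_P, name="design_parameters")(g)
s_rec = forward(p_pred)
tandem = Model(s_in, s_rec, name="design_parameter_predictor")
tandem.compile(optimizer="adam", loss="mse")
# tandem.fit(spectra_train, spectra_train, epochs=200)
```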

Keywords: artificial intelligence, chipless RFID, deep learning, machine learning

Procedia PDF Downloads 50
283 Improved Technology Portfolio Management via Sustainability Analysis

Authors: Ali Al-Shehri, Abdulaziz Al-Qasim, Abdulkarim Sofi, Ali Yousef

Abstract:

The oil and gas industry has played a major role in improving the prosperity of mankind and driving the world economy. According to estimates from the International Energy Agency (IEA) and the U.S. Energy Information Administration (EIA), the world will continue to rely heavily on hydrocarbons for decades to come. This growing energy demand mandates taking sustainability measures to prolong the availability of reliable and affordable energy sources and to lower their environmental impact. Unlike in any other industry, oil and gas upstream operations are energy-intensive and scattered over large zonal areas. These challenging conditions require unique sustainability solutions. In recent years there has been a concerted effort by the oil and gas industry to develop and deploy innovative technologies to maximize efficiency, reduce carbon footprint, reduce CO2 emissions, and optimize resource and material consumption. In the past, research and development (R&D) in the exploration and production sector was primarily driven by maximizing profit through higher hydrocarbon recovery and new discoveries. Environmentally friendly and sustainable technologies are increasingly being deployed to balance sustainability and profitability. Analyzing technologies and their sustainability impact is increasingly being used in corporate decision-making for improved portfolio management and for allocating valuable resources toward technology R&D. This paper articulates and discusses a novel workflow to identify strategic sustainable technologies for improved portfolio management by addressing existing and future upstream challenges. It uses a systematic approach that relies on sustainability key performance indicators (KPIs), including an energy efficiency quotient, carbon footprint, and CO2 emissions. The paper provides examples of various technologies, including CCS, reducing water cuts, automation, using renewables, and energy efficiency. The use of 4IR technologies such as artificial intelligence, machine learning, and data analytics is also discussed. Overlapping technologies, areas of collaboration, and synergistic relationships are identified. The unique sustainability analyses provide improved decision-making on technology portfolio management.
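Purely as an illustration of KPI-based screening (the paper does not publish its scoring scheme), the sketch below ranks hypothetical candidate technologies by a weighted composite of normalized sustainability KPIs; all technology names, KPI values, and weights are placeholders.

```python
# Illustrative weighted-KPI ranking of candidate technologies (placeholder data).
import numpy as np

techs = ["CCS", "Water-cut reduction", "Automation", "Renewables integration"]
# rows: technology; columns: [energy efficiency quotient, carbon footprint, CO2 emissions]
kpi = np.array([[0.6, 0.3, 0.2],
                [0.5, 0.5, 0.6],
                [0.7, 0.6, 0.7],
                [0.8, 0.2, 0.1]])
benefit = np.array([True, False, False])   # efficiency is a benefit; footprint/emissions are costs
weights = np.array([0.4, 0.3, 0.3])        # hypothetical KPI weights summing to 1

norm = (kpi - kpi.min(0)) / (kpi.max(0) - kpi.min(0))   # min-max normalize each KPI column
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]             # invert cost-type KPIs
score = norm @ weights                                  # composite sustainability score
for tech, s in sorted(zip(techs, score), key=lambda x: -x[1]):
    print(f"{tech:25s} {s:.2f}")
```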

Keywords: sustainability, oil and gas, technology portfolio, key performance indicator

Procedia PDF Downloads 183
282 Adsorptive Media Selection for Bilirubin Removal: An Adsorption Equilibrium Study

Authors: Vincenzo Piemonte

Abstract:

The liver is a complex, large-scale biochemical reactor that plays a unique role in human physiology. When the liver ceases to perform its physiological activity, a functional replacement is required. At present, liver transplantation is the only clinically effective method of treating severe liver disease. However, this therapeutic approach is hampered by the disparity between organ availability and the number of patients on the waiting list. To overcome this critical issue, research activities have focused on liver support device systems (LSDs) designed to bridge patients to transplantation or to keep them alive until the recovery of native liver function. In recirculating albumin dialysis devices, such as MARS (Molecular Adsorbent Recirculating System), adsorption is one of the fundamental steps in albumin-dialysate regeneration. Among the albumin-bound toxins that must be removed from blood during liver-failure therapy, bilirubin and tryptophan can be considered representative of two different toxin classes: the first is not water-soluble at physiological blood pH and is strongly bound to albumin, while the second is loosely albumin-bound and partially water-soluble at pH 7.4. Fixed-bed units are normally used for this task, and the design of such units requires information on both toxin adsorption equilibrium and kinetics. The most common adsorptive media used in LSDs are activated carbon, non-ionic polymeric resins, and anionic resins. In this paper, bilirubin adsorption isotherms on different adsorptive media, such as polymeric resin, albumin-coated resin, anionic resin, activated carbon, and alginate beads with entrapped albumin, are presented. By comparing all the results, it can be stated that the adsorption capacity for bilirubin of the five different media increases in the following order: alginate beads < polymeric resin < albumin-coated resin < activated carbon < anionic resin. The main focus of this paper is to provide useful guidelines for the optimization of liver support devices that implement adsorption columns to remove albumin-bound toxins from albumin dialysate solutions.
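A minimal sketch of how a single adsorption isotherm can be quantified for comparison across media, assuming a Langmuir form q = qmax·K·C/(1 + K·C); the equilibrium data below are illustrative placeholders, not the measured bilirubin values.

```python
# Fit a Langmuir isotherm to illustrative equilibrium data for one medium.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, K):
    return qmax * K * C / (1.0 + K * C)

C_eq = np.array([5.0, 10.0, 20.0, 40.0, 80.0])     # equilibrium concentration (mg/L), placeholder
q_eq = np.array([3.1, 5.2, 7.8, 9.9, 11.4])        # adsorbed amount (mg/g), placeholder

(qmax, K), _ = curve_fit(langmuir, C_eq, q_eq, p0=(12.0, 0.05))
print(f"qmax = {qmax:.2f} mg/g, K = {K:.3f} L/mg")
# Comparing fitted qmax values across media reproduces the kind of capacity ranking
# reported above (alginate beads < polymeric resin < ... < anionic resin).
```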

Keywords: adsorptive media, adsorption equilibrium, artificial liver devices, bilirubin, mathematical modelling

Procedia PDF Downloads 256
281 Yield Loss Estimation Using Multiple Drought Severity Indices

Authors: Sara Tokhi Arab, Rozo Noguchi, Tofeal Ahamed

Abstract:

Drought is a natural disaster that occurs in a region due to a lack of precipitation and high temperatures over a continuous period or in a single season as a consequence of climate change. Precipitation deficits and prolonged high temperatures mostly affect the agricultural sector, water resources, socioeconomics, and the environment. Consequently, drought causes agricultural product loss, food shortages, famines, migration, and natural resource degradation in a region. Agriculture is the first sector affected by drought. Therefore, it is important to develop an agricultural drought risk and loss assessment to mitigate drought impacts in the agricultural sector. In this context, the main purpose of this study was to assess yield loss using a composite drought index (CDI) in drought-affected vineyards. The CDI was developed for the years 2016 to 2020 by combining five indices: the vegetation condition index (VCI), the temperature condition index (TCI), the deviation of NDVI from the long-term mean (NDVI DEV), the normalized difference moisture index (NDMI), and the precipitation condition index (PCI). Moreover, a quantitative principal component analysis (PCA) approach was used to assign a weight to each input parameter, and the weighted indices were then combined into one composite drought index. Finally, Bayesian regularized artificial neural networks (BRANNs) were used to evaluate the yield variation in each affected vineyard. The composite drought index indicated that moderate to severe droughts occurred across Kabul Province during 2016 and 2018. Moreover, the results showed that no vineyard was in extreme drought conditions; therefore, only the severe and moderate conditions were considered. According to the BRANN results, R = 0.87 and R = 0.94 in severe drought conditions for 2016 and 2018, and R = 0.85 and R = 0.91 in moderate drought conditions for 2016 and 2018, respectively. In Kabul Province, there was a significant yield deficit in the vineyards during the two drought years. According to the findings, 2018 had the highest rate of loss, almost 7 t/ha, whereas in 2016 the loss rate was about 1.2 t/ha. This research will support stakeholders in identifying drought-affected vineyards and support farmers during severe droughts.
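One common way to derive PCA-based weights for such a composite index is sketched below; the index values are random placeholders, and using the normalized absolute first-component loadings as weights is an assumption about the weighting scheme.

```python
# Illustrative PCA weighting of five drought indices into one composite index.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((500, 5))                       # columns: VCI, TCI, NDVI_DEV, NDMI, PCI (placeholder)

Z = StandardScaler().fit_transform(X)          # standardize each index
pca = PCA(n_components=1).fit(Z)
loadings = np.abs(pca.components_[0])
weights = loadings / loadings.sum()            # normalize loadings to sum to 1

cdi = Z @ weights                              # composite drought index per sample
print("weights:", np.round(weights, 3))
```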

Keywords: grapes, composite drought index, yield loss, satellite remote sensing

Procedia PDF Downloads 157
280 Evaluation of Azo Dye Toxicity Using Some Haematological and Histopathological Alterations in Fish Catla Catla

Authors: Jagruti Barot

Abstract:

The textile industry plays a major role in the economy of India, but on the other side of the coin it is a major source of water pollution. As azo dyes are the largest dye class, they are extensively used in many fields such as the textile industry, leather tanning, paper production, food, colour photography, pharmaceuticals and medicine, cosmetics, hair colouring, wood staining, and agricultural, biological, and chemical research. In addition, they can have acute and/or chronic effects on organisms, depending on their concentration and the length of exposure, when discharged as effluent into the environment. The aim of this study was to assess the genotoxic and histotoxic potential of environmentally relevant concentrations of RR 120 on Catla catla, important edible freshwater fingerlings. Healthy Catla catla fingerlings were procured from a government fish farm and acclimatized in a continuously aerated 100 L glass aquarium in the laboratory for 15 days. Following APHA methods, physico-chemical parameters such as temperature, pH, dissolved oxygen, alkalinity, and total hardness were measured and maintained. The water, along with excreta, was changed every 24 h. All fingerlings were fed artificial food pellets once a day in proportion to body weight. After 15 days, the fingerlings were divided into 5 groups (10 in each) and exposed to various concentrations of RR 120 (control, 10, 20, 30 and 40 mg/L), and samples (peripheral blood, gills, and kidney) were collected and analyzed at 96 h intervals. All results were compared with the control. Micronuclei (MN), nuclear buds (NB), fragmented-apoptotic (FA), and bi-nucleated (BN) cells were observed in blood cells and in the tissues (gill and kidney cells). Prominent histopathological alterations were noticed in the gills, such as aneurism, hyperplasia, a degenerated central axis, lifting of the gill epithelium, and curved secondary gill lamellae. Similarly, the kidney showed detrimental changes such as shrunken glomeruli with increased periglomerular space and degenerated renal tubules. Both the haematological and histopathological changes clearly reveal the toxic potential of RR 120. This work concludes that water pollution assessment can be performed with these two biomarkers, which provide a baseline for further chromosomal or molecular work.

Keywords: micronuclei, genotoxicity, RR 120, Catla catla

Procedia PDF Downloads 207
279 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure, such as canopy height. However, LiDAR coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, such as Sentinel-2, can cover large forest areas with a high repeat rate, but they do not contain height information. Hence, exploring how to integrate LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that the mean absolute error (MAE) for 2018 is 2.93 m for the RFR model and 1.71 m for the CNN model, while the MAE for 2021 is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
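A sketch of the assumed RFR workflow is given below: Sentinel-2 band values per 10 m pixel are regressed against LiDAR-derived canopy heights, and the trained model is then applied to pixels covered only by Sentinel-2. The arrays are placeholders for the real rasters.

```python
# Illustrative Random Forest regression of canopy height from Sentinel-2 bands.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((10_000, 10))                   # per-pixel Sentinel-2 band values (placeholder)
y = rng.random(10_000) * 25.0                  # LiDAR CHM heights in metres (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
rfr = RandomForestRegressor(n_estimators=300, n_jobs=-1, random_state=42).fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, rfr.predict(X_te)):.2f} m")
# chm_pred = rfr.predict(sentinel2_pixels_2021).reshape(rows, cols)  # predicted CHM raster
```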

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 92
278 Artificial Intelligence in Ethiopian Higher Education: The Impact of Digital Readiness Support, Acceptance, Risk, and Trust on Adoption

Authors: Merih Welay Welesilassie

Abstract:

Understanding educators' readiness to incorporate AI tools into their teaching methods requires comprehensively examining the influencing factors. This understanding is crucial, given the potential of these technologies to personalise learning experiences, improve instructional effectiveness, and foster innovative pedagogical approaches. This study evaluated the factors affecting teachers' adoption of AI tools in their English language instruction by extending the Technology Acceptance Model (TAM) to encompass digital readiness support, perceived risk, and trust. A cross-sectional quantitative survey was conducted with 128 English language teachers, supplemented by qualitative data collected from 15 English teachers. The structural model analysis indicated that the implementation of AI tools in Ethiopian higher education was notably influenced by digital readiness support, perceived ease of use, perceived usefulness, perceived risk, and trust. Digital readiness support positively impacted perceived ease of use, usefulness, and trust while reducing safety and privacy risks. Perceived ease of use positively correlated with perceived usefulness but negatively influenced trust. Furthermore, perceived usefulness strengthened trust in AI tools, while perceived safety and privacy risks significantly undermined trust. Trust was crucial in increasing educators' willingness to adopt AI technologies. The qualitative analysis revealed that the teachers exhibited strong content and pedagogical knowledge but lacked technology-related knowledge. Moreover, it was found that the teachers did not utilise digital tools to teach English. The study identified several obstacles to incorporating digital tools into English lessons, such as insufficient digital infrastructure, a shortage of educational resources, inadequate professional development opportunities, and challenging policies and governance. The findings provide valuable guidance for educators, inform policymakers about creating supportive digital environments, and offer a foundation for further investigation into technology adoption in educational settings in Ethiopia and similar contexts.

Keywords: digital readiness support, AI acceptance, perceived risk, AI trust

Procedia PDF Downloads 18
277 Navigating Disruption: Key Principles and Innovations in Modern Management for Organizational Success

Authors: Ahmad Haidar

Abstract:

This research paper investigates the concept of modern management, concentrating on the development of managerial practices and the adoption of innovative strategies in response to the fast-changing business landscape caused by Artificial Intelligence (AI). The study begins by examining the historical context of management theories, tracing the progression from classical to contemporary models and identifying key drivers of change. Through a comprehensive review of existing literature and case studies, this paper provides valuable insights into the principles and practices of modern management, offering a roadmap for organizations aiming to navigate the complexities of the contemporary business world. The paper examines the growing role of digital technology in modern management, focusing on incorporating AI, machine learning, and data analytics to streamline operations and facilitate informed decision-making. Moreover, the research highlights the emergence of new principles, such as adaptability, flexibility, public participation, trust, transparency, and a digital mindset, as crucial components of modern management. The role of business leaders is also investigated by studying contemporary leadership styles, such as transformational, situational, and servant leadership, emphasizing the significance of emotional intelligence, empathy, and collaboration in fostering a healthy organizational culture. Furthermore, the research delves into the crucial roles of environmental sustainability, corporate social responsibility (CSR), and corporate digital responsibility (CDR), as organizations strive to balance economic growth with ethical considerations and long-term viability. The primary research question for this study is: "What are the key principles, practices, and innovations that define modern management, and how can organizations effectively implement these strategies to thrive in the rapidly changing business landscape?" The research contributes to a comprehensive understanding of modern management by examining its historical context, the impact of digital technologies, the importance of contemporary leadership styles, and the role of CSR and CDR in today's business landscape.

Keywords: modern management, digital technology, leadership styles, adaptability, innovation, corporate social responsibility, organizational success, corporate digital responsibility

Procedia PDF Downloads 66
276 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
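A minimal sketch of the sample-generation step is given below: fractional Gaussian noise is produced by the exact Cholesky method and fed into an Euler scheme for the fOU dynamics dX = θ(μ − X)dt + σ dB^H; the parameter values are illustrative, and the authors' fast generator is not reproduced here.

```python
# Illustrative fOU path generation for building a neural-network training set.
import numpy as np

def fgn(n, H, dt, rng):
    """Fractional Gaussian noise: increments of fBm with Hurst H over step dt (Cholesky method)."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H) + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])] * dt ** (2 * H)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return L @ rng.standard_normal(n)

def fou_path(n, dt, H, theta, mu, sigma, x0, rng):
    """Euler scheme for dX = theta*(mu - X)dt + sigma*dB^H."""
    dB = fgn(n, H, dt, rng)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dB[i]
    return x

rng = np.random.default_rng(0)
path = fou_path(n=500, dt=1 / 252, H=0.1, theta=2.0, mu=0.0, sigma=0.3, x0=0.0, rng=rng)
# Many such (path, [H, theta, mu, sigma]) pairs would form the supervised training set.
```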

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 118
275 High-Throughput Artificial Guide RNA Sequence Design for Type I, II and III CRISPR/Cas-Mediated Genome Editing

Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii

Abstract:

A huge revolution in genome engineering has emerged with the discovery of CRISPR (clustered regularly interspaced short palindromic repeats) and the CRISPR-associated (Cas) genes in bacteria. The function of the type II Streptococcus pyogenes (Sp) CRISPR/Cas9 system has been confirmed in various species. Other S. thermophilus (St) CRISPR-Cas systems, CRISPR1-Cas and CRISPR3-Cas, have also been reported to prevent phage infection. The CRISPR1-Cas system interferes by cleaving foreign dsDNA entering the cell in a length-specific and orientation-dependent manner. The S. thermophilus CRISPR3-Cas system also acts by cleaving phage dsDNA genomes at the same specific position inside the targeted protospacer as observed in the CRISPR1-Cas system. It is worth mentioning that, for effective DNA cleavage activity, RNA-guided Cas9 orthologs require their own specific PAM (protospacer adjacent motif) sequences. Activity levels are based on the sequence of the protospacer and specific combinations of favorable PAM bases. Therefore, based on the specific length and sequence of the PAM, followed by a constant length of the target site for the three orthologs of the Cas9 protein, a well-organized procedure is required for high-throughput and accurate mining of possible target sites in a large genomic dataset. Consequently, we created a reliable procedure to explore potential gRNA sequences for the type I (Streptococcus thermophilus), type II (Streptococcus pyogenes), and type III (Streptococcus thermophilus) CRISPR/Cas systems. To mine CRISPR target sites, four different search modes for sgRNA binding to the target DNA strand were applied: i) coding-strand search, ii) anti-coding-strand search, iii) both-strand search, and iv) paired-gRNA search. The output of this procedure highlights the power of comparative genome mining for different CRISPR/Cas systems. It could yield a repertoire of Cas9 variants with expanded gRNA design capabilities and will pave the way for further advances in genome and epigenome engineering.
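A minimal sketch of the target-site mining step for the S. pyogenes system (a 20-nt protospacer followed by an NGG PAM, scanned on both strands) is shown below; the PAM rules for the S. thermophilus CRISPR1 and CRISPR3 systems would be substituted analogously, and the toy sequence is a placeholder.

```python
# Illustrative protospacer/PAM scan for SpCas9 (20-nt protospacer + NGG PAM).
import re

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def find_sp_targets(seq, protospacer_len=20):
    """Return (strand, start, protospacer, PAM) for every NGG PAM on both strands."""
    hits = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for m in re.finditer(r"(?=([ACGT]GG))", s):           # overlapping NGG sites
            pam_start = m.start(1)
            if pam_start >= protospacer_len:
                proto = s[pam_start - protospacer_len:pam_start]
                hits.append((strand, pam_start - protospacer_len, proto, m.group(1)))
    return hits

dna = "ATGCTGACCGGTTAGGCATCGGATCCAGGTTACGGAGGTCCTA"   # toy sequence
for hit in find_sp_targets(dna):
    print(hit)
```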

Keywords: CRISPR/Cas systems, gRNA mining, Streptococcus pyogenes, Streptococcus thermophilus

Procedia PDF Downloads 257
274 Breast Cancer Sensing and Imaging Utilizing a Printed Ultra-Wide Band Spherical Sensor Array

Authors: Elyas Palantei, Dewiani, Farid Armin, Ardiansyah

Abstract:

A high-precision printed microwave sensor for sensing and monitoring potential breast cancer in women's breast tissue was numerically optimized. The single UWB printed sensor element, successfully modeled through several numerical optimizations, was fabricated in multiple copies and incorporated into a bra to form the spherical sensor array. One UWB microwave sensor design obtained through the numerical computation and optimization was chosen for fabrication. In total, the spherical sensor array consists of twelve stair-patch structures, and each element was individually measured to characterize its electrical properties, especially the return loss. The comparison of the S11 profiles of all UWB sensor elements is discussed. The constructed UWB sensor is well verified using HFSS simulations, CST simulations, and experimental measurement. Numerically, both HFSS and CST confirmed that the potential operating bandwidth of the UWB sensor is approximately 4.5 GHz. However, the measured bandwidth was about 1.2 GHz due to technical difficulties during the manufacturing step. The implemented UWB microwave sensing and monitoring system consists of the 12-element UWB printed sensor array, a vector network analyzer (VNA) acting as the transceiver and signal-processing part, and a desktop PC or laptop acting as the image processing and display unit. In practice, all the reflected power values collected from the whole surface of the artificial breast model are grouped into a number of pixel colour classes positioned at the corresponding row and column (pixel number). The total number of power pixels applied in the 2D imaging process was set to 100 (a 10x10 power-distribution pixel grid). This was determined by considering the total area of a breast phantom of average Asian women's breast size and matching the physical dimension of a single UWB sensor. The microwave imaging results are plotted, and some technical problems that arose in developing the breast sensing and monitoring system are examined in the paper.
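The 2D imaging step can be sketched as follows: 100 reflected-power readings, one per scan position on the phantom surface, are binned into colour classes and arranged on the 10x10 pixel grid; the readings and the number of classes are placeholders.

```python
# Illustrative mapping of 100 reflected-power readings onto a 10x10 pixel image.
import numpy as np
import matplotlib.pyplot as plt

power_dbm = np.random.default_rng(3).uniform(-60, -20, 100)    # placeholder VNA readings (dBm)
n_classes = 8
edges = np.linspace(power_dbm.min(), power_dbm.max(), n_classes + 1)
classes = np.digitize(power_dbm, edges[1:-1])                  # class index per reading

image = classes.reshape(10, 10)                                # 10x10 power-distribution pixels
plt.imshow(image, cmap="jet")
plt.colorbar(label="power class")
plt.title("Reflected-power map (illustrative)")
plt.show()
```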

Keywords: UWB sensor, UWB microwave imaging, spherical array, breast cancer monitoring, 2D-medical imaging

Procedia PDF Downloads 194
273 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks

Authors: Tesfaye Mengistu

Abstract:

Power cables play a crucial role in the transmission and distribution of electrical energy. As the electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom dataset collected locally to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques to classify defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
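A sketch of a binary defect classifier in the spirit described above is given below; the image size, layer sizes, and dataset folder layout are assumptions rather than the study's actual configuration.

```python
# Illustrative TensorFlow/Keras CNN for defective vs. non-defective cable images.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # defective vs. non-defective
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "xlpe_cables/train", image_size=(128, 128), batch_size=32)  # hypothetical folders: defective/, non_defective/
# model.fit(train_ds, epochs=20)
```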

Keywords: artificial intelligence, computer vision, defect detection, convolutional neural network

Procedia PDF Downloads 112
272 Estimation of Forces Applied to Forearm Using EMG Signal Features to Control of Powered Human Arm Prostheses

Authors: Faruk Ortes, Derya Karabulut, Yunus Ziya Arslan

Abstract:

According to recent experimental research, myoelectric features gathered from the muscular environment are preferred for perceiving muscle activation and controlling human arm prostheses. EMG (electromyography) signal based human arm prostheses have shown promising performance in recent years in terms of providing the basic functional requirements of motion for amputees. However, these assistive devices for neurorehabilitation still have important limitations in enabling amputees to perform sophisticated or functional movements. The surface electromyogram (EMG) is used as the control signal to command such devices. This kind of control consists of activating a motion in the prosthetic arm using the muscle activation for that same particular motion. Extraction of clear and certain neural information from EMG signals plays a major role, especially in the fine control of hand prosthesis movements. Many signal processing methods have been utilized for feature extraction from EMG signals. The specific objective of this study was to compare widely used time-domain features of the EMG signal, including integrated EMG (IEMG), root mean square (RMS), and waveform length (WL), for the prediction of forces applied externally to human hands. The obtained features were classified using artificial neural networks (ANN) to predict the forces. The EMG signals supplied to the process were recorded during isometric and isotonic muscle contractions. Experiments were performed by three healthy subjects, all right-handed and aged 25-35 years. EMG signals were collected from muscles of the proximal part of the upper body: biceps brachii, triceps brachii, pectoralis major, and trapezius. The force prediction results obtained from the ANN were statistically analyzed, and the merits and pitfalls of the extracted features are discussed in detail. The obtained results are anticipated to contribute to the classification of EMG signals and to the motion control of powered human arm prostheses.
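The three time-domain features compared in the study are defined as follows, computed here over a single windowed EMG segment; the window itself is a random placeholder, and in practice one feature vector per channel and window would be fed to the ANN.

```python
# Standard time-domain EMG features: IEMG, RMS, and waveform length.
import numpy as np

def iemg(x):
    """Integrated EMG: sum of absolute amplitudes."""
    return np.sum(np.abs(x))

def rms(x):
    """Root mean square amplitude."""
    return np.sqrt(np.mean(np.square(x)))

def waveform_length(x):
    """Cumulative length of the waveform (sum of successive absolute differences)."""
    return np.sum(np.abs(np.diff(x)))

window = np.random.default_rng(0).standard_normal(200)   # placeholder 200-sample EMG window
features = np.array([iemg(window), rms(window), waveform_length(window)])
print(features)
```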

Keywords: assistive devices for neurorehabilitation, electromyography, feature extraction, force estimation, human arm prosthesis

Procedia PDF Downloads 367
271 The Human Process of Trust in Automated Decisions and Algorithmic Explainability as a Fundamental Right in the Exercise of Brazilian Citizenship

Authors: Paloma Mendes Saldanha

Abstract:

Access to information is a prerequisite for democracy while also guiding the material construction of fundamental rights. The exercise of citizenship requires knowing, understanding, questioning, advocating for, and securing rights and responsibilities. In other words, it goes beyond mere active electoral participation and materializes through awareness and the struggle for rights and responsibilities in the various spaces occupied by the population in their daily lives. In times of hyper-cultural connectivity, active citizenship is shaped through ethical trust processes, most often established between humans and algorithms. Automated decisions, so prevalent in various everyday situations, such as purchase preference predictions, virtual voice assistants, reduction of accidents in autonomous vehicles, content removal, resume selection, etc., have already found their place as a normalized discourse that sometimes does not reveal or make clear what violations of fundamental rights may occur when algorithmic explainability is lacking. In other words, technological and market development promotes a normalization for the use of automated decisions while silencing possible restrictions and/or breaches of rights through a culturally modeled, unethical, and unexplained trust process, which hinders the possibility of the right to a healthy, transparent, and complete exercise of citizenship. In this context, the article aims to identify the violations caused by the absence of algorithmic explainability in the exercise of citizenship through the construction of an unethical and silent trust process between humans and algorithms in automated decisions. As a result, it is expected to find violations of constitutionally protected rights such as privacy, data protection, and transparency, as well as the stipulation of algorithmic explainability as a fundamental right in the exercise of Brazilian citizenship in the era of virtualization, facing a threefold foundation called trust: culture, rules, and systems. To do so, the author will use a bibliographic review in the legal and information technology fields, as well as the analysis of legal and official documents, including national documents such as the Brazilian Federal Constitution, as well as international guidelines and resolutions that address the topic in a specific and necessary manner for appropriate regulation based on a sustainable trust process for a hyperconnected world.

Keywords: artificial intelligence, ethics, citizenship, trust

Procedia PDF Downloads 64
270 Artificial Intelligence in Ethiopian Universities: The Influence of Technological Readiness, Acceptance, Perceived Risk, and Trust on Implementation - An Integrative Research Approach

Authors: Merih Welay Welesilassie

Abstract:

Understanding educators' readiness to incorporate AI tools into their teaching methods requires comprehensively examining the influencing factors. This understanding is crucial, given the potential of these technologies to personalise learning experiences, improve instructional effectiveness, and foster innovative pedagogical approaches. This study evaluated the factors affecting teachers' adoption of AI tools in their English language instruction by extending the Technology Acceptance Model (TAM) to encompass digital readiness support, perceived risk, and trust. A cross-sectional quantitative survey was conducted with 128 English language teachers, supplemented by qualitative data collected from 15 English teachers. The structural model analysis indicated that the implementation of AI tools in Ethiopian higher education was notably influenced by digital readiness support, perceived ease of use, perceived usefulness, perceived risk, and trust. Digital readiness support positively impacted perceived ease of use, usefulness, and trust while reducing safety and privacy risks. Perceived ease of use positively correlated with perceived usefulness but negatively influenced trust. Furthermore, perceived usefulness strengthened trust in AI tools, while perceived safety and privacy risks significantly undermined trust. Trust was crucial in increasing educators' willingness to adopt AI technologies. The qualitative analysis revealed that the teachers exhibited strong content and pedagogical knowledge but lacked technology-related knowledge. Moreover, it was found that the teachers did not utilise digital tools to teach English. The study identified several obstacles to incorporating digital tools into English lessons, such as insufficient digital infrastructure, a shortage of educational resources, inadequate professional development opportunities, and challenging policies and governance. The findings provide valuable guidance for educators, inform policymakers about creating supportive digital environments, and offer a foundation for further investigation into technology adoption in educational settings in Ethiopia and similar contexts.

Keywords: digital readiness support, AI acceptance, risk, trust

Procedia PDF Downloads 15
269 Quartz-K-Feldspar-Apatite-Molybdenite Alteration at Anomaly B: Prospection with an Artificial Neural Network to Determine Economic Molybdenite Deposits in Malala District, Western Sulawesi

Authors: Ahmad Lutfi, Nikolas Dhega

Abstract:

The Malala deposit in northwest Sulawesi is the only known porphyry molybdenum occurrence, and the only source of rhenium, in Indonesia. The neural network method produces results that correspond very closely to those of the knowledge-based fuzzy logic method and the weights-of-evidence method. The method requires solid geology, regional fault, airborne magnetic, gamma-ray survey, and GIS data. The interpretation of the network output fits the intuitive notion that a prospective area has characteristics that closely resemble areas known to contain mineral deposits. This contrasts with the weights-of-evidence and fuzzy logic methods, where, for a given grid location, each input parameter value automatically results in an increase in the prospectivity estimate. In the Malala District, molybdenum anomalies in stream sediments were obtained over in excess of 15 km2, with the Takudan Fault as the most prominent structure, striking 40° to 60° over a distance of about 30 km, and in most places weak anomalies at Anomaly B, which is developed over an area of 4 km2 with a 'shell' up to 50 m thick at the intrusive contact and minor mineralization occurring in the Tinombo Formation. A series of NW-trending, steeply dipping fracture zones, named the East Zone, has an estimated resource of 100 Mt at 0.14% MoS2 and a minimum target of 150 Mt at 0.25%. The Malala porphyries occur as stocks and dykes, are predominantly granitic, belong to the fluorine-poor class of molybdenum deposits, and are of the plutonic sub-type. Unidirectional solidification textures consist of subparallel, crenulated layers of quartz separated by layers of intrusive material. The molybdenum mineralization is deuteric in nature, and carbonate alteration is dominant. Stage I consists of barren quartz-K-feldspar alteration, while Stage II consists of quartz-K-feldspar-apatite-molybdenite veins, combined with the presence of disseminated molybdenite with primary biotite in the host intrusive.

Keywords: molybdenite, Malala, porphyries, anomaly B

Procedia PDF Downloads 153
268 Aromatic Medicinal Plant Classification Using Deep Learning

Authors: Tsega Asresa Mengistu, Getahun Tigistu

Abstract:

Computer vision is an artificial intelligence subfield that allows computers and systems to retrieve meaning from digital images. It is applied in various fields such as self-driving cars, video surveillance, agriculture, quality control, health care, construction, the military, and everyday life. Aromatic and medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, and other natural health products for therapeutic and aromatic culinary purposes. Herbal industries depend on these special plants. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but also, when exported, provide industrial raw materials and valuable foreign exchange. There is a lack of technologies for the classification and identification of aromatic and medicinal plants in Ethiopia. Manual identification of plants is a tedious, time-consuming, labor-intensive, and lengthy process. For farmers, industry personnel, academics, and pharmacists, it is still difficult to identify the parts and usage of plants before ingredient extraction. In order to solve this problem, the researcher uses a deep learning approach for the efficient identification of aromatic and medicinal plants by using a convolutional neural network. The objective of the proposed study is to identify aromatic and medicinal plant parts and usages using computer vision technology. Therefore, this research initiated a model for the automatic classification of aromatic and medicinal plants by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used plant parts, besides the root, flower, fruit, latex, and bark. The study was conducted on aromatic and medicinal plants available at the Ethiopian Institute of Agricultural Research center. An experimental research design is proposed for this study, conducted using convolutional neural networks and transfer learning. The researcher employs sigmoid activation in the last layer and rectified linear units in the hidden layers. Finally, the researcher obtained a classification accuracy of 66.4% with the convolutional neural network, 67.3% with MobileNet, and 64% with the Visual Geometry Group (VGG) network.
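A sketch of the transfer-learning variant (a MobileNet backbone with ReLU hidden units and a sigmoid output layer, as described above) is given below; the number of classes and image size are assumptions.

```python
# Illustrative transfer-learning classifier for plant images (MobileNetV2 backbone).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 20                                     # assumed number of plant species/parts
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                               # reuse pretrained features only

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),            # rectified linear units in the hidden layer
    layers.Dense(NUM_CLASSES, activation="sigmoid"), # sigmoid output, as described in the abstract
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```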

Keywords: aromatic and medicinal plants, computer vision, deep convolutional neural network

Procedia PDF Downloads 438
267 Numerical Modelling of Wind Dispersal of Seeds of the Bromeliad Tillandsia recurvata (L.) L. Attached to Electric Power Lines

Authors: Bruna P. De Souza, Ricardo C. De Almeida

Abstract:

In some cities in the State of Paraná, Brazil, and in other countries, atmospheric bromeliads (Tillandsia spp., Bromeliaceae) are considered weeds on trees, electric power lines, satellite dishes, and other artificial supports. In this study, a numerical model was developed to simulate the wind dispersal of seeds of the Tillandsia recurvata species, with the objective of evaluating seed displacement in the city of Ponta Grossa, PR, Brazil, since the region is considered to be already infested. The model simulates the dispersal of each individual seed, integrating parameters from the atmospheric boundary layer (ABL) and the local wind, simulated by the Weather Research and Forecasting (WRF) mesoscale atmospheric model for the 2012 to 2015 period. The dispersal model also incorporates the approximate number of bromeliads and source-height data collected from the most infested electric power lines. The seeds' terminal velocity, an important input parameter that was not available in the literature, was measured in an experiment with fifty-one seeds of Tillandsia recurvata. Wind is the main dispersal agent acting on plumed seeds, whereas atmospheric turbulence is a determinant factor in transporting the seeds to distances beyond 200 meters as well as in introducing random variability into the seed dispersal process. This variability was added to the model through the application of an inverse fast Fourier transform to the energy spectra of the wind velocity components, based on boundary-layer meteorology theory and estimated from micrometeorological parameters produced by the WRF model. Seasonal and annual wind means were obtained from the surface wind data simulated by WRF for Ponta Grossa. The mean wind direction is assumed to be the most probable direction of bromeliad seed trajectories. Moreover, the effect of atmospheric turbulence and the dispersal distances were analyzed in order to identify likely regions of infestation around the Ponta Grossa urban area. It is important to mention that this model could be applied to any species and location as long as the seed's biological data and meteorological data for the region of interest are available.
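The turbulence-variability step can be sketched as follows: random-phase Fourier amplitudes scaled by a target one-sided velocity spectrum are inverse-transformed into a fluctuating velocity series that is added to the mean wind. The spectral shape used here is a simple placeholder for the spectra estimated from the WRF micrometeorological output.

```python
# Illustrative synthesis of turbulent velocity fluctuations from a target spectrum.
import numpy as np

def synthetic_fluctuations(n, dt, spectrum, rng):
    """Zero-mean velocity fluctuations whose PSD approximates `spectrum` (one-sided, per Hz)."""
    freqs = np.fft.rfftfreq(n, dt)
    df = 1.0 / (n * dt)
    amp = np.sqrt(2.0 * spectrum(freqs) * df)        # harmonic amplitude per frequency bin
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    coeffs = 0.5 * n * amp * np.exp(1j * phases)
    coeffs[0] = 0.0                                   # remove the mean component
    return np.fft.irfft(coeffs, n)

rng = np.random.default_rng(7)
spectrum = lambda f: 0.5 * np.exp(-f / 0.1)           # placeholder spectral shape (m^2 s^-2 per Hz)
u_prime = synthetic_fluctuations(n=3600, dt=1.0, spectrum=spectrum, rng=rng)
u_total = 3.0 + u_prime                               # assumed mean WRF wind speed (m/s) plus turbulence
```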

Keywords: atmospheric turbulence, bromeliad, numerical model, seed dispersal, terminal velocity, wind

Procedia PDF Downloads 141
266 Analysing the Perception of Climate Hazards on Biodiversity Conservation in Mining Landscapes within Southwestern Ghana

Authors: Salamatu Shaibu, Jan Hernning Sommer

Abstract:

Integrating biodiversity conservation practices into mining landscapes ensures the continual provision of various ecosystem services to the dependent communities while serving as ecological insurance for corporate mining when purchasing reclamation security bonds. Climate hazards such as long dry seasons, erratic rainfall patterns, and extreme weather events contribute to biodiversity loss in addition to the impacts of mining. Both corporate mining and mine-fringe communities perceive the effect of climate on biodiversity in the context of the benefits they accrue, which motivates their conservation practices. In this study, pragmatic approaches including semi-structured interviews, visual field observation, and review were used to collect data from corporate mining employees and households of fringe communities in the southwestern mining hub. The perceived changes in local climatic conditions and their consequences for environmental management practices that promote biodiversity conservation were examined. Using a thematic content analysis tool, the results show that best practices such as concurrent land rehabilitation, reclamation ponds, artificial wetlands, land clearance, and topsoil management are directly affected by prolonged dry seasons and erratic rainfall patterns. Excessive dust and noise generation directly affect both floral and faunal diversity, coupled with frequent fire outbreaks in rehabilitated lands and nearby forest reserves. Proposed adaptive measures include engaging national conservation authorities to promote reforestation projects around forest reserves, having the national government desist from permitting mining concessions in forest reserves, engaging local communities through educational campaigns to control forest encroachment and burning, promoting community-based resource management to foster community ownership, and providing stricter environmental legislation to compel corporate, artisanal, and small-scale mining companies to promote biodiversity conservation.

Keywords: biodiversity conservation, climate hazards, corporate mining, mining landscapes

Procedia PDF Downloads 219
265 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The emitted sound can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuations and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate for the specific Reynolds and Mach numbers they were developed for, and less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered in the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
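The study implements stepwise regression in R; purely to illustrate the idea in code, the sketch below runs a forward sequential feature selection in Python on synthetic data, with placeholder predictor names standing in for the wind-tunnel flow variables.

```python
# Illustrative forward feature selection for an empirical PSD regression model.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
names = ["U_inf", "delta", "tau_w", "q_inf", "frequency", "Mach"]   # assumed candidate predictors
X = rng.random((400, len(names)))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.3 * rng.standard_normal(400)  # synthetic log-PSD response

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward", cv=5)
selector.fit(X, y)
kept = [n for n, keep in zip(names, selector.get_support()) if keep]
model = LinearRegression().fit(X[:, selector.get_support()], y)     # final regression on kept terms
print("selected predictors:", kept)
```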

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 135
264 The Impact of Artificial Intelligence on Digital Factory

Authors: Mona Awad Wanis Gad

Abstract:

The process of factory planning has changed considerably, particularly when it comes to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Regulations in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity and Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning foundation for rebuilding measures and becomes a critical tool. Furthermore, digital building models are increasingly being used in factories to support facility management and production processes. First, different types of digital factory models are investigated, and their properties and usability for various use cases are analyzed. Within the scope of the research are point cloud models, building information models, photogrammetry models, and models enriched with sensor data. It is investigated which digital models permit a simple integration of sensor data and where the differences lie. Finally, possible application areas of digital factory models are determined by a survey, and the respective digital factory models are assigned to the application areas. Ultimately, an application case from maintenance is selected and implemented with the help of the most suitable digital factory model. It is shown how a fully digitalized maintenance process can be supported by a digital factory model by providing data. Among other functions, the digital factory model is used for indoor navigation, data provision, and the display of sensor data. In summary, the paper proposes a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is presented and implemented. On this basis, the systematic selection of digital factory models with the corresponding application cases is evaluated.

Keywords: augmented reality, building information modeling, digital factory model, factory planning, maintenance, photogrammetry, restructuring

Procedia PDF Downloads 37
263 Geomechanical Properties of Tuzluca (Eastern Turkey) Bedded Rock Salt and Geotechnical Safety

Authors: Mehmet Salih Bayraktutan

Abstract:

Geomechanical properties of the rock salt deposits in the Tuzluca Salt Mine area (Eastern Turkey) are studied to model the operation and excavation strategy. The purpose of this research is to calculate the critical span height that will meet the safety requirements. The Tuzluca Hills mine site consists of alternating parallel beds of salt (NaCl) and gypsum (CaSO4·2H2O). The rock salt beds are more resistant than the narrow gypsum interlayers and form almost 97 percent of the total height of the hill; therefore, the geotechnical safety of the galleries depends on the mechanical properties of the rock salt cores. Deposition in the Tuzluca Basin was completed by the Tuzluca Evaporites, the uppermost stratigraphic unit. Mining operations are currently performed by classic mechanical excavation using the room-and-pillar method, and rooms and pillars are experiencing an initial stage of fracturing in places. The geotechnical safety of the whole mining area is evaluated by Rock Mass Rating (RMR), Rock Quality Designation (RQD), joint spacing, and the interaction of groundwater with the fracture system. In general, bedded rock salt shows a large lateral deformation capacity while the deformation modulus remains relatively low (here E = 9.86 GPa). In such litho-stratigraphic environments, creep is a critical mechanism in failure. The steady-state creep rate of rock salt is greater than that of the interbedded layers. Under long-lasting compressive stresses, creep may cause shear displacements, partly along bedding planes, and eventually steady-state creep passes into an accelerated stage. Uniaxial compression creep tests were performed on specimens to characterize rock salt strength. On rock salt cores, the average axial strength and strain at failure are 18-24 MPa and 0.43-0.45%, respectively, with a uniaxial compressive strength of 26-32 MPa for bedded rock salt cores. The elastic modulus is comparatively low, but lateral deformation of the rock salt is high under uniaxial compression: Poisson's ratio ν = 0.44, break load = 156 kN, cohesion c = 12.8 kg/cm², specific gravity SG = 2.17 g/cm³. The fracture system (spacing of fractures, joints, faults, and offsets) is evaluated in relation to the active geodynamic mechanism. Two sand beds, each 4-6 m thick, occur near the upper level and at the top of the evaporite sequence; they act as aquifers and retain infiltrated water for long periods, which may result in the failure of roofs or pillars. Two major active fault planes (striking N30W and N70E) and parallel fracture strands pose a moderate seismically triggered risk of structural deformation of the rock salt bedding sequence. Earthquakes and floods are the two prevailing sources of geohazards in this region; the seismotectonic activity of the mine site is controlled by the crossing framework of the Kagizman and Igdir faults. Dominant hazard sources include (a) the weak mechanical properties and creep of the rock salt, gypsum, and anhydrite beds, (b) physical discontinuities cutting across the thick parallel layers of the evaporite mass, and (c) intercalated beds of weakly cemented or loose sand and clayey-sandy sediments. On the other hand, the parallel-bedded salt-gypsum deposits attenuate seismic wave amplitudes, reducing their effect on the rock mass.

Keywords: bedded rock salt, creep, failure mechanism, geotechnical safety

Procedia PDF Downloads 190
262 Optimizing PharmD Education: Quantifying Curriculum Complexity to Address Student Burnout and Cognitive Overload

Authors: Frank Fan

Abstract:

PharmD (Doctor of Pharmacy) education has confronted an increasing challenge: curricular overload, a phenomenon resulting from the expansion of curricular requirements as PharmD education strives to produce graduates who are practice-ready. The aftermath of the global pandemic has amplified the need for healthcare professionals, leading to a growing trend of assigning more responsibilities to them to address the global healthcare shortage. For instance, the pharmacist’s role has expanded to include not only compounding and distributing medication but also providing clinical services, including minor ailment management, patient counselling and vaccination. Consequently, PharmD programs have responded by continually expanding their curricula and adding more requirements. While these changes aim to enhance the education and training of future professionals, they have also led to unintended consequences, including curricular overload, student burnout, and a potential decrease in program quality. To address the issue and ensure program quality, there is a growing need for evidence-based curriculum reforms. My research seeks to integrate Cognitive Load Theory, emerging machine learning algorithms within artificial intelligence (AI), and statistical approaches to develop a quantitative framework for optimizing curriculum design within the PharmD program at the University of Toronto, the largest PharmD program in Canada, in order to quantify and measure issues that are currently discussed in terms of anecdote rather than data. This research will serve as a guide for curriculum planners, administrators, and educators, aiding in the comprehension of how the pharmacy degree program compares to others within and beyond the field of pharmacy. It will also shed light on opportunities to reduce the curricular load while maintaining its quality and rigor. Given that pharmacists constitute the third-largest healthcare workforce, their education shares similarities and challenges with other health education programs. Therefore, my evidence-based, data-driven curriculum analysis framework holds significant potential for training programs in other healthcare professions, including medicine, nursing, and physiotherapy.
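As a purely illustrative sketch of how curricular load could be quantified rather than described anecdotally, the following toy example flags potential overload weeks from a hypothetical schedule; the schedule, weighting scheme, and threshold are assumptions, not the framework developed in this research.

```python
# Toy quantification of weekly curricular load; all numbers are hypothetical placeholders.
# week -> (contact_hours, number_of_assessments) for an illustrative PharmD term
schedule = {
    1: (24, 1), 2: (26, 2), 3: (28, 4), 4: (25, 1),
    5: (30, 5), 6: (27, 3), 7: (29, 6), 8: (22, 2),
}

ASSESSMENT_WEIGHT = 3.0    # assumed: each assessment adds the load of ~3 contact hours
OVERLOAD_THRESHOLD = 38.0  # assumed flag level in contact-hour equivalents

for week, (hours, assessments) in schedule.items():
    load = hours + ASSESSMENT_WEIGHT * assessments
    flag = "  <-- potential overload" if load > OVERLOAD_THRESHOLD else ""
    print(f"week {week}: load index = {load:.1f}{flag}")
```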

Keywords: curriculum, curriculum analysis, health professions education, reflective writing, machine learning

Procedia PDF Downloads 61
261 Financial and Economic Crisis as a Challenge for the Non-Derogability of Human Rights

Authors: Mirjana Dokmanovic

Abstract:

The paper will introduce the main findings of research into the responses of the Central European and South Eastern European (CEE/SEE) countries to the global economic and financial crisis of 2008 from human rights and gender perspectives. The research methodology included desk research and qualitative analysis of the available data, studies, statistics, and reports produced by governments, UN agencies, international financial institutions (IFIs), and international networks of civil society organizations. The main conclusion of the study is that the governments in the region failed to assess the impacts of their anti-crisis policies, both ex ante and ex post, from the standpoint of human rights and gender equality. The majority of the countries focused their efforts solely on propping up the banking, financial, and construction sectors. The tremendous debt which the states accumulated for the rescue of banks and industries led to further cuts in social expenditure and the reduction of public services. Decreasing state support for health care and social protection and declining family incomes made social services unaffordable for many families. Thus, the economic and financial crisis stirred up a care crisis that was absorbed by women’s intensifying unpaid work within the family and household as part of the household survival strategy. On the other hand, the increased burden of care work weakened the position of women in the labour market and their opportunities to find a job. The study indicates that the artificial separation of the real economy and the sphere of social reproduction still persists. This has created an additional burden of unpaid work for women within the family. The aim of this paper is to introduce the lessons learnt for the future: (a) human rights may not be derogated in times of crisis; (b) the obligation of states to mitigate the negative impacts of economic policies on the population, particularly on vulnerable groups, must be prioritized; (c) IFIs and the business sector must be held accountable as duty bearers with respect to human rights commitments.

Keywords: CEE/SEE region, global financial and economic crisis, international financial institutions, human rights commitments, principle of non-derogability of human rights

Procedia PDF Downloads 204
260 Deciphering Orangutan Drawing Behavior Using Artificial Intelligence

Authors: Benjamin Beltzung, Marie Pelé, Julien P. Renoult, Cédric Sueur

Abstract:

To this day, it is not known whether drawing is a specifically human behavior or whether this behavior finds its origins in ancestor species. An interesting window onto this question is to analyze drawing behavior in species genetically close to humans, such as non-human primates. A good candidate for this approach is the orangutan, which shares 97% of our genes and exhibits multiple human-like behaviors. Focusing on figurative aspects may not be suitable for orangutans’ drawings, which may appear to be scribbles but may nevertheless carry meaning. Manual feature selection would lead to an anthropocentric bias, as the features selected by humans may not match those relevant to orangutans. In the present study, we used deep learning to analyze the drawings of a female orangutan named Molly († in 2011), who produced 1,299 drawings in the last five years of her life as part of a behavioral enrichment program at the Tama Zoo in Japan. We investigate multiple ways to decipher Molly’s drawings. First, we demonstrate the existence of differences between seasons by training a deep learning model to classify Molly’s drawings according to season. Then, to understand and interpret these seasonal differences, we analyze how the information spreads within the network, from shallow to deep layers, where early layers encode simple local features and deep layers encode more complex and global information. More precisely, we investigate the impact of feature complexity on classification accuracy through feature extraction fed to a Support Vector Machine (SVM). Last, we leverage style transfer to dissociate features associated with drawing style from those describing the representational content and analyze the relative importance of these two types of features in explaining seasonal variation. Content features were relevant for the classification, showing the presence of meaning in these non-figurative drawings and the ability of deep learning to decipher these differences. The style of the drawings was also relevant, as style features encoded enough information to yield classification better than chance. The accuracy of style features was higher for deeper layers, highlighting the variation in style between seasons in Molly’s drawings. Through this study, we demonstrate how deep learning can help find meaning in non-figurative drawings and interpret these differences.
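To illustrate the layer-wise feature extraction and SVM classification step described above, here is a minimal Python sketch; the choice of VGG16, the layer names, and the load_molly_dataset helper are assumptions for demonstration, not the study's actual pipeline.

```python
# Illustrative sketch: compare shallow vs. deep CNN features for season classification.
# Model (VGG16), layer names, and the dataset helper are assumptions, not the authors' setup.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.models import Model
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

def layer_features(paths, layer_name):
    """Extract globally averaged activations of one VGG16 layer for each image."""
    extractor = Model(inputs=base.input, outputs=base.get_layer(layer_name).output)
    feats = []
    for p in paths:
        x = img_to_array(load_img(p, target_size=(224, 224)))
        x = preprocess_input(x[np.newaxis, ...])
        fmap = extractor.predict(x, verbose=0)        # (1, h, w, channels)
        feats.append(fmap.mean(axis=(1, 2)).ravel())  # global average pooling
    return np.array(feats)

# Image paths and season labels would come from the annotated drawing archive (hypothetical helper).
paths, seasons = load_molly_dataset()
for layer in ["block1_conv2", "block3_conv3", "block5_conv3"]:  # shallow -> deep
    X = layer_features(paths, layer)
    acc = cross_val_score(SVC(kernel="linear"), X, seasons, cv=5).mean()
    print(f"{layer}: season classification accuracy = {acc:.2f}")
```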

Keywords: cognition, deep learning, drawing behavior, interpretability

Procedia PDF Downloads 165
259 Bioreactor for Cell-Based Impedance Measuring with Diamond Coated Gold Interdigitated Electrodes

Authors: Roman Matejka, Vaclav Prochazka, Tibor Izak, Jana Stepanovska, Martina Travnickova, Alexander Kromka

Abstract:

Cell-based impedance spectroscopy is a suitable method for electrical monitoring of cell activity, especially on substrates that cannot easily be inspected by optical microscopy (without fluorescent markers), such as decellularized tissues and nano-fibrous scaffolds. A special sensor was developed for this measurement. It consists of a Corning glass substrate with gold interdigitated electrodes covered with a diamond layer, which provides a biocompatible, non-conductive surface for cells. In addition, a special PPFC flow cultivation chamber was developed. The chamber fixes the sensor in place, and spring contacts connect the sensor pads with the external measuring device. The construction allows real-time live-cell imaging, and combining it with a perfusion system allows medium circulation and shear stress stimulation. The experimental evaluation consisted of several setups, including a bare sensor without any coating as well as collagen and fibrin coatings. Adipose-derived stem cells (ASC) and human umbilical vein endothelial cells (HUVEC) were seeded onto the sensor in the cultivation chamber, and the chamber was then installed into a microscope system for live-cell imaging. Impedance was measured with a vector impedance analyzer over the range from 10 Hz to 40 kHz. These impedance measurements were correlated with live-cell microscopic imaging and immunofluorescent staining. Analysis of the measured signals showed responses to cell adhesion to the substrates, cell proliferation, and changes after shear stress stimulation, which are important parameters during cultivation. Further experiments plan to use decellularized tissue as a scaffold fixed on the sensor. This kind of impedance sensor can provide feedback about cell culture conditions on opaque surfaces and scaffolds used in tissue engineering and in the development of artificial prostheses. This work was supported by the Ministry of Health, grants No. 15-29153A and 15-33018A.
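A minimal sketch of how an impedance sweep of this kind might be post-processed is shown below; the CSV layout and the ECIS-style normalized "cell index" metric are assumptions for illustration, not the authors' processing pipeline.

```python
# Illustrative post-processing of an impedance sweep (10 Hz - 40 kHz).
# The CSV layout and the cell-index-style metric are assumptions for demonstration only.
import numpy as np

def load_sweep(path):
    """Expected columns: frequency_Hz, Z_real_ohm, Z_imag_ohm (hypothetical format)."""
    f, re, im = np.loadtxt(path, delimiter=",", skiprows=1, unpack=True)
    return f, re + 1j * im

f, z_cells = load_sweep("sensor_with_cells.csv")  # sweep with cells attached
_, z_blank = load_sweep("sensor_blank.csv")       # cell-free reference sweep

magnitude = np.abs(z_cells)                       # |Z| in ohms
phase_deg = np.degrees(np.angle(z_cells))         # phase in degrees

# Simple normalized index: relative impedance increase caused by the cell layer,
# taken at its maximum over the sweep (analogous to ECIS-style cell indices).
cell_index = np.max((np.abs(z_cells) - np.abs(z_blank)) / np.abs(z_blank))
print(f"max |Z| = {magnitude.max():.1f} ohm, cell index = {cell_index:.3f}")
```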

Keywords: bio-impedance measuring, bioreactor, cell cultivation, diamond layer, gold interdigitated electrodes, tissue engineering

Procedia PDF Downloads 301
258 Translation as a Foreign Language Teaching Tool: Results of an Experiment with University Level Students in Spain

Authors: Nune Ayvazyan

Abstract:

Since the proclamation of monolingual foreign-language learning methods (the Berlitz Method in the early 20ᵗʰ century and the like), the dilemma has been whether or not to allow learners’ mother tongue in the foreign-language learning process. The reported reason for not allowing learners’ mother tongue is to create a situation of immersion in which students will only use the target language. It could be argued that this artificial monolingual situation is defective, mainly because there are very few genuinely monolingual situations in society. This is mainly due to the fact that societies are nowadays increasingly multilingual, as plurilingual speakers are the norm rather than the exception. More recently, the use of learners’ mother tongue and of translation has been put under the spotlight as a valid foreign-language teaching tool. The logic dictates that if learners were permitted to use their mother tongue in the foreign-language learning process, that would not only be natural but would also give them additional means of participation in class, which could eventually lead to learning. For example, when learners’ metalinguistic skills are poor in the target language, a question they might have could be asked in their mother tongue; otherwise, that question might be left unasked. Attempts at empirically testing the role of translation as a didactic tool in foreign-language teaching are still very scant. In order to fill this void, this study looks into the interaction patterns between students in two kinds of English-learning classes: one with translation and the other in English only (immersion). The experiment was carried out with 61 students enrolled in a second-year university subject in English grammar in Spain. All the students underwent the two treatments, classes with translation and classes in English only, in order to see how they interacted under the different conditions. The analysis centered on four categories of interaction: teacher talk, teacher-initiated student interaction, student-initiated student-to-teacher interaction, and student-to-student interaction. In addition, pre-experiment and post-experiment questionnaires and individual interviews gathered information about the students’ attitudes to translation. The findings show that translation elicited more student-initiated interaction than did the English-only classes, while the difference in teacher-initiated interactional turns was not statistically significant. Also, student-initiated participation was higher in comprehension-based activities (into L1) as opposed to production-based activities (into L2). As evidenced by the questionnaires, the students’ attitudes to translation were initially positive and largely did not vary as a result of the experiment.
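As an illustration of the kind of significance test implied by the comparison of interactional turns across conditions, here is a minimal sketch with placeholder counts; the numbers and the choice of a chi-square test are assumptions, not the study's data or reported analysis.

```python
# Illustrative comparison of interaction counts between teaching conditions.
# The counts below are made-up placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# rows: condition (translation, English-only)
# columns: interaction category (teacher-initiated, student-initiated to teacher, student-to-student)
counts = np.array([
    [120, 95, 60],   # translation classes (hypothetical)
    [115, 60, 45],   # English-only classes (hypothetical)
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value would indicate that the distribution of interaction types
# differs between the two teaching conditions.
```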

Keywords: foreign language, learning, mother tongue, translation

Procedia PDF Downloads 162
257 Spanish Language Violence Corpus: An Analysis of Offensive Language in Twitter

Authors: Beatriz Botella-Gil, Patricio Martínez-Barco, Lea Canales

Abstract:

The Internet and ICTs are an integral and omnipresent element of our daily lives. Technologies have changed the way we see the world and relate to it. The number of companies in the ICT sector is increasing every year, and there has also been an increase in the work that occurs online, from sending e-mails to the way companies promote themselves. In social life, ICTs have gained momentum. Social networks are useful for keeping in contact with family or friends who live far away. This change in how we manage our relationships using electronic devices and social media has been experienced differently depending on the age of the person. According to currently available data, people are increasingly connected to social media and other forms of online communication. Therefore, it is no surprise that violent content has also made its way into digital media. One important reason for this is the anonymity provided by social media, which gives aggressors a sense of impunity. Moreover, it is not uncommon to find derogatory comments attacking a person’s physical appearance, hobbies, or beliefs. This is why it is necessary to develop artificial intelligence tools that allow us to keep track of violent comments related to violent events so that this type of violent online behavior can be deterred. The objective of our research is to create a guide for detecting and recording violent messages. Our annotation guide begins with a study of the problem of violent messages. First, we consider the characteristics that a message should contain for it to be categorized as violent; second, the possibility of establishing different levels of aggressiveness. To build the corpus, we chose the social network Twitter because of the ease of obtaining freely available messages. We chose two recent, highly visible violent cases that occurred in Spain, both of which received a high degree of social media coverage and user comments. Our corpus has a total of 633 messages, manually tagged according to the characteristics we considered important, such as the verbs used, the presence of exclamations or insults, and the presence of negations. We consider it necessary to create wordlists of terms that appear in violent messages as indicators of violence, such as lists of negative verbs, insults, and negative phrases. As a final step, we will use machine learning systems to validate the data obtained and the effectiveness of our guide.
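A minimal sketch of how the planned wordlist indicators and automatic learning step might be combined is given below; the lexicons, example tweets, and classifier choice are placeholders for illustration, not the authors' Spanish-language resources or final system.

```python
# Minimal sketch of wordlist-based features for violence detection.
# The wordlists and example tweets are placeholders, not the authors' resources.
import re
from sklearn.linear_model import LogisticRegression

INSULTS = {"idiota", "imbecil"}          # hypothetical insult lexicon
NEGATIVE_VERBS = {"matar", "golpear"}    # hypothetical negative-verb lexicon
NEGATIONS = {"no", "nunca", "jamas"}     # hypothetical negation list

def features(text):
    tokens = re.findall(r"\w+", text.lower())
    return [
        sum(t in INSULTS for t in tokens),         # insult count
        sum(t in NEGATIVE_VERBS for t in tokens),  # negative-verb count
        sum(t in NEGATIONS for t in tokens),       # negation count
        text.count("!"),                           # exclamation marks
    ]

# Tiny placeholder training set: (tweet, label) with 1 = violent, 0 = not violent
train = [("te voy a matar idiota!!", 1), ("que buen dia hace hoy", 0),
         ("nunca mas vuelvas, imbecil", 1), ("gracias por la ayuda", 0)]
X = [features(t) for t, _ in train]
y = [label for _, label in train]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("eres un idiota, te voy a golpear!")]))  # expected: [1]
```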

Keywords: human language technologies, language modelling, offensive language detection, violent online content

Procedia PDF Downloads 131