Search results for: deep convolutional neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5316

2946 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications

Authors: Farhad Salek, Shahaboddin Resalati

Abstract:

The production of electric vehicles (EVs) featuring lithium-ion battery technology has escalated substantially over the past decade, on a steady and persistent upward trajectory. The imminent retirement of EV batteries after approximately eight years underscores the critical need for their redirection towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution involves extending the operational lifespan of EV batteries through their utilization in stationary energy storage systems during secondary applications. Such adoption, however, requires addressing the safety concerns associated with batteries’ knee points and thermal runaway. This paper develops an accurate mathematical model of second-life battery packs at a cell-to-pack scale using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries' aging history, yielding a degradation model. The degradation model is integrated with the ECM to reflect the impact of cycle-aging mechanisms on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the life span of the batteries in various operating conditions. The methodology and algorithms introduced in this paper can serve as the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
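
As a rough illustration of the modelling chain described above, the sketch below combines a first-order Thevenin-style ECM with degradation factors that, in the full approach, would be forecast by the neural network from the pack's aging history. All parameter values, function names and the OCV curve are illustrative placeholders, not the paper's identified model.

```python
import numpy as np

# First-order Thevenin ECM: V_term = OCV(SoC) - I*R0 - V_rc.
# Hedged sketch: capacity_fade and resistance_growth stand in for the
# NN-predicted degradation parameters (e.g. 85% remaining capacity);
# all numeric values are placeholders, not identified cell parameters.

def ocv(soc):
    """Toy open-circuit-voltage curve (V) as a function of state of charge."""
    return 3.0 + 1.2 * soc

def simulate_discharge(i_load=2.0, dt=1.0, q_nom=2.5 * 3600,
                       r0=0.05, r1=0.02, c1=2000.0,
                       capacity_fade=0.85, resistance_growth=1.3):
    """Simulate one constant-current discharge of an aged cell."""
    q = q_nom * capacity_fade          # aged capacity (As)
    r0_aged = r0 * resistance_growth   # aged ohmic resistance (ohm)
    soc, v_rc, trace = 1.0, 0.0, []
    while soc > 0.05:
        v_rc += dt * (i_load / c1 - v_rc / (r1 * c1))  # RC polarization branch
        v_term = ocv(soc) - i_load * r0_aged - v_rc
        trace.append((soc, v_term))
        soc -= i_load * dt / q
    return trace

if __name__ == "__main__":
    trace = simulate_discharge()
    print(f"steps: {len(trace)}, final terminal voltage: {trace[-1][1]:.3f} V")
```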

Keywords: second life battery, electric vehicles, degradation, neural network

Procedia PDF Downloads 66
2945 Solving the Wireless Mesh Network Design Problem Using Genetic Algorithm and Simulated Annealing Optimization Methods

Authors: Moheb R. Girgis, Tarek M. Mahmoud, Bahgat A. Abdullatif, Ahmed M. Rabie

Abstract:

Mesh clients, mesh routers and gateways are the components of a Wireless Mesh Network (WMN). In a WMN, gateways connect to the Internet using wireline links and supply Internet access services for users. Due to the limited wireless channel bit rate, multiple gateways are usually needed, which take time and money to set up. WMN is a highly developed technology that offers end users wireless broadband access. It offers a high degree of flexibility compared with conventional networks; however, this attribute comes at the expense of a more complex construction. The planning and optimization of WMNs is therefore a challenge. In this paper, we address this challenge using a genetic algorithm and simulated annealing. The genetic algorithm and simulated annealing enable searching for a low-cost WMN configuration under constraints and determine the number of gateways used. Experimental results demonstrate the effectiveness of the genetic algorithm and simulated annealing in minimizing WMN costs while satisfying quality-of-service requirements, and the proposed models significantly outperform existing solutions.
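
The abstract does not give the algorithms' details, so the following is a minimal, hedged sketch of how simulated annealing can search for a low-cost gateway configuration under a coverage constraint; the cost model, move operator and cooling schedule are invented for illustration (a genetic algorithm would explore the same search space with crossover and mutation instead).

```python
import math
import random

# Hedged sketch: choose a subset of router sites to act as gateways so that
# every router is within a coverage radius of some gateway, at minimum
# gateway cost. Not the paper's exact formulation.

random.seed(0)
N = 40                                   # mesh routers placed at random
pts = [(random.random(), random.random()) for _ in range(N)]
RADIUS = 0.35                            # coverage radius of a gateway

def covered(sol):
    gws = [pts[i] for i in range(N) if sol[i]]
    return all(any(math.dist(p, g) <= RADIUS for g in gws) for p in pts)

def cost(sol):
    # gateway count plus a heavy penalty for uncovered routers
    return sum(sol) + (0 if covered(sol) else 1000)

sol = [1] * N                            # start with every router a gateway
temp, best = 1.0, list(sol)
for step in range(5000):
    cand = list(sol)
    cand[random.randrange(N)] ^= 1       # flip one site in/out of the set
    delta = cost(cand) - cost(sol)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        sol = cand
        if cost(sol) < cost(best):
            best = list(sol)
    temp *= 0.999                        # geometric cooling schedule

print("gateways used:", sum(best), "feasible:", covered(best))
```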

Keywords: wireless mesh networks, genetic algorithms, simulated annealing, topology design

Procedia PDF Downloads 459
2944 Participatory Testing of Precision Fertilizer Management Technologies in Mid-Hills of Nepal

Authors: Kedar Nath Nepal, Dyutiman Choudhary, Naba Raj Pandit, Yam Gahire

Abstract:

Crop fertilizer recommendations are outdated, as they are based on response trials conducted over half a century ago. Further, these recommendations were based on response trials conducted over a large geographical area, ignoring the large spatial variability in the indigenous nutrient-supplying capacity of soils typical of most smallholder systems. Applying fertilizer following such blanket recommendations in fields with varying native nutrient supply capacity leads to under-application in some places and over-application in others, reducing nutrient use efficiency (NUE) and profitability and increasing the environmental risks associated with the loss of unutilized nutrients through emissions or leaching. Opportunities exist to further increase yield and profitability through a significant gain in fertilizer use efficiency with the commercialization of affordable and precise application technologies. We conducted participatory trials in maize (Zea mays), cauliflower (Brassica oleracea var. botrytis) and tomato (Solanum lycopersicum) in the Mid-Hills of Nepal to evaluate the efficacy of Urea Deep Placement (UDP) and Polymer-Coated Urea (PCU). UDP briquettes contain 46% N, with an individual briquette weight of 2.7 g, and PCU contains 44% N. Both PCU and urea briquettes applied at a reduced rate (100 kg N/ha) during planting produced yields similar (p>0.05) to regular urea (200 kg N/ha). These fertilizers also reduced N fertilizer use by 35-50% relative to the government's blanket recommendations. Further, PCU and urea briquettes increased farmers' net income by USD 60 to 80.

Keywords: high efficiency fertilizers, urea deep placement, urea briquette, polymer coated urea, Zea mays, Brassica, Lycopersicum, Nepal

Procedia PDF Downloads 174
2943 Subthalamic Nucleus in Adult Human Cadaveric Brain: A Morphometric Study

Authors: Mangala Kohli, P. A. Athira, Reeha Mahajan

Abstract:

The subthalamic nucleus (STN) is a biconvex nucleus situated in the diencephalon. Knowledge of the morphometry of the subthalamic nucleus is essential for accurate targeting of the nucleus during Deep Brain Stimulation. The present study notes the morphometry of the subthalamic nucleus in both cerebral hemispheres, which will prove to be of great value to radiologists and neurosurgeons. A cross-sectional observational study was conducted in the Departments of Anatomy and Forensic Medicine, Lady Hardinge Medical College & Associated Hospitals, New Delhi, on thirty adult cadaveric brain specimens of unclaimed and donated corpses. The specimens were categorized into 3 age groups: 20-35, 35-50 and above 50 years. All samples were collected following the standard protocol for ethical clearance. A morphometric study of 60 subthalamic nuclei was thus conducted. A transverse section of the brain was made in a plane 4 mm ventral to the plane containing the mid-commissural point. The dimensions of the subthalamic nucleus were measured bilaterally with the aid of a digital Vernier caliper and a magnifying glass. In the present study, the mean length, width and AC-PC length of the subthalamic nucleus were recorded on the right and left sides in Groups A, B and C. On comparison of the mean subthalamic nucleus dimensions between the right and left sides in Group C, no statistically significant difference was observed. The lengths and widths of the subthalamic nucleus measured in the 3 age groups were compared with each other and the p values calculated. There was no statistically significant difference between the dimensions of Groups A and B, Groups B and C, or Groups A and C. The present study thus reveals no significant reduction in the size of the nucleus with increasing age. The values obtained in the present study can therefore be used as a reference for various invasive and non-invasive procedures on the subthalamic nucleus.

Keywords: cerebral hemisphere, deep brain stimulation, morphometry, subthalamic nucleus

Procedia PDF Downloads 187
2942 Definition and Core Components of the Role-Partner Allocation Problem in Collaborative Networks

Authors: J. Andrade-Garda, A. Anguera, J. Ares-Casal, M. Hidalgo-Lorenzo, J.-A. Lara, D. Lizcano, S. Suárez-Garaboa

Abstract:

In the current, constantly changing economic context, collaborative networks allow partners to undertake projects that would not be possible if attempted by them individually. These projects usually involve the performance of a group of tasks (named roles) that have to be distributed among the partners. Thus, an allocation/matching problem arises that will be referred to as the Role-Partner Allocation problem. In real life this situation is addressed by negotiation between partners in order to reach ad hoc agreements. Besides taking a long time and requiring hard work, such an approach is not recommended, as both historical evidence and economic analysis show. Instead, the allocation process should be automated by means of a centralized matching scheme. However, as a preliminary step in the search for such a matching mechanism (or even the development of a new one), the problem and its core components must be specified. To this end, this paper establishes (i) the definition of the problem and its constraints, (ii) the key features of the involved elements (i.e., roles and partners); and (iii) how to create preference lists both for roles and partners. Only in this way will it be possible to conduct subsequent methodological research on the solution method.
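
To make the preference-list idea concrete, here is a hedged sketch of a centralized matching scheme of the kind the paper argues for: roles and partners each rank the other side, and a deferred-acceptance pass produces an allocation. The roles, partners and rankings are invented; the paper itself only specifies the problem, not this particular mechanism.

```python
# Hedged illustration: roles "propose" down their preference lists and each
# partner holds its best offer so far (Gale-Shapley style). In the paper's
# setting the lists would be derived from the key features of roles and
# partners; here they are hard-coded for the example.

role_pref = {"design": ["A", "B", "C"],
             "build":  ["B", "A", "C"],
             "test":   ["A", "C", "B"]}
partner_pref = {"A": ["build", "design", "test"],
                "B": ["design", "build", "test"],
                "C": ["test", "build", "design"]}

def deferred_acceptance(role_pref, partner_pref):
    rank = {p: {r: i for i, r in enumerate(prefs)}
            for p, prefs in partner_pref.items()}
    free = list(role_pref)
    next_choice = {r: 0 for r in role_pref}
    held = {}                                  # partner -> role currently held
    while free:
        role = free.pop(0)
        partner = role_pref[role][next_choice[role]]
        next_choice[role] += 1
        if partner not in held:
            held[partner] = role
        elif rank[partner][role] < rank[partner][held[partner]]:
            free.append(held[partner])         # displaced role proposes again
            held[partner] = role
        else:
            free.append(role)
    return {r: p for p, r in held.items()}

print(deferred_acceptance(role_pref, partner_pref))
```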

Keywords: collaborative network, matching, partner, preference list, role

Procedia PDF Downloads 238
2941 Empowering Transformers for Evidence-Based Medicine

Authors: Jinan Fiaidhi, Hashmath Shaik

Abstract:

Breaking the barrier to practicing evidence-based medicine relies on effective methods for rapidly identifying relevant evidence from the body of biomedical literature. An important challenge confronting medical practitioners is the long time needed to browse, filter, summarize and compile information from different medical resources. Deep learning can help solve this through automatic question answering (Q&A) and transformers. However, Q&A and transformer technologies are not trained to answer clinical queries that can be used for evidence-based practice, nor can they respond to structured clinical questioning protocols like PICO (Patient/Problem, Intervention, Comparison and Outcome). This article describes the use of deep learning techniques for Q&A based on transformer models like BERT and GPT to answer PICO clinical questions for evidence-based practice, drawing on sound medical research resources like PubMed. We report acceptable clinical answers that are supported by findings from PubMed. Our transformer methods reach an acceptable state-of-the-art performance based on a two-stage bootstrapping process involving filtering relevant articles followed by identifying articles that support the requested outcome expressed by the PICO question. Moreover, we also report experiments that empower our bootstrapping techniques with patched attention to the most important keywords in the clinical case and the PICO questions. Our bootstrapping patched with attention shows the relevancy of the evidence collected, based on entropy metrics.
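
A minimal sketch of the two-stage bootstrapping idea follows: stage 1 filters articles relevant to the whole PICO question, and stage 2 ranks the survivors by support for the Outcome component. The paper uses transformer models (BERT/GPT); TF-IDF cosine similarity stands in here so the example is self-contained, and the abstracts, PICO fields and 0.2 threshold are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hedged sketch of two-stage retrieval; a transformer encoder would replace
# the TF-IDF vectors in the paper's approach.

abstracts = [
    "Metformin reduced HbA1c versus placebo in adults with type 2 diabetes.",
    "Statin therapy lowered LDL cholesterol in patients with hyperlipidemia.",
    "Metformin plus lifestyle change improved glycemic control outcomes.",
]
pico = {"P": "adults with type 2 diabetes", "I": "metformin",
        "C": "placebo", "O": "improved glycemic control"}

vec = TfidfVectorizer().fit(abstracts + list(pico.values()))
doc_m = vec.transform(abstracts)

# Stage 1: relevance of each article to the whole PICO question
query = vec.transform([" ".join(pico.values())])
relevant = [i for i, s in enumerate(cosine_similarity(query, doc_m)[0]) if s > 0.2]

# Stage 2: among relevant articles, score support for the requested Outcome
outcome = vec.transform([pico["O"]])
support = {i: cosine_similarity(outcome, doc_m[i])[0, 0] for i in relevant}
print(sorted(support.items(), key=lambda kv: -kv[1]))
```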

Keywords: automatic question answering, PICO questions, evidence-based medicine, generative models, LLM transformers

Procedia PDF Downloads 47
2940 Linking Enhanced Resting-State Brain Connectivity with the Benefit of Desirable Difficulty to Motor Learning: A Functional Magnetic Resonance Imaging Study

Authors: Chien-Ho Lin, Ho-Ching Yang, Barbara Knowlton, Shin-Leh Huang, Ming-Chang Chiang

Abstract:

Practicing motor tasks arranged in an interleaved order (interleaved practice, or IP) generally leads to better learning than practicing tasks in a repetitive order (repetitive practice, or RP), an example of how desirable difficulty during practice benefits learning. Greater difficulty during practice, e.g. IP, is associated with greater brain activity, measured by a higher blood-oxygen-level dependent (BOLD) signal in functional magnetic resonance imaging (fMRI), in the sensorimotor areas of the brain. In this study, resting-state fMRI was applied to investigate whether an increase in resting-state brain connectivity immediately after practice predicts the benefit of desirable difficulty to motor learning. Twenty-six healthy adults (11M/15F, age = 23.3±1.3 years) practiced two sets of three sequences arranged in a repetitive or an interleaved order over 2 days, followed by a retention test on Day 5 to evaluate learning. On each practice day, fMRI data were acquired in a resting state after practice. The resting-state fMRI data were decomposed using a group-level spatial independent component analysis (ICA), yielding 9 independent components (ICs) matched to the precuneus network, primary visual networks (two ICs, denoted I and II), sensorimotor networks (two ICs, denoted I and II), the right and left frontoparietal networks, the occipito-temporal network, and the frontal network. A weighted resting-state functional connectivity (wRSFC) was then defined to incorporate information from within- and between-network brain connectivity. The within-network functional connectivity between a voxel and an IC was gauged by a z-score derived from the Fisher transformation of the IC map. The between-network connectivity was derived from the cross-correlation of time courses across all possible pairs of ICs, leading to a symmetric nc × nc matrix of cross-correlation coefficients, denoted by C = (pᵢⱼ), where pᵢⱼ is the extremum of the cross-correlation between ICs i and j, and nc = 9 is the number of ICs. This component-wise cross-correlation matrix C was then projected to the voxel space, with the weight for each voxel set to the z-score that represents the above within-network functional connectivity. The wRSFC map thus incorporates the global characteristics of brain networks measured by the between-network connectivity, and the spatial information contained in the IC maps measured by the within-network connectivity. Pearson correlation analysis revealed that a greater IP-minus-RP difference in wRSFC was positively correlated with the RP-minus-IP difference in response time on Day 5, particularly in brain regions crucial for motor learning, such as the right dorsolateral prefrontal cortex (DLPFC) and the right premotor and supplementary motor cortices. This indicates that enhanced resting brain connectivity during the early phase of memory consolidation is associated with enhanced learning following interleaved practice, and as such wRSFC could be applied as a biomarker that measures the beneficial effects of desirable difficulty on motor sequence learning.
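
The exact voxel-space projection is not fully specified in the abstract, so the sketch below follows one plausible reading of the wRSFC construction: the extremum cross-correlation matrix C between IC time courses is projected to voxels using each voxel's within-network z-score as the weight. Dimensions and data are synthetic.

```python
import numpy as np

# Hedged numeric sketch of a wRSFC-style map; not the study's exact recipe.

rng = np.random.default_rng(0)
nc, T, V = 9, 200, 5000            # ICs, time points, voxels
tc = rng.standard_normal((nc, T))  # IC time courses
Z = rng.standard_normal((nc, V))   # within-network z-score maps (Fisher z)

def extremum_xcorr(x, y, max_lag=10):
    """Extremum (largest-magnitude) normalized cross-correlation over lags."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    vals = [np.dot(np.roll(x, lag), y) / len(x)
            for lag in range(-max_lag, max_lag + 1)]
    return max(vals, key=abs)

C = np.array([[extremum_xcorr(tc[i], tc[j]) for j in range(nc)]
              for i in range(nc)])         # symmetric nc x nc matrix (p_ij)

# Project C to voxel space: each IC contributes its z-score map, weighted by
# that IC's aggregate between-network connectivity profile.
wRSFC = (Z * np.abs(C).mean(axis=1)[:, None]).sum(axis=0)
print(wRSFC.shape, wRSFC[:3])
```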

Keywords: desirable difficulty, functional magnetic resonance imaging, independent component analysis, resting-state networks

Procedia PDF Downloads 204
2939 Devulcanization of Waste Rubber Tyre Utilizing Deep Eutectic Solvents and Ultrasonic Energy

Authors: Ricky Saputra, Rashmi Walvekar, Mohammad Khalid, Kaveh Shahbaz, Suganti Ramarad

Abstract:

This study examines the effect of coupling ultrasonic treatment with eutectic solvents in the devulcanization of waste rubber tyre. Specifically, three different Deep Eutectic Solvents (DESs) were utilized, namely ChCl:Urea (1:2), ChCl:ZnCl₂ (1:2) and ZnCl₂:urea (2:7), whose physicochemical properties were analysed and shown to have a permissible water content of less than 3.0 wt%, a degradation temperature below 200°C and a freezing point below 60°C. The mass ratio of rubber to DES was varied from 1:20 to 1:40, sonicated for 1 hour at 37 kHz and heated for 5-30 min at 180°C. Energy-dispersive X-ray (EDX) results revealed that the first two DESs give the highest degrees of sulphur removal, at 74.44% and 76.69% respectively, with an optimum heating time of 15 minutes, beyond which reformation of the crosslink network occurs. This is supported by both FTIR and FESEM results, where the di-sulphide peak reappears at 30 minutes and the morphological structure changes from smooth with high voidage at 15 minutes to rigid with low voidage at 30 minutes. Furthermore, the TGA curve reveals a similar phenomenon: at 15 minutes the thermal decomposition temperature is lowest, due to the decrease in molecular weight resulting from sulphur removal, but it increases again at 30 minutes. The type of bond change was also analysed, and it was found that only the di-sulphide bonds were cleaved, indicating partial devulcanization. Overall, the results show that DESs have great potential to be used as devulcanizing solvents.

Keywords: crosslink network, devulcanization, eutectic solvents, reformation, ultrasonic

Procedia PDF Downloads 173
2938 Networking: The Biggest Challenge in Hybrid Cloud Deployment

Authors: Aishwarya Shekhar, Devesh Kumar Srivastava

Abstract:

Cloud computing has emerged as a promising direction for cost-efficient and reliable service delivery across data communication networks. The dynamic location of service facilities and the virtualization of hardware and software elements are stressing communication networks and protocols, especially when data centres are interconnected through the Internet. Although the computing aspects of cloud technologies have been largely investigated, less attention has been devoted to the networking services. Cloud computing has enabled elastic and transparent access to infrastructure services without incurring IT operating overhead, and virtualization has been a key enabler for it. While resource virtualization and service abstraction have been widely investigated, networking in the cloud remains a difficult puzzle. Even though the network plays a significant role in facilitating hybrid cloud scenarios, it had not received much attention in the research community until recently. We propose Network as a Service (NaaS), which forms the basis of unifying public and private clouds. In this paper, we identify various challenges in the adoption of hybrid cloud and discuss the design and implementation of a cloud platform.

Keywords: cloud computing, networking, infrastructure, hybrid cloud, OpenStack, NaaS

Procedia PDF Downloads 429
2937 Component-Based Approach in Assessing Sewer Manholes

Authors: Khalid Kaddoura, Tarek Zayed

Abstract:

Sewer networks are constructed to protect communities and the environment from any contact with the sewer medium. Pipelines, whether laterals or sewer mains, and manholes form a huge underground infrastructure in every urban city. Given the importance of sewer networks, the infrastructure asset management field has seen extensive advancement in condition assessment and rehabilitation decision models. However, most of the focus has been devoted to pipelines, giving little attention to manhole condition assessment. Only recently have studies started to emerge in this area to preserve manholes from malfunction. The main objective of this study is therefore to propose a condition assessment model for sewer manholes. The model divides the manhole into several components and determines the relative importance weight of each component using the Analytic Network Process (ANP) decision-making method. The condition of the manhole is then computed by aggregating the condition of each component with its corresponding weight. Accordingly, the proposed assessment model will enable decision-makers to obtain a final index suggesting the overall condition of the manhole, plus a backward analysis to check the condition of each component. Consequently, better decisions can be made pertaining to maintenance, rehabilitation, and replacement actions.
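
A hedged numeric sketch of the aggregation step described above: the ANP computation itself (pairwise comparisons, supermatrix limit) is summarized by a ready-made weight vector, and the component names, weights and the 1-5 condition scale are illustrative rather than the paper's calibrated values.

```python
# Hedged sketch: overall condition index as the weighted sum of component
# conditions, with a backward analysis listing each component's contribution.

components = {
    # component: (assumed ANP weight, condition score 1=good .. 5=failed)
    "cover":   (0.10, 2),
    "frame":   (0.10, 1),
    "chimney": (0.15, 3),
    "wall":    (0.30, 4),
    "bench":   (0.15, 2),
    "channel": (0.20, 3),
}

total_weight = sum(w for w, _ in components.values())
assert abs(total_weight - 1.0) < 1e-9, "ANP weights should sum to 1"

overall = sum(w * score for w, score in components.values())
print(f"overall manhole condition index: {overall:.2f} (scale 1-5)")

# Backward analysis: flag the components driving the index
for name, (w, score) in sorted(components.items(),
                               key=lambda kv: -kv[1][0] * kv[1][1]):
    print(f"{name:8s} contribution: {w * score:.2f}")
```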

Keywords: Analytic Network Process (ANP), condition assessment, decision-making, manholes

Procedia PDF Downloads 358
2936 An Approach to Maximize the Influence Spread in Social Networks

Authors: Gaye Ibrahima, Mendy Gervais, Seck Diaraf, Ouya Samuel

Abstract:

In this paper, we consider influence maximization in social networks, giving importance to the initial diffusers, called the seeds. The goal is to efficiently find a subset of k elements in the social network that will begin and maximize the information diffusion process. A new approach is proposed that treats the social network before determining the seeds. This treatment eliminates information feedback toward an element considered as a seed by extracting an acyclic spanning social network. We first propose two versions (v1 and v2) of an algorithm called the SCG-algorithm (Spanning Connected Graph algorithm), which takes as input a connected social network, directed or not. Finally, a generalization of the SCG-algorithm, called the SG-algorithm (Spanning Graph algorithm), is proposed, which takes any graph as input. Both algorithms are effective, and each has polynomial complexity. To show the pertinence of our approach, two seed sets are determined, and the one given by our approach yields better results. The performance of this approach is clearly visible in simulations carried out with the R software and the igraph package.
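
As a hedged illustration of the two steps, the sketch below extracts an acyclic spanning subgraph with a plain BFS tree (standing in for the SCG/SG algorithms, whose details the abstract does not give) and then picks the k seeds by out-degree in that tree; the toy graph and centrality choice are invented.

```python
from collections import deque

# Hedged sketch: (1) extract an acyclic spanning subgraph so information
# cannot feed back to a seed; (2) choose k seeds by a centrality measure.

graph = {0: [1, 2], 1: [2, 3], 2: [0, 4], 3: [4, 5], 4: [5], 5: [0]}

def bfs_spanning_tree(graph, root=0):
    """Return an acyclic spanning subgraph (BFS tree edges only)."""
    tree = {u: [] for u in graph}
    seen, queue = {root}, deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                tree[u].append(v)   # keep tree edge; back/cross edges dropped
                queue.append(v)
    return tree

def top_k_seeds(tree, k=2):
    return sorted(tree, key=lambda u: len(tree[u]), reverse=True)[:k]

tree = bfs_spanning_tree(graph)
print("spanning tree:", tree)
print("seeds:", top_k_seeds(tree))
```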

Keywords: acyclic spanning graph, centrality measures, information feedback, influence maximization, social network

Procedia PDF Downloads 251
2935 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection

Authors: Devadrita Dey Sarkar

Abstract:

Despite the many technological advances of the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. A computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms is presented in this abstract, employing features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed to assist physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammograms, radiologists' accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as part of PACS. For example, a package for breast CAD may include the computerized detection of breast nodules as well as the computerized classification of benign and malignant nodules. To assist in differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.

Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)

Procedia PDF Downloads 457
2934 Object Recognition System Operating from Different Types of Vehicles Using Raspberry and OpenCV

Authors: Maria Pavlova

Abstract:

Nowadays, cameras can be mounted on different vehicles, such as quadcopters, trains, and airplanes. The camera can also serve as the input sensor in many different systems, which means that object recognition, as an inseparable part of monitoring and control, can be a key part of the most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. During the vehicle's movement the camera takes pictures of the environment without storing them in a database. If the camera detects a special object (for example, a human or an animal), the system saves the picture and sends it to the work station in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with little foot traffic: if one or more persons approach the road, the traffic lights turn green so that they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system includes the camera, the Raspberry platform, a GPS system, a neural network, software and a database. The camera in the system takes the pictures, and the object recognition is done in real time using the OpenCV library on the Raspberry microcontroller. An additional feature is the ability to display the GPS coordinates of the captured object's position. The results of this processing are sent to a remote station, so the location of the specific object is known. Using a neural network, the module can learn to solve problems from incoming data and become part of a bigger intelligent system. The present paper focuses on the design and integration of image recognition as a part of smart systems.
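
A minimal sketch of the detection loop described above, assuming OpenCV's stock HOG pedestrian detector in place of the paper's neural network and a stub read_gps() function in place of the real GPS module:

```python
import time
import cv2

# Hedged sketch: frames are processed in real time and only saved (with a
# position stamp) when a person is detected. The HOG detector and read_gps
# stub are stand-ins, not the paper's exact components.

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def read_gps():
    # Placeholder: on the real device this would poll the GPS receiver.
    return 42.6977, 23.3219

def monitor(camera_index=0, max_frames=500):
    cap = cv2.VideoCapture(camera_index)
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) > 0:                     # person in view: keep evidence
            lat, lon = read_gps()
            fname = f"detect_{int(time.time())}_{lat}_{lon}.jpg"
            cv2.imwrite(fname, frame)
            # here the frame would also be sent to the remote work station
    cap.release()

if __name__ == "__main__":
    monitor()
```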

Keywords: camera, object recognition, OpenCV, Raspberry

Procedia PDF Downloads 219
2933 A Systematic Review and Meta-Analysis of Slow Gait Speed and Its Association with Worse Postoperative Outcomes in Cardiac Surgery

Authors: Vignesh Ratnaraj, Jaewon Chang

Abstract:

Background: Frailty is associated with poorer outcomes in cardiac surgery, but the heterogeneity in frailty assessment tools makes it difficult to ascertain its true impact in cardiac surgery. Slow gait speed is a simple, validated, and reliable marker of frailty. We performed a systematic review and meta-analysis to examine the effect of slow gait speed on postoperative cardiac surgical patients. Methods: The PubMed, MEDLINE, and EMBASE databases were searched from January 2000 to August 2021 for studies comparing slow gait speed and “normal” gait speed. The primary outcome was in-hospital mortality. Secondary outcomes were composite mortality and major morbidity, AKI, stroke, deep sternal wound infection, prolonged ventilation, discharge to a healthcare facility, and ICU length of stay. Results: There were seven eligible studies with 36,697 patients. Slow gait speed was associated with an increased likelihood of in-hospital mortality (risk ratio [RR]: 2.32; 95% confidence interval [CI]: 1.87–2.87). Additionally, patients with slow gait speed were more likely to suffer from composite mortality and major morbidity (RR: 1.52; 95% CI: 1.38–1.66), AKI (RR: 2.81; 95% CI: 1.44–5.49), deep sternal wound infection (RR: 1.77; 95% CI: 1.59–1.98), prolonged ventilation >24 h (RR: 1.97; 95% CI: 1.48–2.63), reoperation (RR: 1.38; 95% CI: 1.05–1.82), institutional discharge (RR: 2.08; 95% CI: 1.61–2.69), and a longer ICU length of stay (MD: 21.69; 95% CI: 17.32–26.05). Conclusion: Slow gait speed is associated with poorer outcomes in cardiac surgery. Frail patients are twofold more likely to die during hospital admission than their non-frail counterparts and are at an increased risk of developing various perioperative complications.

Keywords: cardiac surgery, gait speed, recovery, frailty

Procedia PDF Downloads 73
2932 Hippocampus Proteomic of Major Depression and Antidepressant Treatment: Involvement of Cell Proliferation, Differentiation, and Connectivity

Authors: Dhruv J. Limaye, Hanga Galfalvy, Cheick A. Sissoko, Yung-yu Huang, Chunanning Tang, Ying Liu, Shu-Chi Hsiung, Andrew J. Dwork, Gorazd B. Rosoklija, Victoria Arango, Lewis Brown, J. John Mann, Maura Boldrini

Abstract:

Memory and emotion require hippocampal cell viability and connectivity and are disrupted in major depressive disorder (MDD). Applying shotgun proteomics and stereological quantification of neural progenitor cells (NPCs), intermediate neural progenitors (INPs), and mature granule neurons (GNs) to postmortem human hippocampus, we identified differentially expressed proteins (DEPs), and fewer NPCs, INPs and GNs, in untreated MDD (uMDD) compared with non-psychiatric controls (CTRL) and antidepressant-treated MDD (MDDT). DEPs lower in uMDD vs. CTRL promote mitosis and differentiation and prevent apoptosis. DEPs higher in uMDD vs. CTRL inhibit the cell cycle and regulate cell adhesion, neurite outgrowth, and DNA repair. DEPs lower in MDDT vs. uMDD block cell proliferation. We observe group-specific correlations between the numbers of NPCs, INPs, and GNs and the abundance of proteins regulating mitosis, differentiation, and apoptosis. Altered protein expression underlies hippocampal cellular and volume loss in uMDD, supports a trophic effect of antidepressants, and offers new treatment targets.

Keywords: proteomics, hippocampus, depression, mitosis, migration, differentiation, mitochondria, apoptosis, antidepressants, human brain

Procedia PDF Downloads 103
2931 Seismic Reflection Highlights of New Miocene Deep Aquifers in Eastern Tunisia Basin (North Africa)

Authors: Mourad Bédir, Sami Khomsi, Hakim Gabtni, Hajer Azaiez, Ramzi Gharsalli, Riadh Chebbi

Abstract:

Eastern Tunisia is a semi-arid area located on the northern African plate, on the southern Mediterranean side. It is facing water scarcity, overexploitation, and decreasing water quality of the phreatic water table, and water supply and storage will not keep pace with demographic and economic growth and demand. In addition, only 5×10⁹ m³ of the 35×10⁹ m³ per year of renewable rainwater supply can be retained and remobilized. To remediate this water deficiency, research has focused on new subsurface deep aquifer resources, among them the Upper Miocene sandstone deposits of the Béglia, Saouaf, and Somaa Formations. These sandstones are known for their proven hydrogeologic and hydrocarbon reservoir characteristics in the Tunisian margin, and they represent semi-confined to confined aquifers. This work is based on new integrated approaches of seismic stratigraphy, seismic tectonics, and hydrogeology to highlight and characterize these reservoir levels for aquifer exploitation in a semi-arid area. As a result, five to six third-order sequence deposits have been highlighted. They are composed of multi-layered, extensive sandstone reservoirs separated by shale packages. These reservoir deposits represent the lowstand and highstand system tracts of these sequences. They constitute important strategic water resource volumes for the region.

Keywords: Tunisia, hydrogeology, sandstones, basin, seismic, aquifers, modeling

Procedia PDF Downloads 180
2930 Losing Benefits from Social Network Sites Usage: An Approach to Estimate the Relationship between Social Network Sites Usage and Social Capital

Authors: Maoxin Ye

Abstract:

This study examines the relationship between social network site (SNS) usage and social capital. Because SNS usage can expand users' networks, and the people connected in these networks may become resources to SNS users and lead them to advantages in some situations, it is important to estimate the relationship between SNS usage and 'who' is connected, or what resources SNS users can obtain. Additionally, 'who' can be divided into two aspects – people who hold high positions and people who are different – hence it is important to estimate the relationship between SNS usage and both high-position people and different people. This study adapts Lin's definition of social capital and the position-generator measurement, which tells us who is connected and can be divided into the same two aspects. A national dataset from the United States (N = 2,255) collected by the Pew Research Center is utilized for a general regression analysis of SNS usage and social capital. The results indicate that SNS usage is negatively associated with each factor of social capital, suggesting that, compared with non-users, SNS users may obtain more connections, but the variety and resources of these connections are fewer. For this reason, benefits can be lost through SNS usage.

Keywords: social network sites, social capital, position generator, general regression

Procedia PDF Downloads 264
2929 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting

Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi

Abstract:

An ad hoc network consists of wireless mobile equipment that connects without any infrastructure, using connection equipment. The best way to form a hierarchical structure is clustering, and various clustering methods can form more stable clusters according to the nodes' mobility. In this research we propose an algorithm that allocates a weight to each node based on factors such as link stability and power-reduction rate. In the second phase, according to the weights allocated in the first phase, the cellular learning automaton picks out the nodes that are candidates for cluster head. In the third phase, the learning automaton selects the cluster-head nodes and member nodes and forms the cluster. Thus, the automaton learns from its setting and can form clusters that are optimized in terms of power consumption and link stability. To simulate the proposed algorithm we used OMNeT++ 4.2.2. Simulation results indicate that the newly formed clusters have a longer lifetime than those of previous algorithms and strongly decrease network overhead by reducing the update rate.
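
A hedged sketch of the weighting and cluster-head selection phases: each node's weight combines link stability and battery-drain rate, and the locally highest-weight node becomes a cluster-head candidate. The coefficients, topology and values are invented, and the learning/cellular automaton machinery is abstracted away.

```python
# Hedged sketch of additive node weighting and local cluster-head election.

nodes = {
    # node: (link_stability in [0,1], power_drain_rate in [0,1], lower=better)
    "n1": (0.9, 0.2), "n2": (0.6, 0.1), "n3": (0.8, 0.7), "n4": (0.4, 0.3),
}
neighbours = {"n1": ["n2", "n3"], "n2": ["n1", "n4"],
              "n3": ["n1"], "n4": ["n2"]}

A, B = 0.6, 0.4  # illustrative importance of stability vs. power

def weight(node):
    stability, drain = nodes[node]
    return A * stability + B * (1.0 - drain)

# A node becomes a head if no neighbour outweighs it (local maximum).
heads = {n for n in nodes
         if all(weight(n) >= weight(m) for m in neighbours[n])}
# Non-head neighbours of a head join its cluster; in the full scheme,
# nodes with no adjacent head would attach over multiple hops.
clusters = {h: [m for m in neighbours[h] if m not in heads] for h in heads}
print("cluster heads:", heads)
print("clusters:", clusters)
```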

Keywords: mobile Ad Hoc networks, clustering, learning automaton, cellular automaton, battery power

Procedia PDF Downloads 413
2928 Symbol Synchronization and Resource Reuse Schemes for Layered Video Multicast Service in Long Term Evolution Networks

Authors: Chung-Nan Lee, Sheng-Wei Chu, You-Chiun Wang

Abstract:

LTE (Long Term Evolution) employs the eMBMS (evolved Multimedia Broadcast/Multicast Service) protocol to deliver video streams to a multicast group of users. However, it requires all multicast members to receive a video stream at the same transmission rate, which degrades the overall service quality when some users encounter bad channel conditions. To overcome this problem, this paper provides two efficient resource allocation schemes for such LTE networks. The symbol synchronization (S2) scheme assumes that the macro and pico eNodeBs use the same frequency channel to deliver the video stream to all users, and adopts a multicast transmission index to guarantee fairness among users. The resource reuse (R2) scheme, on the other hand, allows eNodeBs to transmit data on different frequency channels; by introducing the concept of frequency reuse, it can further improve the overall service quality. Extensive simulation results show that the S2 and R2 schemes improve fairness by around 50% and video quality by around 14%, respectively, compared with the common maximum-throughput method.

Keywords: LTE networks, multicast, resource allocation, layered video

Procedia PDF Downloads 390
2927 Monocular 3D Person Tracking via Demographic Classification and Projective Image Processing

Authors: McClain Thiel

Abstract:

Object detection and localization have historically required two or more sensors due to the loss of information in the projection from 3D to 2D space; however, most surveillance systems currently in use in the real world have only one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or, of more recent relevance, contact tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object-detection convolutional nets, facial landmark detection, and projective geometry. The approach classifies the target into a demographic category, makes assumptions about the relative locations of facial landmarks from the demographic information, and from there uses simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although severely limited, suggests reasonable success in 3D tracking under ideal conditions.
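
The projective-geometry step can be illustrated with the pinhole camera model: once the demographic class fixes an assumed real-world inter-pupillary distance, depth follows from its pixel width as Z = f·X_real/x_pixels. The constants below (focal length, IPD averages, principal point) are illustrative assumptions, not the paper's calibration.

```python
# Hedged sketch of monocular depth recovery from an assumed landmark size.

ASSUMED_IPD_MM = {"adult": 63.0, "child": 51.0}   # assumed population averages
FOCAL_LENGTH_PX = 1400.0                          # assumed camera calibration

def depth_from_ipd(ipd_pixels, demographic="adult"):
    """Pinhole model: Z = f * X_real / x_pixels."""
    return FOCAL_LENGTH_PX * ASSUMED_IPD_MM[demographic] / ipd_pixels

def locate(ipd_pixels, face_center_xy, demographic="adult",
           principal_point=(960.0, 540.0)):
    """Back-project the face centre to camera coordinates (mm)."""
    z = depth_from_ipd(ipd_pixels, demographic)
    u, v = face_center_xy
    cx, cy = principal_point
    x = (u - cx) * z / FOCAL_LENGTH_PX
    y = (v - cy) * z / FOCAL_LENGTH_PX
    return x, y, z

print(locate(ipd_pixels=42.0, face_center_xy=(1100.0, 600.0)))
```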

Keywords: monocular distancing, computer vision, facial analysis, 3D localization

Procedia PDF Downloads 143
2926 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on the electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly desirable for finding a compromise between the overhead introduced by fault-tolerance techniques and system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. In contrast to conventional vulnerability-factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered, depending on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into the configuration memory bits of the target design implemented on the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is considered acceptable, the number of counted failures is reduced by 41% to 59% compared with conventional vulnerability-factor calculation. In other words, the estimation accuracy of configuration memory vulnerability to SEUs is improved by up to 58% in the case that 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
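
A toy sketch of the ACMVF calculation: an injection campaign counts a run as a failure only when the output deviates from the golden value by more than the threshold margin. Random bit flips in a 32-bit adder's operand stand in for SEU injection into configuration memory via the SEM IP core; with a 0% margin the loop reduces to the conventional vulnerability factor.

```python
import random

# Hedged sketch: ACMVF = failures / total injections, where a failure is a
# deviation beyond the user-defined margin. The fault model (one flipped
# operand bit) is a stand-in for configuration-memory injection.

random.seed(1)

def golden_add(a, b):
    return (a + b) & 0xFFFFFFFF

def faulty_add(a, b):
    bit = 1 << random.randrange(32)        # emulate one SEU
    return ((a ^ bit) + b) & 0xFFFFFFFF

def acmvf(n_injections=10000, margin_frac=0.10):
    failures = 0
    for _ in range(n_injections):
        a, b = random.getrandbits(31), random.getrandbits(31)
        expected = golden_add(a, b)
        observed = faulty_add(a, b)
        threshold = margin_frac * max(expected, 1)
        if abs(observed - expected) > threshold:
            failures += 1
    return failures / n_injections

for margin in (0.0, 0.01, 0.10):
    print(f"margin {margin:4.0%}: ACMVF = {acmvf(margin_frac=margin):.3f}")
```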

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 199
2925 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method

Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang

Abstract:

Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has sell-in and sell-out time series data, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution using machine learning models for forecasting. Similar products are combined such that there is one model per product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to include any new product or eliminate any existing product in a product category as required. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools in Nestlé. We explored recent deep learning networks (DNNs), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach combining DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
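
A hedged sketch of the ensemble idea on synthetic data: a seasonal-naive forecast stands in for DeepAR (to keep the example self-contained), an XGBoost regressor uses lag features that include the future sell-out so that sell-out influences the sell-in prediction, and the two forecasts are blended with illustrative equal weights.

```python
import numpy as np
import xgboost as xgb

# Hedged sketch: all data, features and ensemble weights are invented; the
# seasonal-naive forecast is a placeholder for the paper's DeepAR model.

rng = np.random.default_rng(0)
T = 156                                    # three years of weekly history
week = np.arange(T)
sell_out = 100 + 10 * np.sin(2 * np.pi * week / 52) + rng.normal(0, 2, T)
sell_in = np.roll(sell_out, -2) + rng.normal(0, 3, T)  # sell-in anticipates sell-out

# Stand-in for DeepAR: seasonal naive (same week one year earlier)
deepar_like = sell_in[T - 52]

# Supervised frame: y[k] = sell_in[k]; features = sell_in[k-1], sell_in[k-2],
# sell_out[k+2] (in production the latter would itself be a forecast).
k = np.arange(2, T - 2)
X = np.column_stack([sell_in[k - 1], sell_in[k - 2], sell_out[k + 2]])
y = sell_in[k]
model = xgb.XGBRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)

x_next = np.array([[sell_in[-3], sell_in[-4], sell_out[-1]]])
xgb_pred = float(model.predict(x_next)[0])

ensemble = 0.5 * deepar_like + 0.5 * xgb_pred   # illustrative equal weights
print(f"naive {deepar_like:.1f}  xgboost {xgb_pred:.1f}  ensemble {ensemble:.1f}")
```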

Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series

Procedia PDF Downloads 276
2924 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 107
2923 Feasibility Study on the Application of Waste Materials for Production of Sustainable Asphalt Mixtures

Authors: Farzaneh Tahmoorian, Bijan Samali, John Yeaman

Abstract:

Road networks are expanding all over the world during the past few decades to meet the increasing freight volumes created by population growth and industrial development. At the same time, the rate of generation of solid wastes in society is increasing with population growth, technological development, and changes in people's lifestyles. Thus, the management of solid wastes has become an acute problem. Accordingly, there is a need for greater efficiency in the construction and maintenance of road networks to reduce the overall cost, especially the utilization of natural materials such as aggregates. An efficient means to reduce the construction and maintenance costs of road networks is to replace natural (virgin) materials with secondary, recycled materials. Recycling will also help to reduce pressure on landfills and the demand for extraction of natural virgin materials, thus ensuring sustainability. Applying solid wastes in the asphalt layer reduces not only the environmental issues associated with waste disposal but also the demand for virgin materials, subsequently contributing to sustainability. Therefore, this research aims to investigate the feasibility of applying some waste materials, such as glass and construction and demolition wastes, as alternative materials in pavement construction, particularly flexible pavements. To this end, various combinations of different waste materials in certain percentages are considered in designing the asphalt mixture. One of the goals of this research is to determine the optimum percentage of each of these materials in the mixture. This is done through a series of tests to evaluate the volumetric properties and resilient modulus of the mixture. The information and data collected from these tests are used to select adequate samples for further assessment through advanced tests, such as the triaxial dynamic test and the fatigue test, in order to investigate the asphalt mixture's resistance to permanent deformation and cracking. This paper presents the results of these investigations into the application of waste materials in asphalt mixtures for the production of a sustainable asphalt mix.

Keywords: asphalt, glass, pavement, recycled aggregate, sustainability

Procedia PDF Downloads 237
2922 Geochemical Characterization of Geothermal Waters in Albania, Preliminary Results

Authors: Aurela Jahja, Katarzyna Wątor, Arjan Beqiraj, Piotr Rusiniak, Nevton Kodhelaj

Abstract:

Albanian geological terrains represent an important node of the Alpine-Mediterranean mountain belt and are divided into several predominantly NNW-SSE-striking geotectonic units which, based on the presence or absence of the Cretaceous transgression and magmatic rocks, belong to the Internal or External Albanides. The internal (Korabi, Mirdita and Gashi) units are characterized by the Lower Cretaceous discordance and the presence of abundant magmatic rocks, whereas in the external (Alps, Krasta-Cukali, Kruja, Ionian, Sazani and Peri-Adriatic Depression) units an almost continuous sedimentation from the Triassic to the Paleogene is evidenced. The internal and external units show relevant differences in both geothermal gradient and heat flow density values. The gradient values vary from 15-21.3 to 36 mK/m, while the heat flow density ranges from 42 to 60 mW/m², in the external (Preadriatic Depression) and internal (ophiolitic belt) units, respectively. The geothermal fluids, which are found in natural springs and deep oil wells of Albania, are located in four thermo-mineral provinces: a) the Peshkopi (Korabi) province; b) the Kruja province; c) the Preadriatic basin province; and d) the South Ionian province. Thirteen geothermal waters were sampled from 11 natural springs and 2 deep wells: 6 springs and 2 wells from Kruja, 1 spring from Peshkopia, 2 springs from the Preadriatic basin and 2 springs from the South Ionian province. Temperature, pH and electrical conductivity were measured in situ, while major anions and cations and several trace elements (B, Li, Sr, Rb, I, Br, etc.) were analyzed in the laboratory by the ICP method. The measured values of temperature, pH and electrical conductivity range within the 17-63°C, 6.26-7.92 and 724-26,856 µS/cm intervals, respectively. The chemical type of the Albanian thermal waters is variable: in the Kruja province the Cl-SO4-Na-Ca and Cl-Na-Ca water types prevail, while the SO4-Ca, HCO3-Ca and Cl-HCO3-Na-Ca, and Cl-Na types are found in the Peshkopi, Ionian and Preadriatic basin provinces, respectively. In the Cl-SO4-HCO3 triangular diagram, most of the geothermal waters plot close to the chloride corner and belong to the "mature waters", typical of deep and hot geothermal fluids. Only the samples from the Ionian province are located within the region of high bicarbonate concentration; they can be classified as peripheral waters that may have mixed with cold groundwater. In the Na-Ca-Mg and Na-K-Mg triangular diagrams, the majority of waters fall in the sodium corner, suggesting that their cation ratios are controlled by mineral-solution equilibrium. There is a linear relationship between Cl and B which indicates mixing of geothermal water with cold water, whereby the low-chlorine thermal waters from the Ionian basin and Preadriatic depression provinces are distinguished from the high-chlorine thermal waters from the Kruja province. The Cl/Br molar ratio of the thermal waters from the Kruja province ranges from 1,000 to 2,660 and separates them from the thermal waters of the Ionian basin and Preadriatic depression provinces, which have Cl/Br molar ratios lower than 650. The apparent increase of the Cl/Br molar ratio, which correlates with increasing chloride, is probably related to dissolution of halite.

Keywords: geothermal fluids, geotectonic units, natural springs, deep wells, mature waters, peripheral waters

Procedia PDF Downloads 220
2921 Economic Life of Iranians on Instagram and the Disturbance in Politics

Authors: Mohammad Zaeimzade

Abstract:

The development of communication technologies is clearly and rapidly reducing the distance between the virtual and real worlds. Living in a "two-spatial" or "two-globalized" world, or any other formulation that means mixing real and virtual life, remains a relevant and debatable idea. In the present age of communication, social networks have transformed the message equation, turning the audience from passive recipients into users. Platforms have penetrated widely into various aspects of human life, from culture and education to the economy. It needs little explanation that the era of regarding every messenger as a neutral conductor merely carrying its load has passed. Every messenger has its own economic, political and, of course, security background; Instagram is no exception to this rule, and it leaves its effects on economic life as well. Iran, as the 19th largest economy in the world, has not been unaffected by new platforms, including Instagram, and their consequences for the economy. Generally, in the policy-making space there are two simple and inflexible views on this issue, one pessimistic and one optimistic, and the holders of each view usually have their own one-dimensional policy recommendations regarding how to deal with Instagram. These prescriptions are usually very different and sometimes contradictory. In this article, we show that this confusion among policymakers results from a failure to accurately describe the reality of Instagram's effect, and that the reason for this inaccurate description is a conflict of interests on the part of describers and researchers. We first take a look at the main indicators of the Iranian economy and estimate the role of the digital economy in Iran's economic growth; we then study the conflicting descriptions of the Instagram-based digital economy, including the statistics on economic users of Instagram in Iran, whose number has been estimated at anywhere from 300 thousand to 9 million. Finally, we look at the government's actions in this matter, especially in the context of the street riots of October and November 2022, and suggest an intermediate approach.

Keywords: digital economy, Instagram, conflict of interest, social networks

Procedia PDF Downloads 77
2920 Effectiveness of Interactive Integrated Tutorial in Teaching Medical Subjects to Dental Students: A Pilot Study

Authors: Mohammad Saleem, Neeta Kumar, Anita Sharma, Sazina Muzammil

Abstract:

It is observed that some of the dental students in our setting take little interest in medical subjects. Various teaching methods are currently the focus of research interest and are being tried in order to generate interest among students. An approach of interactive integrated tutorials was used to assess its feasibility in teaching medical subjects to dental undergraduates. The aim was to generate interest and promote active self-learning among students. The objectives were to (1) introduce the integrated interactive learning method through two departments, and (2) get feedback from the students and faculty on the feasibility and effectiveness of this method. Second-year students in the Bachelor of Dental Surgery course were divided into two groups. Each group was asked to study the physiology and pathology of a common and important condition (anemia and hypertension) in a week's time. During the tutorial, students asked each other questions on the physiology and pathology of that condition in the presence of teachers from both the physiology and pathology departments, with the teachers acting only as facilitators. After the session, feedback from students and faculty on this alternative learning method was obtained. Results: The majority of the students felt that this method of learning is enjoyable and helped to develop reasoning skills and the ability to correlate and integrate knowledge from two related fields. The majority of the students also felt that this kind of learning led to a better understanding of the topic and motivated them towards deep learning. Teachers observed that the exercise promoted interdepartmental, cross-discipline collaboration and better linkages among students. Conclusion: The interactive integrated tutorial is effective in motivating dental students towards better and deeper learning of medical subjects.

Keywords: active learning, education, integrated, interactive, self-learning, tutorials

Procedia PDF Downloads 316
2919 Harnessing Deep-Level Metagenomics to Explore the Three Dynamic One Health Areas: Healthcare, Domiciliary and Veterinary

Authors: Christina Killian, Katie Wall, Séamus Fanning, Guerrino Macori

Abstract:

Deep-level metagenomics offers a useful technical approach to explore the three dynamic One Health axes: healthcare, domiciliary and veterinary. There is currently limited understanding of the composition of complex biofilms, the natural abundance of AMR genes, and the occurrence of gene transfer in these ecological niches. By using a newly established small-scale complex biofilm model, COMBAT has the potential to provide new information on microbial diversity, antimicrobial resistance (AMR)-encoding gene abundance, and their transfer in complex biofilms of importance to these three One Health axes. Shotgun metagenomics has been used to sample the genomes of all microbes comprising the complex communities found in each biofilm source, and a comparative analysis between untreated and biocide-treated biofilms is described. The basic steps include the purification of genomic DNA, followed by library preparation, sequencing, and, finally, data analysis. The use of long-read sequencing facilitates the completion of metagenome-assembled genomes (MAGs). Samples were sequenced using a PromethION platform, and following quality checks, binning methods, and bespoke bioinformatics pipelines, we describe the recovery of individual MAGs to identify mobile genetic elements (MGEs) and the corresponding AMR genotypes that map to these structures. High-throughput sequencing strategies have been deployed to characterize these communities, and accurately defining the profiles of these niches is an essential step towards elucidating the impact of the microbiota on each niche's biofilm environment and its evolution.

Keywords: COMBAT, biofilm, metagenomics, high-throughput sequencing

Procedia PDF Downloads 58
2918 Free and Open Source Licences, Software Programmers, and the Social Norm of Reciprocity

Authors: Luke McDonagh

Abstract:

Over the past three decades, free and open source software (FOSS) programmers have developed new, innovative and legally binding licences that have in turn enabled the creation of innumerable pieces of everyday software, including Linux, Mozilla Firefox and Open Office. That FOSS has been highly successful in competing with 'closed source software' (e.g. Microsoft Office) is now undeniable, but in noting this success, it is important to examine in detail why this system of FOSS has been so successful. One key reason is the existence of networks or communities of programmers, who are bound together by a key shared social norm of 'reciprocity'. At the same time, these FOSS networks are not unitary – they are highly diverse and there are large divergences of opinion between members regarding which licences are generally preferable: some members favour the flexible ‘free’ or 'no copyleft' licences, such as BSD and MIT, while other members favour the ‘strong open’ or 'strong copyleft' licences such as GPL. This paper argues that without both the existence of the shared norm of reciprocity and the diversity of licences, it is unlikely that the innovative legal framework provided by FOSS would have succeeded to the extent that it has.

Keywords: open source, copyright, licensing, copyleft

Procedia PDF Downloads 375
2917 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process science representations first need to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of scholars of STEM Education from three US universities (NSF award 1540888), using mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRSoft were exported as an Excel file, with 80 each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees model consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of its predictions are accurate. The model determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements and a machine learning model for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, therefore improving science literacy.
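
The reported configuration (140 trees, maximum depth 7) can be reproduced in outline with scikit-learn in place of RapidMiner; the sketch below trains a gradient-boosted-trees classifier on synthetic stand-ins for the problem number, response time and 16 optode responses, so the data and the resulting accuracy are illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hedged sketch: synthetic features mimic the study's inputs (problem number,
# response time, 16 fNIR optode responses); the label rule is invented so the
# example is self-contained and says nothing about the real data.

rng = np.random.default_rng(0)
n = 2000
problem_no = rng.integers(1, 161, n)            # 80 dash + 80 BL tasks
response_time = rng.gamma(2.0, 1.5, n)
optodes = rng.standard_normal((n, 16))          # per-optode hemodynamic response
X = np.column_stack([problem_no, response_time, optodes])
# synthetic success label loosely driven by response time and optode #16
y = (optodes[:, 15] - 0.3 * response_time + rng.normal(0, 1, n) > -0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=140, max_depth=7)  # reported config
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
print("top feature indices:", np.argsort(clf.feature_importances_)[::-1][:3])
```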

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 122