Search results for: neural interface
396 A Numerical Investigation of Segmental Lining Joints Interactions in Tunnels
Authors: M. H. Ahmadi, A. Mortazavi, H. Zarei
Abstract:
Several authors have described the main mechanism of crack formation in segmental linings during the construction of tunnels with tunnel boring machines. A comprehensive analysis of segmental lining joints can help guarantee safe construction during the tunneling and serviceability stages. The most frequent types of segment damage are caused by uneven segment matching due to contact deficiencies. This paper investigates the interaction mechanism of precast concrete lining joints in tunnels. The Discrete Element Method (DEM) was used to analyze a typical segmental lining model consisting of six segment rings. The analyses employed typical segmental lining design parameters of the Ghomrood water conveyance tunnel, Iran. The worst-case loading scenario encountered during the boring of the Ghomrood tunnel was considered, associated with a crushed zone dipping at 75 degrees at the location of the key segment. Moreover, the effect of changes in the horizontal stress ratio on the loads acting on the segments was assessed: boundary conditions corresponding to K (the ratio of horizontal to vertical stress) values of 0.5, 1, 1.5 and 2 were applied to the model, and a separate analysis was conducted for each case. Important quantities such as stresses, moments, and displacements were measured at joint locations and in the surrounding rock, and the segment joint interactions were assessed and analyzed accordingly. Rock mass properties of the Ghomrood site in Ghom were adopted. The loads acting on the segment joints comprise the force of a crushed zone stratum intersecting the tunnel at a 75-degree dip at the location of the key segment, the gravity load of the segments, and earth pressures. The numerical investigation covered horizontal stress ratios of 0.5, 1, 1.5 and 2 and the geological condition of a saturated crushed zone under the critical scenario. The numerical results demonstrate that the maximum bending moments in the longitudinal joints occurred for the crushed zone with the weakest strength (sandstone). Furthermore, increasing the load at the segment-stratum interfaces raised the radial stress in the longitudinal joints and ultimately caused the joints to open.
Keywords: joint, interface, segment, contact
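For orientation on how the horizontal stress ratio K redistributes stresses around a circular opening, the closed-form Kirsch solution already captures the trend that the DEM model quantifies in detail. The sketch below is illustrative background only, not the paper's DEM analysis; the normalized vertical stress is an assumed placeholder.

```python
import numpy as np

# Kirsch solution: tangential stress on the wall of a circular opening
# in a biaxial in-situ field with p_h = K * p_v.  Theta is measured
# from the horizontal springline.
def wall_tangential_stress(p_v, K, theta):
    return p_v * ((1 + K) + 2 * (1 - K) * np.cos(2 * theta))

p_v = 1.0  # vertical in-situ stress (normalized, assumed)
for K in (0.5, 1.0, 1.5, 2.0):
    crown = wall_tangential_stress(p_v, K, np.pi / 2)   # tunnel crown
    spring = wall_tangential_stress(p_v, K, 0.0)        # springline
    print(f"K={K}: sigma_theta crown={crown:+.2f}, springline={spring:+.2f}")
```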
Procedia PDF Downloads 258
395 Detecting Memory-Related Gene Modules in sc/snRNA-seq Data by Deep-Learning
Authors: Yong Chen
Abstract:
Understanding the detailed molecular mechanisms of memory formation in engram cells is one of the most fundamental questions in neuroscience. Recent single-cell RNA-seq (scRNA-seq) and single-nucleus RNA-seq (snRNA-seq) techniques have allowed us to explore sparsely activated engram ensembles, enabling access to the molecular mechanisms that underlie experience-dependent memory formation and consolidation. However, the absence of specific and powerful computational methods to detect memory-related genes (modules) and their regulatory relationships in sc/snRNA-seq datasets has strictly limited the analysis of the underlying mechanisms and memory coding principles in mammalian brains. Here, we present a deep-learning method named SCENTBOX to detect memory-related gene modules and causal regulatory relationships among them from sc/snRNA-seq datasets. SCENTBOX first constructs a co-differential expression gene network (CEGN) from case versus control sc/snRNA-seq datasets. It then detects highly correlated modules of differentially expressed genes (DEGs) in the CEGN. Deep network embedding and attention-based convolutional neural network strategies are employed to precisely detect regulatory relationships among the DEGs in a module. We applied SCENTBOX to scRNA-seq datasets of TRAP;Ai14 mouse neurons with fear memory and detected not only known memory-related genes but also their modules and potential causal regulations. Our results provide novel regulations within an interesting module including Arc, Bdnf, Creb, Dusp1, Rgs4, and Btg2. Overall, our method provides a general computational tool for processing sc/snRNA-seq data from case versus control studies and a systematic investigation of fear-memory-related gene modules.
Keywords: sc/snRNA-seq, memory formation, deep learning, gene module, causal inference
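The abstract does not spell out how the CEGN is constructed; one plausible minimal reading is to link gene pairs whose co-expression shifts between case and control. The sketch below illustrates that reading only; the threshold, matrix shapes, and random data are assumptions, not SCENTBOX's actual definition.

```python
import numpy as np

def codifferential_network(case, ctrl, thresh=0.5):
    """Build a co-differential expression network from two
    cells-by-genes expression matrices (case vs. control).

    An edge links two genes whose pairwise correlation changes
    strongly between conditions: a hypothetical reading of the
    CEGN step, not the authors' exact definition.
    """
    r_case = np.corrcoef(case, rowvar=False)   # genes x genes
    r_ctrl = np.corrcoef(ctrl, rowvar=False)
    delta = np.abs(r_case - r_ctrl)
    adj = (delta > thresh).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

rng = np.random.default_rng(0)
case = rng.normal(size=(200, 50))   # 200 cells, 50 genes (synthetic)
ctrl = rng.normal(size=(200, 50))
print(codifferential_network(case, ctrl).sum(), "edges")
```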
Procedia PDF Downloads 120
394 Communities as a Source of Evidence: A Case of Advocating for Improved Human Resources for Health in Uganda
Authors: Asinguza P. Allan
Abstract:
The Advocacy for Better Health project aims to equip citizens with an enabling environment and systems to effectively advocate for strong action plans to improve health services. This is because the 2020 Government target for Uganda to transform into a middle-income country will be achieved only if investment is made in keeping the population healthy and productive. Citizen participation has been emphasized as an important foundation for change: data are gathered through participatory rural appraisal to inform evidence-based advocacy for the recruitment and motivation of human resources. Citizens conduct problem ranking during advocacy forums on staffing levels and health worker absenteeism, and they prioritised the inadequate number of midwives and absenteeism. On triangulation, the health worker to population ratio in Uganda remains at 0.25/1,000, far below the World Health Organization (WHO) threshold of 2.3/1,000. Working with IntraHealth, the project advocated for the recruitment of critical skilled staff (doctors and midwives) and the scale-up of a health worker motivation strategy to reduce Uganda's Neonatal Mortality Rate of 22/1,000 and Maternal Mortality Ratio of 320/100,000. Government has committed to increase staffing to 80% by 2018 (10 districts have passed ordinances and revived the use of duty rosters to address health worker absenteeism). In parallel, the better-health advocacy debate has been elevated by the need to increase health sector budget allocations from 8% to 10%. The project has learnt that building a body of evidence from citizens enhances the advocacy agenda. Communities will further monitor government commitments to reduce the Neonatal Mortality Rate and Maternal Mortality Ratio. The project has also learnt that interface meetings between duty bearers and the community allow for immediate feedback, and the process is a strong instrument for empowerment. It facilitates the monitoring and performance evaluation of services, projects and government administrative units (such as district assemblies) by the community members themselves. This, in turn, makes human resources for health accountable, transparent and responsive to the communities where they work, and so promotes human resource performance.
Keywords: advocacy, empowerment, evidence, human resources
Procedia PDF Downloads 217
393 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk pupils. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying the features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data were gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
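Hard majority voting over heterogeneous base learners is available directly in scikit-learn. The sketch below shows the pattern with the three supervised learners named in the abstract, on synthetic data standing in for the merged student records; the hyperparameters are placeholders, not the paper's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the merged student-information + e-learning table.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hard majority vote over the three base learners named in the abstract.
vote = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=5, random_state=42)),
        ("svm", SVC(kernel="rbf", random_state=42)),
        ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=42)),
    ],
    voting="hard",
)
print("CV accuracy:", cross_val_score(vote, X, y, cv=5).mean())
```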
Procedia PDF Downloads 109
392 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations
Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang
Abstract:
Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows the impact of the dissemination to be controlled in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states, or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which prevents stacking sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNN-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
Keywords: source identification, ordinary differential equations, label propagation, complex networks
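To make the GNN-coupled ODE idea concrete, a minimal sketch can integrate a continuous diffusion of node state vectors on a normalized graph Laplacian with forward Euler. This is a simplified stand-in for ODESI's learned dynamics, with the toy graph, step size, and initial state all assumed.

```python
import numpy as np

def graph_ode_propagation(A, X0, t_end=1.0, steps=100):
    """Forward-Euler integration of dX/dt = -L_sym X, a continuous
    analogue of label propagation on a graph (a simplified stand-in
    for the GNN-coupled ODE block in ODESI, not the authors' model)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    X, dt = X0.copy(), t_end / steps
    for _ in range(steps):
        X = X - dt * (L @ X)    # each step diffuses the state vectors
    return X

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)      # toy path graph
X0 = np.array([[1.0], [0.0], [0.0], [0.0]])  # source node's initial state
print(graph_ode_propagation(A, X0).ravel())
```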
Procedia PDF Downloads 22
391 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks
Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method consisting of a hybrid dual-optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases; combined with an initialization strategy based on low-discrepancy sampling, this allows a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks from silicon-based materials. Installations in the raw ceramics (brick) industry are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, that minimize the emissions, represented by CO and CH4. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.
Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions
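A minimal sketch of the global-then-local idea, assuming a Sobol low-discrepancy initialization and a generic fitness function: a pruned population search stands in for the modified ICA, and L-BFGS-B provides the gradient-based refinement. This is not the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def hybrid_optimize(f, dim, n_pop=64, n_iter=30, keep=0.25, seed=0):
    """Global-then-local sketch of the dual-optimizer idea: a
    low-discrepancy-initialized population with aggressive pruning
    (standing in for the modified ICA), followed by gradient-based
    refinement of the best survivor.  Illustrative only."""
    rng = np.random.default_rng(seed)
    pop = qmc.Sobol(d=dim, scramble=True, seed=seed).random(n_pop) * 4 - 2
    for _ in range(n_iter):
        fit = np.apply_along_axis(f, 1, pop)
        elite = pop[np.argsort(fit)[: int(keep * n_pop)]]  # prune low fitness
        # refill around the elites (crude assimilation/revolution step)
        noise = rng.normal(scale=0.3, size=(n_pop - len(elite), dim))
        pop = np.vstack([elite, elite[rng.integers(len(elite),
                                                   size=len(noise))] + noise])
    best = pop[np.argmin(np.apply_along_axis(f, 1, pop))]
    return minimize(f, best, method="L-BFGS-B").x  # local descent

rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(hybrid_optimize(rosen, dim=2))  # should approach [1, 1]
```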
Procedia PDF Downloads 82
390 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization
Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed
Abstract:
Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Omar Hadj-Sahraoui et al., 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of the interferograms. We develop a deep learning-based model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
Keywords: CNN-SAR, Lee filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction
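The Lee filter component is a classic local-statistics speckle filter and can be stated compactly. The sketch below is one standard formulation applied to a synthetic speckled image, not the paper's CNN-coupled pipeline; the window size and global noise estimate are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    """Classic Lee speckle filter: a local linear MMSE estimate
    img_hat = mean + k * (img - mean), with the gain k driven by the
    ratio of local signal variance to noise variance."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = sq_mean - mean ** 2
    if noise_var is None:                     # crude global noise estimate
        noise_var = np.mean(var)
    k = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + k * (img - mean)

rng = np.random.default_rng(1)
clean = np.outer(np.hanning(128), np.hanning(128))
speckled = clean * rng.gamma(4.0, 0.25, clean.shape)  # multiplicative noise
print("residual variance:", np.var(lee_filter(speckled) - clean))
```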
Procedia PDF Downloads 14
389 Identification of Blood Biomarkers Unveiling Early Alzheimer's Disease Diagnosis Through Single-Cell RNA Sequencing Data and Autoencoders
Authors: Hediyeh Talebi, Shokoofeh Ghiam, Changiz Eslahchi
Abstract:
Traditionally, Alzheimer's disease research has focused on genes with significant fold changes, potentially neglecting subtle but biologically important alterations. Our study introduces an integrative approach that highlights genes crucial to underlying biological processes, regardless of their fold-change magnitude. Alzheimer's single-cell RNA-seq data from peripheral blood mononuclear cells (PBMC) were extracted from the Gene Expression Omnibus (GEO). After quality control, normalization, scaling, batch-effect correction, and clustering, differentially expressed genes (DEGs) were identified with adjusted p-values less than 0.05. These DEGs were categorized by cell type, resulting in four datasets, each corresponding to a distinct cell type. To distinguish between cells from healthy individuals and those with Alzheimer's, an adversarial autoencoder with a classifier was employed, allowing the separation of healthy and diseased samples. To identify the most influential genes in this classification, the weight matrices of the network, which includes the encoder and classifier components, were multiplied, and the analysis focused on the top 20 genes. The analysis revealed that while some of these genes exhibit a high fold change, others do not. These genes, which may be overlooked by previous methods due to their low fold change, were shown to be significant in our study. The findings highlight the critical role of genes with subtle alterations in diagnosing Alzheimer's disease, a facet frequently overlooked by conventional methods. These genes demonstrate remarkable discriminatory power, underscoring the need to integrate biological relevance with statistical measures in gene prioritization. This integrative approach enhances our understanding of the molecular mechanisms of Alzheimer's disease and provides a promising direction for identifying potential therapeutic targets.
Keywords: Alzheimer's disease, single-cell RNA-seq, neural networks, blood biomarkers
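The weight-multiplication step can be read as a linearized influence score: multiplying the encoder and classifier weight matrices and ranking input genes by the magnitude of the product. The sketch below illustrates that reading with random weights; it ignores nonlinearities and is not the authors' trained network.

```python
import numpy as np

def rank_genes(W_encoder, W_classifier, gene_names, top_k=20):
    """Rank input genes by the magnitude of their end-to-end linear
    influence on the classifier output, approximated by multiplying
    the layer weight matrices (a linearized reading of the paper's
    weight-multiplication step; nonlinear activations are ignored)."""
    # W_encoder: (n_genes, latent), W_classifier: (latent, n_classes)
    influence = np.abs(W_encoder @ W_classifier).sum(axis=1)
    order = np.argsort(influence)[::-1][:top_k]
    return [(gene_names[i], float(influence[i])) for i in order]

rng = np.random.default_rng(7)
genes = [f"gene_{i}" for i in range(500)]
W_enc, W_clf = rng.normal(size=(500, 32)), rng.normal(size=(32, 2))
for name, score in rank_genes(W_enc, W_clf, genes, top_k=5):
    print(name, round(score, 2))
```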
Procedia PDF Downloads 67
388 Perinatal Ethanol Exposure Modifies CART System in Rat Brain Anticipated for Development of Anxiety, Depression and Memory Deficits
Authors: M. P. Dandekar, A. P. Bharne, P. T. Borkar, D. M. Kokare, N. K. Subhedar
Abstract:
Ethanol ingestion by the mother entails adverse consequences for her offspring. Herein, we examine the behavioral phenotype and neural substrate of the offspring of mothers on ethanol. Female rats were fed an ethanol-containing liquid diet from 8 days prior to conception until 25 days post-parturition, to coincide with weaning. Behavioral changes associated with anxiety, depression, and learning and memory were assessed in the offspring after they attained adulthood (day 85), using the elevated plus maze (EPM), forced swim test (FST) and novel object recognition test (NORT), respectively. The offspring of alcoholic mothers, compared to those of pair-fed mothers, spent significantly more time in the closed arms of the EPM and showed more immobility time in the FST. Offspring at the ages of 25 and 85 days failed to discriminate between novel and familiar objects in the NORT, reflecting anxiogenic, depressive and amnesic phenotypes. The neuropeptide cocaine- and amphetamine-regulated transcript peptide (CART) is known to be involved in the central effects of ethanol and was hence selected for the current study. Twenty-five-day-old pups of alcoholic mothers showed significant augmentation of CART immunoreactivity in the cells of the Edinger-Westphal (EW) nucleus and lateral hypothalamus. However, a significant decrease in CART immunoreactivity was seen in the nucleus accumbens shell (AcbSh), the lateral part of the bed nucleus of the stria terminalis (BNSTl), the locus coeruleus (LC), the hippocampus (CA1, CA2 and CA3), and the arcuate nucleus (ARC) of the pups and/or adult offspring. While no change in the CART-immunoreactive fibers of the AcbSh, BNSTl, CA2 and CA3 was noticed in the 25-day-old pups, the CART-immunoreactive cells in the EW and paraventricular nucleus (PVN), and the fibers in the central nucleus of the amygdala, of 85-day-old offspring remained unaffected. We suggest that the endogenous CART system in these discrete areas, among other factors, may be causal to the abnormalities in the next generation of an alcoholic mother.
Keywords: anxiety, depression, CART, ethanol, immunocytochemistry
Procedia PDF Downloads 395
387 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict the eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied at two different levels, namely the single-participant level and the group level. For the single-participant level, the EEG datasets used in the first and second stages originated from the same participant. For the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination without repetition of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction was 70.03 ± 8.14% for the single-participant level and 62.63 ± 6.07% for the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results show the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: brain-computer interface, electroencephalography, finger motion decoding, independent component analysis, pseudo real-time motion decoding
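A minimal sketch of the two-stage pipeline, assuming synthetic EEG-shaped data: an ICA unmixing matrix is learned on one dataset and reused on another, with an L1-penalized logistic regression standing in for SLR. On random data the accuracy stays near chance; the point is the reuse pattern, not the numbers.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

# Stage-1/stage-2 skeleton: learn unmixing and classifier on one EEG
# dataset, then reuse both on a second dataset.  Shapes are synthetic
# placeholders, and IC selection is skipped for brevity.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 64))   # samples x EEG channels
y_train = rng.integers(0, 8, 2000)      # eight movement directions
X_test = rng.normal(size=(500, 64))
y_test = rng.integers(0, 8, 500)

ica = FastICA(n_components=20, random_state=0)
S_train = ica.fit_transform(X_train)    # ICs; brain ICs would be
S_test = ica.transform(X_test)          # hand-selected in practice

# L1-penalized logistic regression as a stand-in for SLR.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(S_train, y_train)
print("test accuracy:", clf.score(S_test, y_test))  # ~chance on noise
```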
Procedia PDF Downloads 138
386 Flexible Current Collectors for Printed Primary Batteries
Authors: Vikas Kumar
Abstract:
Portable batteries are a reliable source of mobile energy to power smart wearable electronics, medical devices, communications, and other Internet of Things (IoT) devices. Demand is continuously increasing for thinner, more flexible batteries with high energy density and reliability. For a flexible battery, the factors that affect these properties are the stability of the current collectors, the electrode materials and their interfaces with the corrosive electrolytes. State-of-the-art conventional and flexible batteries utilise carbon as the electrode and current collector, which causes high internal resistance (~100 ohms) and limits the peak current to ~1 mA, making them unsuitable for a wide range of applications. Replacing the carbon parts with metallic components would reduce the internal resistance (and hence the parasitic loss) but significantly increases the risk of corrosion due to galvanic interactions within the battery. To overcome these challenges, low-cost electroplated nickel (Ni) on copper (Cu) was studied as a potential anode current collector for a zinc-manganese oxide primary battery with different concentrations of NH4Cl/ZnCl2 electrolyte. Using electrical impedance spectroscopy (EIS), we monitored the open circuit potential (OCP) of electroplated nickel of different thicknesses in different concentrations of electrolyte to optimise the thickness of the Ni coating. Our results show that electroless Ni coatings suffer excessive corrosion in these electrolytes. Corrosion rates of the Ni coatings for different concentrations of electrolyte were calculated with Tafel analysis. These results suggest that for electroplated Ni, channelling and/or open porosity is a major issue, which was confirmed by morphological analysis: these channels are an easy pathway for the electrolyte to penetrate through the Ni and corrode the Ni/Cu interface completely. We further investigated the incorporation of a special printed graphene layer on the Ni to provide corrosion protection in this corrosive electrolyte medium. We find that the printed graphene layer protects the Ni from corrosion, enhances the chemical bonding between the active materials and the current collector, and also decreases the overall internal resistance of the battery system.
Keywords: corrosion, electrical impedance spectroscopy, flexible battery, graphene, metal current collector
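The corrosion-rate arithmetic behind a Tafel analysis follows standard Stern-Geary and ASTM G102 relations. The sketch below applies them with illustrative inputs; the Tafel slopes and polarization resistance are assumed values, not the paper's measurements.

```python
# Corrosion-rate arithmetic of the kind used in a Tafel analysis
# (standard Stern-Geary / ASTM G102 relations; the numbers below are
# illustrative, not the paper's measured values).

def i_corr_stern_geary(beta_a, beta_c, Rp):
    """Corrosion current density (A/cm^2) from Tafel slopes (V/decade)
    and polarization resistance Rp (ohm*cm^2)."""
    B = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))
    return B / Rp

def corrosion_rate_mm_per_year(i_corr_uA_cm2, eq_weight, density):
    # ASTM G102: CR = 3.27e-3 * i_corr[uA/cm^2] * EW / rho[g/cm^3]
    return 3.27e-3 * i_corr_uA_cm2 * eq_weight / density

i_corr = i_corr_stern_geary(beta_a=0.12, beta_c=0.12, Rp=5e3)  # assumed
# Nickel: equivalent weight ~29.35 g (Ni -> Ni2+), density 8.90 g/cm^3.
print("CR =", corrosion_rate_mm_per_year(i_corr * 1e6, 29.35, 8.90), "mm/yr")
```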
Procedia PDF Downloads 129
385 Development of Numerical Method for Mass Transfer across the Moving Membrane with Selective Permeability: Approximation of the Membrane Shape by Level Set Method for Numerical Integral
Authors: Suguru Miyauchi, Toshiyuki Hayase
Abstract:
Biological membranes have selective permeability, and capsules or cells enclosed by a membrane deform under osmotic flow. This mass transport phenomenon is observed everywhere in a living body. To understand mass transfer in a body, it is necessary to consider the mass transfer phenomenon across the membrane as well as the deformation of the membrane by a flow. To our knowledge, a numerical method for mass transfer across a moving membrane has not been established, owing to the difficulty of treating the mass flux permeating through a moving membrane with selective permeability. In existing methods for mass transfer across a membrane, an approximate delta function is used to communicate quantities across the interface. These methods can reproduce permeation of the solute but cannot reproduce non-permeation, and their computational accuracy decreases as the permeability coefficient of the membrane decreases. This study aims to develop a numerical method capable of treating three-dimensional problems of mass transfer across a moving flexible membrane. One of the authors previously developed a high-accuracy numerical method based on the finite element method, which captures the discontinuity at the membrane sharply by considering the jumps in concentration and concentration gradient in the finite element discretization. The formulation takes the membrane movement into account, and both permeable and non-permeable membranes can be treated. However, searching for the cross points of the membrane and fluid element boundaries and splitting the fluid elements into sub-elements are needed for the numerical integral, which is a cumbersome operation for a three-dimensional problem. In this paper, we propose an improved method that avoids the search and split operations and confirm its effectiveness. The membrane shape is treated implicitly by introducing a level set function, constructed by expressing the membrane shape within each fluid element through the shape functions of the finite element method. Numerical experiments showed that third-order shape functions reproduce the membrane shapes appropriately. The same level of accuracy as the previous method, which used search and split operations, was achieved by using a sufficient number of sampling points in the numerical integral. The effectiveness of the method was confirmed by solving several model problems.
Keywords: finite element method, level set method, mass transfer, membrane permeability
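The search-free quadrature idea can be illustrated by integrating over only one side of a level set using sampling points, without splitting elements at the interface. The sketch below uses Monte Carlo sampling on a circular level set for brevity; the paper itself uses shape-function-based level sets and structured sampling points.

```python
import numpy as np

def integrate_one_side(f, phi, n_samples=20000, seed=0):
    """Integrate f over the part of the unit square where the level
    set phi is positive, using sampling points instead of explicitly
    splitting elements at the interface (the kind of search-free
    quadrature the paper adopts; Monte Carlo here for brevity)."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_samples, 2))
    inside = phi(pts) > 0.0               # membrane side selected by sign
    return f(pts[inside]).sum() / n_samples  # area-weighted sum

# Circular "membrane" of radius 0.4 centered in the element.
phi = lambda p: 0.4 ** 2 - ((p[:, 0] - 0.5) ** 2 + (p[:, 1] - 0.5) ** 2)
f = lambda p: np.ones(len(p))             # f = 1 recovers the enclosed area
print("area ~", integrate_one_side(f, phi), "(exact:", np.pi * 0.4 ** 2, ")")
```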
Procedia PDF Downloads 251
384 Evaluation of NoSQL in the Energy Marketplace with GraphQL Optimization
Authors: Michael Howard
Abstract:
The growing popularity of electric vehicles in the United States requires an ever-expanding infrastructure of commercial DC fast charging stations. The U.S. Department of Energy estimates 33,355 publicly available DC fast charging stations as of September 2023; by comparison, 115,370 gasoline stations were operating in the United States in 2017, far more ubiquitous than DC fast chargers. Range anxiety is an important impediment to the adoption of electric vehicles and is even more relevant in underserved regions of the country. The peer-to-peer energy marketplace helps fill the demand by allowing private home and small business owners to rent out their 240-volt, level-2 charging facilities. The existing, publicly accessible outlets are wrapped with a Cloud-connected microcontroller managing security and charging sessions. These microcontrollers act as edge devices communicating with a Cloud message broker, while both buyer and seller users interact with the framework via a web-based user interface. The database storage used by the marketplace framework is a key component in both the cost of development and the performance that shapes the user experience. A traditional storage solution is the SQL database: its architecture and query language have existed since the 1970s and are well understood and documented, and the Structured Query Language supported by the query engine provides fine granularity in user query conditions. However, difficulty in scaling across multiple nodes and the cost of server-based compute have driven a trend over the last 20 years towards NoSQL, serverless approaches. In this study, we evaluate NoSQL versus SQL solutions through a comparison of the Google Cloud Firestore and Cloud SQL MySQL offerings. The comparison pits Google's serverless, document-model, non-relational NoSQL service against its server-based, table-model, relational SQL service. The evaluation is based on query latency, flexibility/scalability, and cost criteria. Through benchmarking and analysis of the architecture, we determine whether Firestore can support the energy marketplace's storage needs and whether the introduction of a GraphQL middleware layer can overcome its deficiencies.
Keywords: non-relational, relational, MySQL, mitigate, Firestore, SQL, NoSQL, serverless, database, GraphQL
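A latency comparison of this kind reduces to timing the same logical query against both backends. The harness below is a minimal sketch with placeholder query functions; the real Firestore and Cloud SQL client calls, credentials, and query shapes would be substituted in.

```python
import statistics
import time

def bench(query_fn, n=50):
    """Wall-clock latency benchmark for comparing backends.
    `query_fn` is any zero-argument callable that runs one query
    (e.g., a Firestore document fetch or a MySQL SELECT); the client
    objects themselves are assumed to be configured elsewhere."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        query_fn()
        samples.append((time.perf_counter() - t0) * 1e3)  # ms
    return statistics.median(samples), max(samples)

# Hypothetical wrappers: substitute real Firestore / Cloud SQL calls.
def firestore_lookup():
    time.sleep(0.002)   # placeholder for a document fetch

def mysql_lookup():
    time.sleep(0.003)   # placeholder for a SELECT over the same data

for name, fn in [("Firestore", firestore_lookup), ("MySQL", mysql_lookup)]:
    med, worst = bench(fn)
    print(f"{name}: median {med:.2f} ms, worst {worst:.2f} ms")
```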
Procedia PDF Downloads 63
383 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of applications, these vehicles pose a huge threat to civil and military installations: detection, classification and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks from flying birds make the drone detection task especially challenging: the operator's PPI (plan position indicator) is cluttered with a huge number of potential threats, and reaction time can be severely affected. Flying birds show velocity, radar cross-section and, in general, characteristics similar to those of UAVs. Building on the absence of any single feature able to distinguish UAVs from birds, this paper takes a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds from UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed, target-movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, ...), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).
Keywords: birds, classification, machine learning, UAVs
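A genetic algorithm for feature selection can be stated in a few lines: individuals are boolean masks over the track features, and fitness is cross-validated accuracy. The sketch below is a toy version on synthetic data; the population size, rates, and fitness model are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Individuals are boolean masks over track features (velocity,
# trajectory, signal-strength statistics, ...); fitness is CV accuracy.
# The data and fitness model are placeholders for the real radar tracks.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]              # selection
    cut = rng.integers(1, X.shape[1], size=10)
    kids = np.array([np.concatenate([parents[i % 10][:c],
                                     parents[(i + 1) % 10][c:]])
                     for i, c in enumerate(cut)])        # crossover
    kids ^= rng.random(kids.shape) < 0.02                # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected", best.sum(), "features, accuracy", round(fitness(best), 3))
```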
Procedia PDF Downloads 223
382 Academic Identities in Transition
Authors: Caroline Selai, Sushrut Jadhav
Abstract:
Background: University College London (UCL), the first secular university in England to admit students regardless of their religion and gender, has nearly 29,000 students, of which approximately 30% are international students. The UCL Cultural Consultation Service (CCS) for staff and students is a unique service that assists staff and students experiencing challenges in their teaching, enabling, support work or studies which they believe may have a cultural component. The service provides one-to-one and group consultations, lectures, seminars, 'grand rounds', interactive workshops and bespoke interventions. Data: This paper presents a content analysis of CCS referrals over the last 36 months. We focus on the experience of international students, many of whom face not only a challenge to their academic identity but also a profound challenge to their personal cultural identity. We also present three vignettes to illustrate how students interpret, accept, contest and resist changes in their cultural and academic identity. Discussion: This paper highlights (i) how students from collectivist cultures attempt to assimilate within an individualistic, highly competitive western university bound by its own institutional norms; (ii) problems in negotiating challenges at the interface of culture and gender; (iii) the impact of culturally different hierarchies of power, discrimination and authority; and (iv) the significance of earlier traumatic and kinship conflicts. Many international students' social identities are shaped by their cultural and family scripts. A large number have been taught that their teachers are to be revered and their teachings unchallenged. This is at odds with the quintessential goal of the western university: to encourage healthy scepticism and hone students' critical thinking skills. Conclusions: Pupil-teacher 'cultural transference' and shifts in students' cultural academic identities underscore critical aspects of developmental and learning challenges for students. Staff-student cultural conflict requires a broader, systemic analysis of students, staff and the wider organisation. Our findings challenge Eurocentric psychodynamic concepts such as the nature of the parent-child relationship in Western Europe. We argue for a broader, more inclusive approach to develop both effective pedagogic skills in euro-american academic institutions and culturally appropriate psychodynamic theory to underpin the counselling of international students.
Keywords: academic identity, cultural transference, cultural consultation in higher education, cultural formulation, cultural identity
Procedia PDF Downloads 461
381 Controlled Doping of Graphene Monolayer
Authors: Vedanki Khandenwal, Pawan Srivastava, Kartick Tarafder, Subhasis Ghosh
Abstract:
We present here the experimental realization of controlled doping of graphene monolayers through charge transfer, by trapping selected organic molecules between the graphene layer and the underlying substrate. This charge transfer between graphene and the trapped molecule leads to controlled n-type or p-type doping of monolayer graphene (MLG), depending on whether the trapped molecule acts as an electron donor or an electron acceptor. Doping controllability has been validated by shifts in the corresponding Raman peak positions and in the Dirac points. In the transfer characteristics of field-effect transistors, a significant shift of the Dirac point towards the positive or negative gate voltage region provides the signature of p-type or n-type doping of graphene, respectively, as a result of the charge transfer between graphene and the trapped organic molecules. To facilitate the charge transfer interaction, it is crucial for the trapped molecules to be situated in close proximity to the graphene surface, as demonstrated by findings from Raman and infrared spectroscopies. However, the mechanism responsible for this charge transfer interaction has remained unclear at the microscopic level. It is generally accepted that the dipole moment of adsorbed molecules plays a crucial role in determining the charge-transfer interaction between molecules and graphene. However, our findings clearly illustrate that the doping effect depends primarily on the reactivity of the constituent atoms of the adsorbed molecules rather than just their dipole moment. This has been illustrated by trapping various molecules at the graphene-substrate interface. Dopant molecules such as acetone (containing highly reactive oxygen atoms) promote adsorption across the entire graphene surface. In contrast, molecules with less reactive atoms, such as acetonitrile, tend to adsorb at the edges due to the presence of reactive dangling bonds. In the case of low-dipole-moment molecules like toluene, there is no substantial adsorption anywhere on the graphene surface. The observation of (i) the emergence of the Raman D peak exclusively at the edges for trapped molecules without reactive atoms, and throughout the entire basal plane for those with reactive atoms, and (ii) variations in the density of molecules (with and without reactive atoms) attached to graphene with their respective dipole moments provides compelling evidence to support our claim. Additionally, these observations were supported by first-principles density functional calculations.
Keywords: graphene, doping, charge transfer, liquid phase exfoliation
Procedia PDF Downloads 65
380 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate the inaccuracies, weaknesses, and biases of any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. The top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
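Mean decrease in accuracy is available in scikit-learn as permutation importance. The sketch below shows the ranking step on synthetic features standing in for season, weekday, weather-forecast, and past-pollutant variables; it is not the study's trained model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# "Mean decrease in accuracy" ranking of the kind used to find the top
# air-quality predictors; features are synthetic stand-ins.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=20,
                                random_state=0, scoring="accuracy")
for idx in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {idx}: accuracy drop {result.importances_mean[idx]:.3f}")
```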
Procedia PDF Downloads 130
379 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour
Authors: Libor Zachoval, Daire O Broin, Oisin Cawley
Abstract:
E-learning platforms such as Blackboard have two major shortcomings: limited data capture, a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms that could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, which opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder a learner's knowledge development. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course: trying each option of a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging a learner's mastery from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models that model a learner's knowledge mastery more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be used to make changes to a course. For example, the course content may require modification (certain sections of the learning material may turn out not to help many learners master the intended learning outcomes), or the course design may (such as the type and duration of feedback).
Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI
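Detecting the repeat-the-test behaviour reduces to counting 'attempted' statements per learner before a 'passed' statement arrives. The sketch below assumes a simplified xAPI statement layout (actor mbox plus ADL verb IDs) and reuses the abstract's four-attempt threshold; the real rules derived from the review would be richer.

```python
from collections import defaultdict

def flag_guessers(statements, attempt_threshold=4):
    """Flag learners who pass only after repeated test attempts,
    from a stream of xAPI statements.  The statement layout below
    follows the xAPI spec's actor/verb structure; the threshold of
    four attempts mirrors the abstract's finding."""
    attempts = defaultdict(int)
    flagged = set()
    for st in statements:
        learner = st["actor"]["mbox"]
        if st["verb"]["id"].endswith("/attempted"):
            attempts[learner] += 1
        elif st["verb"]["id"].endswith("/passed"):
            if attempts[learner] >= attempt_threshold:
                flagged.add(learner)
    return flagged

stream = (
    [{"actor": {"mbox": "mailto:a@x.ie"},
      "verb": {"id": "http://adlnet.gov/expapi/verbs/attempted"}}] * 5
    + [{"actor": {"mbox": "mailto:a@x.ie"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/passed"}}]
)
print(flag_guessers(stream))  # {'mailto:a@x.ie'}
```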
Procedia PDF Downloads 122
378 Census and Mapping of Oil Palms Over Satellite Dataset Using Deep Learning Model
Authors: Gholba Niranjan Dilip, Anil Kumar
Abstract:
Accurate and reliable mapping of oil palm plantations and a census of individual palm trees is a huge challenge. This study addresses that challenge with an optimized solution implementing deep learning techniques on remote sensing data. The oil palm is a very important tropical crop: to improve its productivity and land management, it is imperative to have an accurate census over large areas. Since manual census is costly and prone to approximations, a methodology for automated census using panchromatic images from the Cartosat-2, SkySat and WorldView-3 satellites is demonstrated. Two different study sites in Indonesia were selected. A customized set of training data and ground-truth data was created for this study from Cartosat-2 images. The pre-trained Single Shot MultiBox Detector (SSD) Lite MobileNet V2 Convolutional Neural Network (CNN) model from the TensorFlow Object Detection API was subjected to transfer learning on this customized dataset. The SSD model is able to generate a bounding box for each oil palm and to count the palms with good accuracy on the panchromatic images; detection yielded an F-score of 83.16% on seven different images. The detections are buffered and dissolved to generate polygons demarcating the boundaries of the oil palm plantations. This provided the area under the plantations as well as maps of their locations, thereby completing the automated census with a fairly high accuracy (≈100%). The trained CNN was found competent enough to detect oil palm crowns in images obtained from multiple satellite sensors and of varying temporal vintage, and it helped to estimate the increase in oil palm plantations from 2014 to 2021 in the study area. The study proved that high-resolution panchromatic satellite imagery can successfully be used to undertake a census of oil palm plantations using CNNs.
Keywords: object detection, oil palm tree census, panchromatic images, single shot multibox detector
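The buffer-and-dissolve step maps naturally onto standard geometry operations. The sketch below assumes per-palm bounding boxes in a projected metric CRS and an illustrative buffer distance; it shows the count, dissolve, and area readout rather than the full TensorFlow detection pipeline.

```python
from shapely.geometry import box
from shapely.ops import unary_union

# From per-palm bounding boxes to plantation polygons: buffer each
# detection and dissolve the overlaps, then read off count and area.
# Coordinates are assumed to be metres in some projected CRS.
detections = [(0, 0, 8, 8), (10, 2, 18, 10), (13, 12, 21, 20),
              (60, 60, 68, 68)]  # (minx, miny, maxx, maxy) per palm

palm_count = len(detections)
buffered = [box(*b).buffer(5.0) for b in detections]   # 5 m buffer (assumed)
plantations = unary_union(buffered)                    # dissolve

polys = (plantations.geoms if plantations.geom_type == "MultiPolygon"
         else [plantations])
print(f"{palm_count} palms in {len(list(polys))} plantation blocks, "
      f"total area {plantations.area:.0f} m^2")
```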
Procedia PDF Downloads 161
377 Science and Mathematics Instructional Strategies, Teaching Performance and Academic Achievement in Selected Secondary Schools in Upland
Authors: Maria Belen C. Costa, Liza C. Costa
Abstract:
Teachers have an important influence on students' academic achievement. They play a crucial role in educational attainment because they stand at the interface of the transmission of knowledge, values, and skills in the learning process through the instructional strategies they employ in the classroom. The level of achievement of students in school depends on the degree of effectiveness of the instructional strategies used by the teacher. Thus, this study was conceptualized and conducted to examine the instructional strategies preferred and used by Science and Mathematics teachers and the impact of those strategies on their teaching performance and students' academic achievement in Science and Mathematics. The participants comprised 61 teachers, chosen through total enumeration, and 610 students, selected using a two-stage random sampling technique. A descriptive correlation design was used, with a self-made questionnaire as the main tool in the data-gathering procedure. Relationships among variables were tested and analyzed using the Spearman rank correlation coefficient and the Wilcoxon signed-rank statistic. The teacher participants mainly belonged to the 'young' age group (35 years and below), and most were females who were 'very much experienced' (16 years and above) in teaching. Teaching performance was found to be 'very satisfactory', while academic achievement in Science and Mathematics was found to be 'satisfactory'. The teacher participants' demographic profile and teaching performance were found to be 'not significant' with respect to their instructional strategy preferences, implying that the teachers' age, sex, level of education and length of service do not affect their preference for a particular instructional strategy. However, the teacher participants' extent of use of the different instructional strategies was found to be 'significant' with respect to their teaching performance: the instructional strategies used by the teachers had a direct effect on their teaching performance. Students' academic achievement was found to be 'significant' with respect to the teacher participants' instructional strategy preferences: the teachers' preferences had a significant effect on the students' academic performance. On the other hand, the teacher participants' extent of use of instructional strategies was shown to be 'not significant' with respect to students' academic achievement in Science and Mathematics: the strategies actually used by the teachers did not affect the students' level of performance. The results also revealed a significant difference between the teacher participants' instructional strategy preferences and the student participants' preferences, as well as between the teacher participants' extent of use and the student participants' perceived level of use of the different instructional strategies. The findings thus reveal a discrepancy between the teaching strategy preferences of students and the strategies implemented by teachers.
Keywords: academic achievement, extent of use, instructional strategy, preferences
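Both named tests are available in SciPy for readers who want to reproduce this style of analysis. The sketch below runs them on invented Likert-style scores sized to the 61 teacher participants; the arrays are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

# The two tests named in the abstract, on illustrative 1-5 ratings.
rng = np.random.default_rng(0)
extent_of_use = rng.integers(1, 6, size=61)          # teachers' usage
teaching_perf = np.clip(extent_of_use
                        + rng.integers(-1, 2, size=61), 1, 5)

rho, p = spearmanr(extent_of_use, teaching_perf)
print(f"Spearman rho={rho:.2f}, p={p:.4f}")          # association test

teacher_pref = rng.integers(1, 6, size=61)           # paired preferences
student_pref = rng.integers(1, 6, size=61)
stat, p = wilcoxon(teacher_pref, student_pref)
print(f"Wilcoxon W={stat:.1f}, p={p:.4f}")           # paired difference
```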
Procedia PDF Downloads 313
376 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection
Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young
Abstract:
Presently, there are few cases of complete automation of drones and their allied intelligence capabilities; in essence, the potential of the drone has not yet been fully utilized. This paper presents feasible methods to build an intelligent drone with smart capabilities such as self-driving and obstacle avoidance. It does this through advanced reinforcement learning techniques and performs object detection using the latest algorithms, which are capable of running lightweight models with fast training in real time. For the scope of this paper, after researching and comparing the various algorithms, we implemented the Deep Q-Network (DQN) algorithm in the AirSim simulator. In future work, we plan to implement further advanced self-driving and object detection algorithms; we also plan to implement voice-based speech recognition for the entire drone operation, which would provide an option of speech communication between users (people) and the drone in times of unavoidable circumstances, thus making drones interactive, intelligent, robotic, voice-enabled service assistants. The proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring people's movements in public areas, and defense. Also discussed is basing the entire drone communication on satellite broadband Internet technology for faster computation and seamless, uninterrupted communication service during disasters and remote-location operations. This paper explains the feasible algorithms required to achieve this goal and serves as a reference for future researchers going down this path.
Keywords: convolution neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving
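The core DQN ingredients (a small Q-network, an epsilon-greedy policy, and one temporal-difference update) fit in a short sketch. The version below assumes a gym-style interface with a 10-dimensional state and 4 discrete actions; the AirSim wiring, reward design, and training loop are omitted.

```python
import random
from collections import deque

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small Q-network: state in, one Q-value per discrete action out."""
    def __init__(self, n_obs=10, n_act=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(),
                                 nn.Linear(64, n_act))

    def forward(self, x):
        return self.net(x)

q, q_target = QNet(), QNet()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buffer = deque(maxlen=10000)
gamma, eps = 0.99, 0.1

def act(state):
    if random.random() < eps:                 # epsilon-greedy exploration
        return random.randrange(4)
    return int(q(torch.as_tensor(state).float()).argmax())

def td_update(batch):
    """One temporal-difference step on a batch of transitions."""
    s, a, r, s2, done = [torch.as_tensor(x).float() for x in zip(*batch)]
    q_sa = q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * q_target(s2).max(1).values * (1 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Fake transitions (state, action, reward, next state, done) to
# exercise the update; a real loop would step the simulator here.
for _ in range(64):
    buffer.append(([0.0] * 10, act([0.0] * 10), 1.0, [0.0] * 10, 0.0))
print("loss:", td_update(random.sample(buffer, 32)))
```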
Procedia PDF Downloads 252
375 Personalizing Human Physical Life Routines Recognition over Cloud-based Sensor Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) from body-worn sensors such as MEMS-based technologies. The use of these technologies for human activity recognition is progressively increasing. At the same time, personalizing human life routines using machine-learning techniques has always been an intriguing topic. Various methods have demonstrated the ability to recognize basic movement patterns, but they still need improvement to anticipate the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting challenging activities from multi-fused sensors. Numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmark datasets. First, the acquired raw data are filtered with z-normalization and denoising methods. Then, statistical, local binary pattern, auto-regressive model, and intrinsic time-scale decomposition features are extracted from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate for the self-annotated dataset, while on HARTH and KU-HAR the system achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines, respectively. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)
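The front of the pipeline (z-normalization followed by windowed statistical features) can be sketched directly. The version below extracts a small subset of the listed features from synthetic tri-axial accelerometer data; the sampling rate and window length are assumptions.

```python
import numpy as np

def window_features(signal, fs=50, win_s=2.0):
    """Slice a z-normalized tri-axial MEMS signal into fixed windows
    and extract simple statistical features per axis (a small subset
    of the statistical/LBP/AR/ITD features listed in the abstract)."""
    sig = (signal - signal.mean(0)) / signal.std(0)   # z-normalization
    step = int(fs * win_s)
    feats = []
    for start in range(0, len(sig) - step + 1, step):
        w = sig[start:start + step]
        feats.append(np.hstack([w.mean(0), w.std(0),
                                np.abs(np.diff(w, axis=0)).mean(0)]))
    return np.array(feats)   # windows x (3 axes * 3 statistics)

rng = np.random.default_rng(0)
accel = rng.normal(size=(1500, 3))       # 30 s of 50 Hz accelerometer
print(window_features(accel).shape)      # (15, 9)
```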
Procedia PDF Downloads 22
374 Autophagy Acceleration and Self-Healing by the Revolution against Frequent Eating, High Glycemic and Unabsorbable Substances as One Meal a Day Plan
Authors: Reihane Mehrparvar
Abstract:
Human lifespan could be extended further by altering gene expression through food intake; however, as a consequence of the eating patterns of the recent century, human lifespan is getting shorter owing to emerging dysregulation of the autophagy mechanism, insulin, leptin, and the gut microbiota, which are important etiological factors in type-2 diabetes, obesity, infertility, cancer, and metabolic and autoimmune diseases. Restricted calorie intake and vigorous exercise might be beneficial for losing weight and for metabolic regulation over a short period, but they are not implementable in the long term as a way of life. A dietary program that is compatible with the genes of the body is therefore essential, and such a program is currently lacking. Sweet and high-glycemic-index (HGI) foods are associated with type-2 diabetes and cancer morbidity. The neuropsychological perspective characterizes the inclination toward sweet and HGI-food consumption as addictive behavior, a process that engages gut-microbiota preferences, neural nodes, and dopaminergic functions. Moreover, meal composition is not the only factor that affects body homeostasis. This narrative review attempts to investigate how the body responds to different patterns of food intake and to present an accurate model based on current evidence. Eating frequently and ingesting unassimilable protein and carbohydrates may not be compatible with human genes and could impair the self-renovation mechanism; this trajectory indicates that our body is better adapted to starvation and to eating animal meat and marrow. A model is recommended here that takes into account three important factors: eating frequency, meal composition, and circadian rhythm. It may offer a promising intervention for obesity, inflammation, cardiovascular and autoimmune disorders, type-2 diabetes, insulin resistance, infertility, and cancer by intensifying the autophagy mechanism, and it may eliminate medical costs.
Keywords: metabolic disease, anti-aging, type-2 diabetes, autophagy
Procedia PDF Downloads 81
373 Automated, Objective Assessment of Pilot Performance in Simulated Environment
Authors: Maciej Zasuwa, Grzegorz Ptasinski, Antoni Kopyt
Abstract:
Nowadays, flight simulators offer tremendous possibilities for safe and cost-effective pilot training through the utilization of powerful computational tools. Because technology has outpaced methodology, the vast majority of training-related work is still done by human instructors, which makes assessment inefficient and vulnerable to instructors' subjectivity. This research presents an Objective Assessment Tool (gOAT) developed at the Warsaw University of Technology and tested on an SW-4 helicopter flight simulator. The tool uses a database of predefined manoeuvres integrated into the virtual environment. These were implemented on the basis of the Aeronautical Design Standard Performance Specification, Handling Qualities Requirements for Military Rotorcraft (ADS-33), with predefined Mission-Task-Elements (MTEs). The core element of gOAT is an enhanced algorithm that provides the instructor with a new set of information: objective flight parameters fused with a report on the psychophysical state of the pilot. While the pilot performs the task, the gOAT system automatically calculates performance using the embedded algorithms, the data registered by the simulator software (position, orientation, velocity, etc.), and measurements of changes in the pilot's psychophysiological state (temperature, sweating, heart rate). The complete set of measurements is presented online at the instructor's station in a dedicated graphical interface. The tool is based on open-source solutions and is flexible to edit: additional manoeuvres can easily be added using a guide developed by the authors, and MTEs can be changed by the instructor even during an exercise. The algorithm and measurements used allow not only basic stress-level measurement but also a significant reduction of the instructor's workload. The tool can be used for training purposes as well as for periodic checks of the aircrew. Its flexibility and ease of modification allow wide-ranging further development and customization: depending on the simulation purpose, gOAT can be adjusted to support simulators of aircraft, helicopters, or unmanned aerial vehicles (UAVs).
Keywords: automated assessment, flight simulator, human factors, pilot training
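To illustrate the kind of automatic scoring the abstract describes, the sketch below grades one hover MTE against desired/adequate tolerance bands in the spirit of ADS-33. The 1 m / 3 m tolerances, the three-level grading, and the log format are assumptions, not the gOAT internals.

```python
# Illustrative automatic grading of one hover MTE from logged simulator data,
# in the spirit of ADS-33 desired/adequate tolerance bands. The tolerances,
# grading levels, and log format are assumptions, not the gOAT internals.
import numpy as np

def score_hover(log, desired=1.0, adequate=3.0):
    """Grade peak position deviation per axis: desired / adequate / failed."""
    grades = {}
    for axis in ("x", "y", "z"):
        dev = np.max(np.abs(np.asarray(log[axis]) - log[f"{axis}_ref"]))  # peak deviation [m]
        grades[axis] = ("desired" if dev <= desired
                        else "adequate" if dev <= adequate else "failed")
    return grades

# stand-in log: 10 s of 50 Hz position data around a fixed reference hover point
t = np.linspace(0.0, 10.0, 500)
log = {a: 0.5 * np.sin(t + p) for a, p in (("x", 0.0), ("y", 1.0), ("z", 2.0))}
log.update({f"{a}_ref": 0.0 for a in ("x", "y", "z")})
print(score_hover(log))   # all axes stay within 1 m -> 'desired'
```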
Procedia PDF Downloads 150
372 Syntheses in Polyol Medium of Inorganic Oxides with Various Smart Optical Properties
Authors: Shian Guan, Marie Bourdin, Isabelle Trenque, Younes Messaddeq, Thierry Cardinal, Nicolas Penin, Issam Mjejri, Aline Rougier, Etienne Duguet, Stephane Mornet, Manuel Gaudon
Abstract:
At the interface of the studies performed by three Ph.D. students, Shian Guan (2017-2020), Marie Bourdin (2016-2019), and Isabelle Trenque (2012-2015), a single synthesis route, the polyol-mediated process, was used successfully for the preparation of different inorganic oxides. All of these inorganic oxides were elaborated for their potential application as smart optical compounds. This synthesis route has allowed us to develop nanoparticles of zinc oxide, vanadium oxide, and tungsten oxide. The route is easy to implement, inexpensive, suitable for large-scale production, and leads to materials of high purity. Obtaining nanometric yet perfectly crystalline particles by this route has notably made it possible to dope these host materials with high doping-ion concentrations (high solubility limits). Thus, Al3+- or Ga3+-doped ZnO powders, with doping rates high in comparison with the literature, exhibit remarkable infrared absorption properties thanks to their high free-carrier density. Note also that, owing to the narrow particle-size distribution of the as-prepared nanometric doped-ZnO powder, an original correlation between crystallite size and unit-cell parameters has been established. Also, depending on the annealing atmosphere used to treat the vanadium precursors, VO2, V2O3, or V2O5 oxides with thermochromic or electrochromic properties can be obtained without any impurity, despite the versatility of the oxidation state of vanadium. This is of particular interest for vanadium dioxide, a relatively difficult-to-prepare oxide whose first-order metal-insulator phase transition is widely explored in the literature for its thermochromic behavior (in smart windows with optimal thermal insulation). Finally, the reducing nature of the polyol solvents ensures the production of oxygen-deficient tungsten oxide, conferring on the nano-powders exotic colorimetric properties as well as optimized photochromic and electrochromic behaviors.
Keywords: inorganic oxides, electrochromic, photochromic, thermochromic
Procedia PDF Downloads 221
371 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business
Authors: Kritchakhris Na-Wattanaprasert
Abstract:
The objective of this research is to design and develop a prototype of a key performance indicator (KPI) system suitable for warehouse management, based on a case study and its user requirements. The prototype was developed for the warehouse of a furniture business through the following steps: identifying the scope of the research and studying related papers; gathering the necessary data and user requirements; developing key performance indicators based on the balanced scorecard; designing the program and database for the key performance indicators; coding the program and setting up the database relationships; and finally testing and debugging each module. The study uses the Balanced Scorecard (BSC) for selecting and grouping key performance indicators. Microsoft SQL Server 2010 is used to create the system database; as the visual-programming language, Microsoft Visual C# 2010 is chosen as the graphical-user-interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal-process perspective, and learning-and-growth perspective. Each menu consists of key performance indicator forms, and each form contains a data-import section, a data-input section, a data search-and-edit section, and a report section. The system generates five main reports: KPI detail reports, a KPI summary report, a KPI graph report, a benchmarking summary report, and a benchmarking graph report; the user selects the report conditions and time period. Development and testing showed that the system is one way of judging the extent to which warehouse objectives have been achieved, and that it encourages warehouse functions to proceed more efficiently. The system can be adjusted appropriately to be useful for other industries. To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future, and it should likewise periodically review the key performance indicators themselves and replace them with more suitable ones to increase competitiveness and take advantage of new opportunities.
Keywords: key performance indicator, warehouse management, warehouse operation, logistics management
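As a concrete, simplified view of the data model implied above, the sketch below groups KPI rows by BSC perspective and computes an achievement summary. The schema, names, and sample values are assumptions for illustration; the actual system uses Microsoft SQL Server 2010 with a C# front end, not SQLite.

```python
# A simplified view of a BSC-grouped KPI table and summary report. The schema,
# names, and sample values are assumptions for illustration; the original
# system uses Microsoft SQL Server 2010 with a C# front end, not SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE kpi (
  name        TEXT,
  perspective TEXT,   -- financial / customer / internal process / learning & growth
  period      TEXT,
  target      REAL,   -- all sample KPIs here are higher-is-better percentages
  actual      REAL
);
INSERT INTO kpi VALUES
  ('inventory accuracy',    'internal process', '2024-Q1', 98.0, 96.5),
  ('order fill rate',       'customer',         '2024-Q1', 95.0, 97.2),
  ('on-time dispatch rate', 'internal process', '2024-Q1', 90.0, 92.3);
""")

# KPI summary report: average achievement vs. target per BSC perspective
for row in con.execute("""
    SELECT perspective, ROUND(AVG(actual / target) * 100, 1) AS achievement_pct
    FROM kpi WHERE period = ? GROUP BY perspective""", ("2024-Q1",)):
    print(row)
```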
Procedia PDF Downloads 431
370 AI for Efficient Geothermal Exploration and Utilization
Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson
Abstract:
Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and can exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, geology, etc. Machine learning algorithms can identify subtle patterns and relationships within these data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a Science-Informed Machine Learning (SIML) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physical laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.
Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal
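One common way to approximate the science-informed idea is a physics-constrained training loss, sketched below for a borehole temperature profile regularized toward Fourier's law q = -k dT/dz. The conductivity, heat flux, synthetic data, and loss weighting are illustrative assumptions; the SIML approach described above goes further by embedding such constitutive relations inside the neurons themselves rather than only in the loss.

```python
# Sketch of a physics-constrained loss: a small network fits a borehole
# temperature profile while penalizing deviations from Fourier's law. All
# constants and the synthetic data are assumed values; SIML as described in
# the abstract embeds such relations inside the network itself.
import torch
import torch.nn as nn

k_cond, q_obs = 2.5, 0.065        # assumed: W/(m K) conductivity, W/m^2 heat flux
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

z = torch.linspace(0.0, 3000.0, 200).unsqueeze(1).requires_grad_(True)  # depth [m]
T_obs = 15.0 + (q_obs / k_cond) * z.detach()        # synthetic temperature "data"

for step in range(2000):
    T = net(z / 3000.0)                             # network sees normalized depth
    dTdz, = torch.autograd.grad(T.sum(), z, create_graph=True)
    data_loss = nn.functional.mse_loss(T, T_obs)
    physics_loss = ((k_cond * dTdz - q_obs) ** 2).mean()   # Fourier's-law residual
    loss = data_loss + 0.1 * physics_loss
    opt.zero_grad(); loss.backward(); opt.step()
```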
Procedia PDF Downloads 56
369 Changing Emphases in Mental Health Research Methodology: Opportunities for Occupational Therapy
Authors: Jeffrey Chase
Abstract:
Historically, the profession of Occupational Therapy was closely tied to the treatment of those suffering from mental illness; more recently, and especially in the U.S., the percentage of OTs identifying as working in the mental health area has declined significantly, despite the estimate that by 2020 behavioral health disorders would surpass physical illnesses as the major cause of disability worldwide. In the U.S., less than 10% of OTs identify themselves as working with the mentally ill and/or practicing in mental health settings. Such a decline has implications both for those suffering from mental illness and for the profession of Occupational Therapy. One reason cited for the decline of OT in mental health has been the limited research in the discipline addressing mental health practice. Despite significant advances in technology and growth in the field of neuroscience, major institutions and funding sources such as the National Institute of Mental Health (NIMH) have noted that research into the etiology and treatment of mental illness has met with limited success over the past 25 years. One major reason posited by NIMH is that research has been limited by how we classify individuals, namely, mostly by what is observable. A new classification system being developed by NIMH, the Research Domain Criteria (RDoC), aims to look beyond mere descriptors of disorders for common neural, genetic, and physiological characteristics that cut across multiple supposedly separate disorders. The hope is that, by classifying individuals along RDoC measures, both reliability and validity will improve, resulting in greater advances in the field. As a result of this change, NIH and NIMH will prioritize research funding for projects using the RDoC model. Multiple disciplines across many different settings will be required for RDoC or similar classification systems to be developed. During this shift in research methodology, OT has an opportunity to reassert itself into the research and treatment of mental illness, both by developing new ways to classify individuals more validly and by documenting the legitimacy of previously ill-defined and poorly validated disorders such as sensory integration.
Keywords: global mental health and neuroscience, research opportunities for OT, greater integration of OT in mental health research, research and funding opportunities, research domain criteria (RDoC)
Procedia PDF Downloads 275
368 Fraud in the Higher Educational Institutions in Assam, India: Issues and Challenges
Authors: Kalidas Sarma
Abstract:
Fraud is a social problem that changes with social change, and it has both regional and global impact. The introduction of private players into higher education alongside public institutions has led to the commercialization of higher education, encouraging an unprecedented mushrooming of private institutions and, in turn, fraudulent activities in the higher educational institutions of Assam, India. Presently, fraud has been noticed in in-service promotions and in fake entry qualifications, with teachers at different levels of the workplace using fake master's, Master of Philosophy, and Doctor of Philosophy degree certificates. The aim of the study is to identify grey areas in the maintenance of quality in higher educational institutions in Assam and to draw the contours for planning and implementation. The study is based on both primary and secondary data, collected through questionnaires and through information sought under the Right to Information Act, 2005. In Assam, there are 301 undergraduate and graduate colleges distributed across 27 administrative districts, with 11,000 college teachers. In total, 421 college teachers from the 14 respondent colleges were taken for analysis. The collected data were analyzed using a PHP (Hypertext Preprocessor) application with MySQL and the Google Maps Application Programming Interface (API). Graphs were generated using the open-source tool Chart.js, and spatial distribution maps were generated with the help of the geo-references of the colleges. The results show that: (i) violations of the University Grants Commission's (UGC's) regulations for the award of M.Phil./Ph.D. degrees are clearly evident; (ii) there is a gap between the apex regulatory bodies of higher education at the national and state levels in checking fraud; (iii) mala fide 'No Objection Certificates' (NOCs) issued by the Government of Assam have played a pivotal role in the occurrence of fraudulent practices in the higher educational institutions of Assam; and (iv) violation of the verdict of the Hon'ble Supreme Court of India regarding the territorial jurisdiction of universities for awarding Ph.D. and M.Phil. degrees through distance mode/study centres is also a responsible factor in the spread of these academic frauds in Assam and other states. The challenges and the mitigation of these issues are discussed.
Keywords: Assam, fraud, higher education, mitigation
Procedia PDF Downloads 169
367 Hydrothermal Aging Behavior of Continuous Carbon Fiber Reinforced Polyamide 6 Composites
Authors: Jifeng Zhang, Yongpeng Lei
Abstract:
Continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites are promising for application in the automotive industry due to their high specific strength and stiffness. However, PA6 resin is sensitive to moisture, and in a hydrothermal environment CF/PA6 composites can undergo several physical and chemical changes, such as plasticization, swelling, and hydrolysis, which induce a reduction in mechanical properties. So far, little research has been reported on assessing the effects of hydrothermal aging on the mechanical properties of continuous CF/PA6 composites. This study deals with the effects of hydrothermal aging on the moisture absorption and mechanical properties of polyamide 6 (PA6) and continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites immersed in distilled water at 30 °C, 50 °C, 70 °C, and 90 °C. Degradation of mechanical performance was monitored as a function of water absorption content and aging temperature. The experimental results reveal that, under the same aging conditions, the PA6 resin absorbs more water than the CF/PA6 composite, while the water diffusion coefficient of the CF/PA6 composite is higher than that of the PA6 resin because of interfacial diffusion channels. During mechanical degradation, an exponential reduction in tensile strength and elastic modulus is observed in the PA6 resin as aging temperature and water absorption content increase. The degradation trend of the flexural properties of CF/PA6 follows that of the tensile properties of the PA6 resin. Moreover, water content plays a more decisive role in mechanical degradation than aging temperature. In contrast, the hydrothermal environment has only a mild effect on the tensile properties of the CF/PA6 composites. The elongation at break of the PA6 resin and of CF/PA6 reaches its highest value when their water contents reach 6% and 4%, respectively. Dynamic mechanical analysis (DMA) and scanning electron microscopy (SEM) were also used to explain the mechanisms behind these changes in mechanical properties. After exposure to the hydrothermal environment, the glass transition temperature (Tg) of the samples decreases dramatically as water content increases; this reduction can be ascribed to the plasticization effect of water. For the unaged specimens, the fiber surfaces are coated with resin and the main fracture mode is fiber breakage, indicating good adhesion between fiber and matrix. However, as the absorbed water content increases, the fracture mode changes to fiber pull-out. Finally, based on the Arrhenius methodology, a predictive model relating temperature and water content is presented to estimate the retention of mechanical properties for PA6 and CF/PA6.
Keywords: continuous carbon fiber reinforced polyamide 6 composite, hydrothermal aging, Arrhenius methodology, interface
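The Arrhenius methodology named in the closing sentence is sketched below: strength retention is modeled as a first-order decay whose rate constant follows k(T) = A exp(-Ea/RT) across the four immersion temperatures. The activation energy, pre-exponential factor, and the first-order retention form are assumed for illustration, not the paper's fitted values.

```python
# Hedged sketch of an Arrhenius-style retention model: strength retention
# decays first-order with rate k(T) = A * exp(-Ea / (R * T)). The activation
# energy, prefactor, and decay form are illustrative assumptions, not the
# paper's fitted values.
import numpy as np

R = 8.314            # gas constant [J/(mol K)]
Ea = 55e3            # assumed activation energy [J/mol]
A = 5e4              # assumed pre-exponential factor [1/h]

def rate(T_celsius):
    """Arrhenius rate constant at the immersion temperature."""
    return A * np.exp(-Ea / (R * (T_celsius + 273.15)))

def retention(t_hours, T_celsius):
    """Fraction of initial strength retained after t hours in water at T."""
    return np.exp(-rate(T_celsius) * t_hours)

for T in (30, 50, 70, 90):   # the four immersion temperatures studied
    print(f"{T} C: k = {rate(T):.2e} 1/h, retention after 500 h = {retention(500, T):.2f}")
```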
Procedia PDF Downloads 122