Search results for: quantum computation
226 Insight into the Electrocatalytic Activities of Nitrogen-Doped Graphyne and Graphdiyne Families: A First-Principles Study
Authors: Bikram K. Das, Kalyan K. Chattopadhyay
Abstract:
The advent of 2-D materials in the last decade has induced a fresh spurt of growth in fuel cell technology, as these materials have some highly promising traits that can be exploited to facilitate the Oxygen Reduction Reaction (ORR) in an efficient way. Among the various 2-D carbon materials, graphyne (Gy) and graphdiyne (Gdy), with their intrinsic non-uniform charge distribution, hold promise for this purpose, and it is expected that substitutional Nitrogen (N) doping could further enhance their efficiency. In this regard, dispersive-force-corrected density functional theory is used to map the oxygen reduction reaction (ORR) kinetics of five different kinds of N-doped graphyne (namely αGy, βGy, γGy, RGy and 6,6,12Gy) and graphdiyne (Gdy) in alkaline medium. The best doping site for each Gy/Gdy system is determined by comparing the formation energies of the possible doping configurations. Similarly, the best di-oxygen (O₂) adsorption sites for the doped systems are identified by comparing the adsorption energies. O₂ adsorption on all N-doped Gy/Gdy systems is found to be energetically favorable. ORR on a catalyst surface may occur either via the Eley-Rideal (ER) or the Langmuir-Hinshelwood (LH) pathway. Systematic studies performed on the considered systems reveal that all of them favor the ER pathway. Further, depending on the nature of di-oxygen adsorption, ORR can follow either an associative or a dissociative mechanism; the possibility of occurrence of both mechanisms is tested thoroughly for each N-doped Gy/Gdy. For the ORR process, all the Gy/Gdy systems are observed to prefer the efficient four-electron pathway, but the expected monotonically exothermic reaction pathway is found only for N-doped 6,6,12Gy and RGy following the associative mechanism and for N-doped βGy, γGy and Gdy following the dissociative mechanism.
Further computation performed for these systems reveals that for N-doped 6,6,12Gy, RGy, βGy, γGy and Gdy the overpotentials are 1.08 V, 0.94 V, 1.17 V, 1.21 V and 1.04 V respectively, indicating that N-doped RGy is the most promising material among the considered ones for carrying out ORR in alkaline medium. The stability of the ORR intermediate states with the variation of pH and electrode potential is further explored with Pourbaix diagrams, and the activities of these systems in the alkaline medium are compared, in terms of a common descriptor, with the previously reported B/N-doped identical systems for ORR in an acidic medium.
Keywords: graphdiyne, graphyne, nitrogen-doped, ORR
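The overpotentials quoted above come from a thermodynamic analysis of the four electron-transfer steps. A minimal sketch of that calculation in the computational-hydrogen-electrode style is below; the step free energies are illustrative placeholders (not taken from the paper), chosen only so that the result reproduces the 0.94 V reported for N-doped RGy.

```python
# Thermodynamic ORR overpotential, computational-hydrogen-electrode style.
# The dG values are HYPOTHETICAL, chosen to reproduce the reported 0.94 V.

E_EQ = 1.23  # equilibrium potential of the 4-electron ORR vs. RHE (V)

# Free-energy change of each electrochemical step at U = 0 V (eV).
# Each step transfers one electron, so dG_i(U) = dG_i(0) + e*U.
dG_steps = [-1.60, -1.30, -0.29, -1.73]  # illustrative; sum = -4 * 1.23 eV

# The limiting potential is set by the least exothermic step: raising the
# electrode potential U makes every reduction step less favorable, and the
# weakest step turns uphill first.
U_limiting = min(-dG for dG in dG_steps)  # V

overpotential = E_EQ - U_limiting  # V
print(f"limiting potential = {U_limiting:.2f} V, overpotential = {overpotential:.2f} V")
```

With these placeholder values the least exothermic step (-0.29 eV) caps the limiting potential at 0.29 V, giving an overpotential of 1.23 - 0.29 = 0.94 V.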
Procedia PDF Downloads 128
225 Investigation of Polymer Solar Cells Degradation Behavior Using High Defect States Influence Over Various Polymer Absorber Layers
Authors: Azzeddine Abdelalim, Fatiha Rogti
Abstract:
The degradation phenomenon in polymer solar cells (PSCs) has not been clearly explained yet. In fact, there are many causes that show up and influence these cells in a variety of ways, and there has been growing concern over this degradation in the photovoltaic community. One of the main variables determining PSC photovoltaic output is defect states. In this research, device modeling is carried out to analyze the multiple effects of degradation by applying high defect states (HDS) to ideal PSCs, mainly with a poly(3-hexylthiophene) (P3HT) absorber layer. Besides, a comparative study is conducted between P3HT and other PSCs using a simulation program called Solar Cell Capacitance Simulator (SCAPS). The adjustments to the defect parameters in several absorber layers explain the effect of HDS on the total output properties of PSCs. The performance parameters under HDS, the quantum efficiency, and the energy band were therefore examined. This research attempts to explain the degradation process of PSCs and the causes of their low efficiency. It was found that defects often affect PSC performance, but defect states have little effect on output when the defect density is less than 10¹⁴ cm⁻³, and give performance values similar to those of P3HT cells when the defect density is about 10¹⁹ cm⁻³. The high defect states can cause up to an 11% relative reduction in the conversion efficiency of ideal P3HT. In the center of the band gap, defect states become more noxious. This approach addresses one of the potential degradation processes of PSCs, especially those that use fullerene-derivative acceptors.
Keywords: degradation, high defect states, polymer solar cells, SCAPS-1D
Procedia PDF Downloads 91
224 Quasiperiodic Magnetic Chains as Spin Filters
Authors: Arunava Chakrabarti
Abstract:
A one-dimensional chain of magnetic atoms, representative of a quantum gas in an artificial quasi-periodic potential and modeled by the well-known Aubry-Andre function and its variants, is studied in respect of its capability of working as a spin filter for arbitrary spins. The basic formulation is explained in terms of a perfectly periodic chain first, where it is shown that a definite correlation between the spin S of the incoming particles and the magnetic moment h of the substrate atoms can open up a gap in the energy spectrum. This is crucial for a spin-filtering action. The simple one-dimensional chain is shown to be equivalent to a 2S+1 strand ladder network. This equivalence is exploited to work out the condition for the opening of gaps. The formulation is then applied to a one-dimensional chain with quasi-periodic variation in the site potentials, the magnetic moments and their orientations following an Aubry-Andre modulation and its variants. In addition, we show that a certain correlation between the system parameters can generate absolutely continuous bands in such systems, populated by Bloch-like extended wave functions only, signaling the possibility of a metal-insulator transition. This is a case of correlated (deterministic) disorder, and the results provide a non-trivial variation on the famous Anderson localization problem. We have worked within a tight-binding formalism and have presented explicit results for spin-half, spin-one, spin-three-halves and spin-five-halves particles incident on the magnetic chain to explain our scheme and the central results.
Keywords: Aubry-Andre model, correlated disorder, localization, spin filter
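For readers unfamiliar with the Aubry-Andre modulation used above, a minimal spinless tight-binding sketch is below. It is not the authors' spin-dependent model: it only shows how the quasi-periodic on-site potential V_n = λ cos(2πβn) enters a chain Hamiltonian; all parameter values are illustrative.

```python
import numpy as np

# Minimal tight-binding Aubry-Andre chain (spinless sketch, not the
# paper's spin-dependent model): nearest-neighbour hopping -t plus a
# quasi-periodic on-site potential V_n = lam * cos(2*pi*beta*n), with
# beta irrational so the potential never repeats.

N = 200                       # chain length (illustrative)
t = 1.0                       # hopping amplitude
lam = 2.0                     # modulation strength; lam = 2t is the
                              # self-dual localization transition point
beta = (np.sqrt(5) - 1) / 2   # inverse golden ratio (irrational)

n = np.arange(N)
H = np.diag(lam * np.cos(2 * np.pi * beta * n))      # on-site terms
H += -t * (np.eye(N, k=1) + np.eye(N, k=-1))          # hopping terms

energies = np.linalg.eigvalsh(H)   # spectrum of the finite open chain
print(energies.min(), energies.max())
```

Diagonalizing H for λ below and above 2t shows the spectrum fragmenting and the eigenstates localizing, the single-particle backdrop for the correlated-disorder results described in the abstract.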
Procedia PDF Downloads 356
223 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection
Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young
Abstract:
Presently, there are few cases of complete automation of drones and their allied intelligence capabilities. In essence, the potential of the drone has not yet been fully utilized. This paper presents feasible methods to build an intelligent drone with smart capabilities such as self-driving and obstacle avoidance. It does this through advanced Reinforcement Learning techniques and performs object detection using recent algorithms that are capable of processing lightweight models with fast training in real-time instances. For the scope of this paper, after researching the various algorithms and comparing them, we implemented the Deep Q-Network (DQN) algorithm in the AirSim simulator. In future work, we plan to implement further advanced self-driving and object detection algorithms; we also plan to implement voice-based speech recognition for the entire drone operation, which would provide an option of speech communication between users (people) and the drone in unavoidable circumstances, thus making drones interactive, intelligent, robotic, voice-enabled service assistants. This proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring people's movements in public areas, and defense. Also discussed is the entire drone communication system based on satellite broadband Internet technology, for faster computation and seamless communication service providing an uninterrupted network during disasters and remote-location operations. This paper explains the feasible algorithms required to achieve this goal and serves as a reference for future researchers going down this path.
Keywords: convolution neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving
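The DQN algorithm mentioned above approximates a Q-value table with a neural network; the underlying Bellman update is easiest to see in tabular form. The sketch below is a deliberately simplified stand-in (a toy 1-D "corridor" instead of AirSim, a table instead of a network); all environment details are invented for illustration.

```python
import random

# Tabular Q-learning on a toy 1-D corridor: states 0..4, goal at state 4,
# actions 0 = left, 1 = right. DQN replaces the table Q with a neural
# network, but the update rule in the inner loop is the same.

random.seed(0)
N_STATES, GOAL = 5, 4
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

for _ in range(500):                       # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else -0.01   # small step cost, reward at goal
        # Bellman update (the rule DQN implements with gradient descent)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N_STATES)]
print(policy)  # greedy action per state
```

After training, the greedy policy moves right from every non-goal state, the analogue of the drone learning a collision-free path toward its target.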
Procedia PDF Downloads 251
222 Optimization of the Conditions of Electrophoretic Deposition Fabrication of Graphene-Based Electrodes to Consider Applications in Electro-Optical Sensors
Authors: Sepehr Lajevardi Esfahani, Shohre Rouhani, Zahra Ranjbar
Abstract:
Graphene has gained much attention owing to its unique optical and electrical properties. Charge carriers in graphene sheets (GS) obey a linear dispersion relation near the Fermi energy and behave as massless Dirac fermions, resulting in unusual attributes such as the quantum Hall effect and the ambipolar electric field effect. Graphene also exhibits nondispersive transport characteristics with an extremely high electron mobility (15,000 cm²/(V·s)) at room temperature. Recently, considerable progress has been achieved in the fabrication of single- or multilayer GS for functional device applications in the field of optoelectronics, such as field-effect transistors, ultrasensitive sensors and organic photovoltaic cells. In addition to device applications, graphene can also serve as a reinforcement to enhance the mechanical, thermal, or electrical properties of composite materials. Electrophoretic deposition (EPD) is an attractive method for the development of various coatings and films. It is readily applied to any powdered solid that forms a stable suspension, and the deposition parameters can be controlled to obtain various thicknesses. In this study, the graphene electrodeposition conditions were optimized. The results were obtained from SEM, sheet-resistance measurements and AFM characterization. The minimum sheet resistance of the electrodeposited reduced graphene oxide layers is achieved at a deposition condition of 2 V for 10 s, annealed at 200 °C for 1 minute.
Keywords: electrophoretic deposition (EPD), graphene oxide (GO), electrical conductivity, electro-optical devices
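The abstract's sheet-resistance figure of merit is commonly obtained with a collinear four-point probe; the abstract does not specify the technique, so the relation and the numbers below are a hedged illustration, not data from the paper.

```python
import math

# Sheet resistance from a collinear four-point-probe measurement.
# R_s = (pi / ln 2) * V / I holds for a thin film much larger than the
# probe spacing. The V and I values are ILLUSTRATIVE, not measured data.

def sheet_resistance(V, I):
    """Sheet resistance in ohm/sq from voltage drop V (V) and current I (A)."""
    return (math.pi / math.log(2)) * V / I

Rs = sheet_resistance(V=1.0e-3, I=1.0e-3)  # hypothetical 1 mV drop at 1 mA
print(f"R_s = {Rs:.3f} ohm/sq")
```

The geometric prefactor π/ln 2 ≈ 4.532 is what distinguishes a sheet-resistance measurement from a plain two-terminal ohm reading.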
Procedia PDF Downloads 190
221 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data
Authors: Saeid Gharechelou, Ryutaro Tateishi
Abstract:
Earthquakes are inevitable catastrophic natural disasters. Damage to buildings and man-made structures, where most human activities occur, is the major cause of casualties from earthquakes. A comparison of optical and SAR data is presented for the case of the Kathmandu valley, which was severely shaken by the 2015 Nepal earthquake. Though many existing studies have conducted optical-data-based estimation or have suggested the combined use of optical and SAR data for improved accuracy, finding cloud-free optical images when they are urgently needed is not assured. Therefore, this research specializes in developing a SAR-based technique with the target of rapid and accurate geospatial reporting, considering that the limited time available in a post-disaster situation calls for quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. The InSAR coherence between pre-seismic, co-seismic and post-seismic acquisitions was used to detect changes in the damaged area. In addition, ground-truth data collected in the field were applied to the optical data through random-forest classification for detection of the damaged area, and were also used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integrated optical-SAR data. Since the availability of cloud-free images when urgently needed for an earthquake event is not assured, further research on improving SAR-based damage detection is suggested. Availability of very accurate damage information is essential for channeling rescue and emergency operations. It is expected that quick reporting of the post-disaster damage situation, quantified by rapid earthquake assessment, should assist in channeling rescue and emergency operations and in informing the public about the scale of damage.
Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid damage monitoring, 2015-Nepal earthquake
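The InSAR coherence used above to flag damaged areas is the normalized cross-correlation of two co-registered SLC images; a coherence drop between the pre-seismic and co-seismic pair marks change. A sketch with synthetic data (not the authors' Sentinel-1A processing chain) is:

```python
import numpy as np

# Complex interferometric coherence between two co-registered SLC patches.
# gamma = |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2), in [0, 1].

def coherence(s1, s2):
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(42)
shape = (64, 64)                                   # synthetic SLC patch
s1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)

g_stable = coherence(s1, s1)                       # unchanged scene
s2 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
g_changed = coherence(s1, s2)                      # fully decorrelated scene

print(g_stable, g_changed)
```

An unchanged scene gives coherence 1; a collapsed or heavily disturbed surface decorrelates the signal and drives coherence toward 0, which is the detection signal.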
Procedia PDF Downloads 172
220 On Cold Roll Bonding of Polymeric Films
Authors: Nikhil Padhye
Abstract:
Recently, a new phenomenon for bonding polymeric films in the solid state, at ambient temperatures well below the glass transition temperature of the polymer, has been reported. This is achieved by bulk plastic compression of polymeric films held in contact. Here we analyze the process of cold-rolling of polymeric films via finite element simulations and illustrate a flexible and modular experimental rolling apparatus that can achieve bonding of polymeric films through cold-rolling. Firstly, the classical theory of rolling a rigid-plastic thin strip is utilized to estimate various deformation fields such as strain rates, velocities, loads, etc. in rolling the polymeric films at the specified feed rates and desired levels of thickness reduction. The predicted magnitudes of the slow strain rates, particularly at ambient temperatures during rolling, and the moderate levels of plastic deformation (at which the Bauschinger effect can be neglected for the particular class of polymeric materials studied here) greatly simplify the task of material modeling and allow us to deploy a computationally efficient, yet accurate, finite deformation rate-independent elastic-plastic material behavior model (with inclusion of isotropic hardening) for analyzing the rolling of these polymeric films. The interfacial behavior between the roller and polymer surfaces is modeled using Coulombic friction, consistent with the rate-independent behavior. The finite deformation elastic-plastic material behavior based on (i) the additive decomposition of the stretching tensor (D = De + Dp, i.e. a hypoelastic formulation) with incrementally objective time integration and (ii) the multiplicative decomposition of the deformation gradient (F = FeFp) into elastic and plastic parts is programmed and carried out for cold-rolling within ABAQUS Explicit. Predictions from both formulations, i.e., hypoelastic and multiplicative decomposition, exhibit a close match.
We find that no specialized hyperelastic/visco-plastic model is required to describe the behavior of the blend of polymeric films under the conditions described here, thereby speeding up the computation process.
Keywords: Polymer Plasticity, Bonding, Deformation Induced Mobility, Rolling
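The two kinematic formulations compared in the abstract can be summarized by their defining decompositions. This is a sketch of the standard finite-deformation definitions implied by the abstract's D = De + Dp and F = FeFp, not the authors' full constitutive model:

```latex
% Hypoelastic (rate) form: additive split of the stretching tensor,
% where L is the velocity gradient and D its symmetric part
\mathbf{L} = \dot{\mathbf{F}}\,\mathbf{F}^{-1}, \qquad
\mathbf{D} = \operatorname{sym}(\mathbf{L}) = \mathbf{D}^{e} + \mathbf{D}^{p}

% Finite-strain form: multiplicative split of the deformation gradient
% into elastic and plastic parts
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}
```

At the moderate plastic strains and slow strain rates quoted above, both splits integrate to nearly the same response, which is consistent with the close match the authors report between the two ABAQUS implementations.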
Procedia PDF Downloads 189
219 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach
Authors: Nwachukwu Ifeanyi
Abstract:
Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavours to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.
Keywords: computation, robotics, mathematics, simulation
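The power coefficient named as an optimization target above has a well-known theoretical ceiling. The sketch below uses textbook one-dimensional actuator-disc momentum theory (far simpler than the paper's CFD) only to make that target concrete:

```python
import numpy as np

# One-dimensional actuator-disc estimate of the rotor power coefficient,
# C_p(a) = 4a(1-a)^2, with axial induction factor a. Textbook momentum
# theory, included to illustrate the "power coefficient" objective; the
# paper's CFD/structural analysis is far richer than this.

a = np.linspace(0.0, 0.5, 5001)      # axial induction factor grid
Cp = 4 * a * (1 - a) ** 2            # power coefficient along the grid

i = np.argmax(Cp)
print(f"max C_p = {Cp[i]:.4f} at a = {a[i]:.3f} (Betz limit 16/27 = {16/27:.4f})")
```

The maximum lands at a = 1/3 with C_p = 16/27 ≈ 0.593, the Betz limit; real blade-geometry optimization of the kind proposed in the abstract works against this ceiling after drag, tip, and wake losses.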
Procedia PDF Downloads 58
218 Interactive Glare Visualization Model for an Architectural Space
Authors: Florina Dutt, Subhajit Das, Matthew Swartz
Abstract:
Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold and involves many subcomponents such as glare, color, tone, luminance, control, energy efficiency, and flexibility. While the other components have been researched and discussed multiple times, this paper discusses research done to understand the glare component from an artificial lighting source in an indoor space. Consequently, the paper discusses a parametric model to convey the real-time glare level in an interior space to the designer/architect. Our end users are architects, and for them it is of utmost importance to know what impression the proposed lighting arrangement and proposed furniture layout will have on indoor comfort quality. This especially involves those furniture elements (or surfaces) which strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at the early stage of the design cycle, when he can still afford to make changes to his proposed design and consider different routes of solution for his client. Unfortunately, most of the lighting analysis tools presently available offer rigorous computation and analysis on the back end, eventually making it challenging for the designer to analyze and know the glare from interior light quickly. Moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximate interior glare data. Adding to that, we visualize this data in a color-coded format, expressing the implications for the proposed interior design layout. We focus on making this analysis process very fluid and computationally fast, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user.
We test our proposed parametric model on a case study, a computer lab space in our college facility.
Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis
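One common way to turn the luminance quantities such a model computes into a single discomfort-glare index is the Unified Glare Rating. The paper does not state which metric it uses, so this is a hedged illustration of the standard UGR formula with made-up inputs, not data from the case study:

```python
import math

# Unified Glare Rating (UGR):
#   UGR = 8 * log10( (0.25 / L_b) * sum( L_i^2 * omega_i / p_i^2 ) )
# L_b: background luminance (cd/m^2); for each source i: luminance L_i,
# solid angle omega_i (sr) at the observer's eye, Guth position index p_i.
# All numbers below are ILLUSTRATIVE inputs, not case-study data.

def ugr(L_background, sources):
    """sources: list of (L, omega, p) tuples."""
    s = sum(L ** 2 * omega / p ** 2 for L, omega, p in sources)
    return 8 * math.log10(0.25 / L_background * s)

value = ugr(L_background=40.0, sources=[(2000.0, 0.01, 2.0)])
print(f"UGR = {value:.1f}")  # roughly: <13 imperceptible, >28 intolerable
```

Mapping the UGR value per view position onto a color scale is exactly the kind of color-coded glare visualization the abstract describes.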
Procedia PDF Downloads 350
217 Comparison of Methods for the Synthesis of Eu+++, Tb+++, and Tm+++ Doped Y2O3 Nanophosphors by Sol-Gel and Hydrothermal Methods for Bioconjugation
Authors: Ravindra P. Singh, Drupad Ram, Dinesh K. Gupta
Abstract:
Rare earth ion doped metal oxides are a class of luminescent materials which have proved to be excellent for applications in field emission displays, cathode ray tubes, and plasma display panels. Under UV irradiation, Eu+++ doped Y2O3 is a red phosphor and Tb+++ doped Y2O3 is a green phosphor. It is possible that, due to their high quantum efficiency, they might serve as improved luminescent markers for the identification of biomolecules, as already reported for CdSe and CdSe/ZnS nanocrystals. However, for any biological application these particle powders must be suspended in water while retaining their phosphorescence. We hereby report the synthesis and characterization of Eu+++ and Tb+++ doped yttrium oxide nanoparticles by sol-gel and hydrothermal processes. Eu+++ and Tb+++ doped Y2O3 nanoparticles have been synthesized by a hydrothermal process using yttrium oxo isopropoxide [Y5O(OPri)13] (crystallized twice) and its acetylacetone-modified product [Y(O)(acac)] as precursors. Generally, sol-gel derived metal oxides must be annealed at temperatures ranging from 400°C-800°C in order to develop crystalline phases. However, this annealing also results in the development of aggregates, which are undesirable for bioconjugation experiments. In the hydrothermal process, we have achieved crystallinity of the nanoparticles at 300°C, and the development of the crystalline phases has been found to be proportional to the heating time in the reactor. The average particle sizes as calculated from XRD were found to be 28 nm, 32 nm, and 34 nm by the hydrothermal process. The particles were successfully suspended in chloroform in the presence of trioctylphosphine oxide, and TEM investigations showed the presence of single particles along with agglomerates.
Keywords: nanophosphors, Y2O3:Eu+3, Y2O3:Tb+3, sol-gel, hydrothermal method, TEM, XRD
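Particle sizes "calculated from XRD" are usually obtained from peak broadening via the Scherrer equation; the abstract does not name the method, so the relation below is the customary choice and the peak width and angle are invented for illustration only:

```python
import math

# Scherrer estimate of crystallite size from XRD peak broadening:
#   D = K * lambda / (beta * cos(theta))
# K: shape factor (~0.9), lambda: X-ray wavelength, beta: peak FWHM in
# radians, theta: Bragg angle. The FWHM and 2-theta below are HYPOTHETICAL
# inputs chosen to land in the 28-34 nm range the abstract reports.

def scherrer(wavelength_nm, fwhm_rad, two_theta_deg, K=0.9):
    theta = math.radians(two_theta_deg / 2)
    return K * wavelength_nm / (fwhm_rad * math.cos(theta))

# Cu K-alpha radiation, a 0.005 rad (~0.29 deg) FWHM peak at 2theta = 30 deg:
D = scherrer(wavelength_nm=0.15406, fwhm_rad=0.005, two_theta_deg=30.0)
print(f"D ~= {D:.1f} nm")
```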
Procedia PDF Downloads 402
216 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
One of the common geophysical techniques for mapping subsurface geo-electrical structures, extensive hydro-geological research, and engineering and environmental geophysics applications is the use of time domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large loop TEM system consists of a large transmitter loop for energizing the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any arbitrary point inside or outside the source loop. In general, one can acquire data using one of several configurations with a large loop source, namely, with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, as compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contaminated by hazardous waste, and for estimating the thickness of the permafrost layer. Because a proper analytical expression for the TEM response over a layered earth model for the large loop TEM system does not exist, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain.
The forward calculation scheme in NLSTCI is accordingly modified with EMLCLLER to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine
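The frequency-to-time conversion step described above can be sketched on a toy system with a known analytic answer; this is only the transform idea, not the EMLCLLER/NLSTCI implementation. For H(ω) = 1/(1 + iωτ), with impulse response h(t) = (1/τ)e^{-t/τ}, the cosine-transform identity h(t) = (2/π)∫₀^∞ Re[H(ω)] cos(ωt) dω can be verified numerically:

```python
import numpy as np

# Fourier cosine transform from frequency to time, demonstrated on a
# system with known analytic response: H(w) = 1/(1 + i*w*tau), whose
# impulse response is h(t) = (1/tau) * exp(-t/tau). TEM codes apply the
# same transform to their layered-earth frequency-domain responses.

tau, t = 1.0, 1.0
w = np.linspace(0.0, 200.0, 200_001)          # truncated frequency grid (rad/s)
ReH = 1.0 / (1.0 + (w * tau) ** 2)            # Re[H(w)] for this system

# trapezoidal quadrature of (2/pi) * integral ReH(w) * cos(w*t) dw
integrand = ReH * np.cos(w * t)
dw = w[1] - w[0]
integral = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dw
h = (2.0 / np.pi) * integral

print(h, np.exp(-t / tau) / tau)              # numeric vs analytic
```

The numeric value matches the analytic e^{-1}/τ to within the truncation error of the frequency grid, which is the accuracy trade-off such schemes manage in practice.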
Procedia PDF Downloads 92
215 Milling Simulations with a 3-DOF Flexible Planar Robot
Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden
Abstract:
Manufacturing technologies are becoming continuously more diversified over the years. The increasing use of robots for various applications such as assembling, painting, welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine-tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure ensures them a great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints restricting their use to applications involving low cutting forces especially finishing operations. Vibratory instabilities may also happen while machining and deteriorate the precision leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator in order to find optimized cutting parameters in terms of depth of cut or feed per tooth for example. The simulation environment combines an in-house milling routine (DyStaMill) achieving the computation of cutting forces and material removal with an in-house multibody library (EasyDyn) which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector submitted to milling forces is controlled through an inverse kinematics scheme while controlling the position of its joints separately. Each joint is actuated through a servomotor for which the transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces when the robot structure is deformable or not and the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. 
The consideration of link flexibility has highlighted an increase in the magnitude of the cutting forces. This proof of concept aims to enrich the database of results in robotic machining for potential improvements in production.
Keywords: control, milling, multibody, robotic, simulation
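The inverse kinematics scheme mentioned above has a closed form for a rigid 3-DOF planar (3R) arm; the sketch below shows that baseline geometry (link lengths and target pose are illustrative, and the paper's simulator additionally handles link flexibility and joint dynamics):

```python
import math

# Closed-form inverse kinematics of a rigid 3-DOF planar (3R) arm:
# given end-effector position (x, y) and tool angle phi, find joint
# angles (t1, t2, t3). Link lengths and the pose are ILLUSTRATIVE.

def ik_3r(x, y, phi, L1, L2, L3, elbow=+1):
    xw = x - L3 * math.cos(phi)            # wrist point: subtract the
    yw = y - L3 * math.sin(phi)            # last link along the tool angle
    c2 = (xw**2 + yw**2 - L1**2 - L2**2) / (2 * L1 * L2)
    t2 = elbow * math.acos(max(-1.0, min(1.0, c2)))   # elbow up/down branch
    t1 = math.atan2(yw, xw) - math.atan2(L2 * math.sin(t2), L1 + L2 * math.cos(t2))
    return t1, t2, phi - t1 - t2

def fk_3r(t1, t2, t3, L1, L2, L3):
    a, b, c = t1, t1 + t2, t1 + t2 + t3    # cumulative link angles
    return (L1 * math.cos(a) + L2 * math.cos(b) + L3 * math.cos(c),
            L1 * math.sin(a) + L2 * math.sin(b) + L3 * math.sin(c))

L1, L2, L3 = 0.5, 0.4, 0.1                 # hypothetical link lengths (m)
t1, t2, t3 = ik_3r(0.6, 0.3, math.radians(30), L1, L2, L3)
x, y = fk_3r(t1, t2, t3, L1, L2, L3)
print(x, y)                                # forward-kinematics round trip
```

Feeding the milling path through such an IK while each joint controller tracks its angle separately is the control structure the abstract describes; the flexible-link case then adds the elastic deflections on top of this rigid solution.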
Procedia PDF Downloads 248
214 Urban Imperviousness and Its Impact on Storm Water Drainage Systems
Authors: Ratul Das, Udit Narayan Das
Abstract:
Surface imperviousness in urban areas brings significant changes in storm water drainage systems, and some recent studies reveal that the impervious surfaces that pass storm water runoff directly to drainage systems through storm water collection systems, called directly connected impervious area (DCIA), are a more effective parameter than total impervious area (TIA) for the computation of surface runoff. In the present study, the extents of DCIA and TIA were computed for a small sub-urban area of Agartala, the capital of the state of Tripura. Total impervious surfaces covering the study area were identified on the existing storm water drainage map from the land-use map of the study area in association with field assessments. Also, the DCIA assessed through the field survey was compared to the DCIA computed by empirical relationships provided by other investigators. For the assessment of DCIA in the study area, two methods were adopted. First, the study area was partitioned into four drainage sub-zones based on average basin slope and the layout of the existing storm water drainage systems. In the second method, the entire study area was divided into small grids, each grid or parcel comprising a 20 m × 20 m area. Total impervious surfaces were delineated from the land-use map in association with on-site assessments for efficient determination of DCIA within each sub-area and grid. There was a wide variation in the percent connectivity of TIA across the sub-drainage zones and grids. In the present study, total impervious area comprises 36.23% of the study area, of which 21.85% of the total study area is connected to storm water collection systems. Total pervious area (TPA) and other surfaces comprise 53.20% and 10.56% of the total area, respectively. The TIA recorded by field assessment (36.23%) was considerably higher than that calculated from the available land-use map (22%). From the analysis of the recorded data, it is observed that the average percentage of connectivity (% DCIA with respect to TIA) is 60.31%.
The analysis also reveals that the observed DCIA lies below the line of optimal impervious surface connectivity for a sub-urban area provided by other investigators, which indicates the probable reason for the waterlogging conditions in many parts of the study area during the monsoon period.
Keywords: drainage, imperviousness, runoff, storm water
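The connectivity figure quoted in the abstract follows directly from the two reported percentages, since percent connectivity is DCIA expressed as a share of TIA (both given relative to the total study area):

```python
# Cross-check of the reported connectivity figure using the percentages
# quoted in the abstract (both relative to the total study area).

TIA_pct = 36.23    # total impervious area, % of study area
DCIA_pct = 21.85   # directly connected impervious area, % of study area

connectivity = DCIA_pct / TIA_pct * 100   # % DCIA with respect to TIA
print(f"connectivity = {connectivity:.2f} %")
```

The result reproduces the 60.31% average connectivity stated in the abstract.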
Procedia PDF Downloads 350
213 Emptiness Downlink and Uplink Proposal Using Space-Time Equation Interpretation
Authors: Preecha Yupapin and Somnath
Abstract:
From the emptiness, the vibration induces the fractal, and the strings are formed, from which the first elementary particle groups, known as quarks, were established. The neutrino and electron are created by them. More elementary particles and life are formed by organic and inorganic substances. The universe is constructed, from which the multi-universe has formed in the same way. It is assumed that the intense energy has escaped from the singularity cone of the multi-universes. Initially, the single mass energy is confined, after which it is disturbed by the space-time distortion. It splits into an entangled pair, where circular motion is established. We consider one side of the entangled pair, where the fusion energy of the strong coupling force has formed. The growth of the fusion energy exhibits quantum physics phenomena, where the particle moves along the circumference with a speed faster than light. This introduces the wave-particle duality aspect, which will be saturated at the stopping point. It will be re-run again and again without limitation, which is to say that the universe has been created and expanded. The Bose-Einstein condensate (BEC) is released through the singularity by the wormhole, which will be condensed to become a mass associated with the Sun's size and will circulate (orbit) along the Sun. The uncertainty principle is then applied, in which breath control follows the uncertainty condition ∆p∆x=∆E∆t~ℏ. The flow of air in and out of the body via the nose applies momentum and energy control with respect to movement and time, the target being that the distortion of space-time will have vanished. Finally, the body is clean and can go to the next procedure, where the mind can escape from the body at the speed of light. However, the borderline between contemplation and being an Arahant is a vacuum, which will be explained.
Keywords: space-time, relativity, enlightenment, emptiness
Procedia PDF Downloads 67
212 [Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method
Authors: Lina Wu, Jia Liu, Ye Li
Abstract:
The goal of this project is to investigate constant properties (called the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for a p-energy functional have been applied in the calculus variation method as computation techniques. Stokes' Theorem, the Cauchy-Schwarz Inequality, Hardy-Sobolev type inequalities, and the Bochner Formula have been used as estimation techniques to estimate the lower bound and the upper bound of the derived p-Harmonic Stability Inequality. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to lie in the closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in an analysis of the p-Harmonic Stability Inequality when a p-energy minimizing map is not constant. Therefore, the possibility of a non-constant p-energy minimizing map has been ruled out and the constant property for a p-energy minimizing map has been obtained. Our research finding is to explore the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid in a certain range of p. The certain range of p is determined by the dimension values of the Euclidean space (the domain) and the ellipsoid (the target space). The certain range of p is also bounded by the curvature values on the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our research finding on an ellipsoid is a generalization of mathematicians' results on a sphere.
Our result is also an extension of mathematicians’ Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting.
Keywords: Bochner formula, Calculus Variation Method, Stokes' Theorem, Cauchy-Schwarz Inequality, first and second variation formulas, Liouville-type problem, p-harmonic map
Procedia PDF Downloads 274
211 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English
Authors: Duong Thuy Nguyen, Giulia Bencini
Abstract:
The present study examined structural and prosodic factors in the computation of antecedent-reflexive relationships and sentence comprehension in native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on a computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch), and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers’ comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups demonstrated differences in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and the phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with English L1 and L2 English-Spanish speakers. 
The results provide further support for a Good-Enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlight similarities and differences between L1 and L2 sentence processing and comprehension.
Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing
Procedia PDF Downloads 152
210 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”
Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen
Abstract:
Exoplanet atmospheric parameter retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet’s atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the previous model’s speed and accuracy. We demonstrate the efficacy of artificial intelligence to quickly and reliably predict atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. 
With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval
Procedia PDF Downloads 170209 Green approach of Anticorrosion Coating of Steel Based on Polybenzoxazine/Henna Nanocomposites
Authors: Salwa M. Elmesallamy, Ahmed A. Farag, Magd M. Badr, Dalia S. Fathy, Ahmed Bakry, Mona A. El-Etre
Abstract:
The term green environment is an international trend, and it has become imperative to treat the corrosion of steel with a green coating that protects the environment from the potential adverse effects of traditional materials. A series of polybenzoxazine/henna composites (PBZ/henna) with different weight percentages of henna (3, 5, and 7 wt%) were prepared for the corrosion protection of carbon steel. The structures of the prepared composites were verified using FTIR analysis. The mechanical properties of the resins, such as adhesion, hardness, binding, and tensile strength, were also measured. It was found that the tensile strength increases with henna loading, up to 25% higher than that of the neat resin. The thermal stability was investigated by thermogravimetric analysis (TGA); the loading of lawsone (henna) molecules into the PBZ matrix increases the thermal stability of the composite. UV stability was tested with a UV weathering accelerator to examine whether henna can also act as an anti-aging UV stabilizer. The effect of henna content on the corrosion resistance of the composite coatings was tested using potentiostatic polarization and electrochemical spectroscopy. The presence of henna in the coating matrix enhances the protection efficiency of the polybenzoxazine coats, and increasing the henna concentration increases the protection efficiency of the composites. Quantum chemical calculations for the polybenzoxazine/henna composites showed that the composite with the highest corrosion inhibition efficiency has the highest EHOMO and the lowest ELUMO, in good agreement with the experimental results.
Keywords: polybenzoxazine, corrosion, green chemistry, carbon steel
Procedia PDF Downloads 95208 Electrochemical Behavior of Cocaine on Carbon Paste Electrode Chemically Modified with Cu(II) Trans 3-MeO Salcn Complex
Authors: Alex Soares Castro, Matheus Manoel Teles de Menezes, Larissa Silva de Azevedo, Ana Carolina Caleffi Patelli, Osmair Vital de Oliveira, Aline Thais Bruni, Marcelo Firmino de Oliveira
Abstract:
Considering the problem of the seizure of illicit drugs, as well as the development of electrochemical sensors using chemically modified electrodes, this work studies the electrochemical activity of cocaine on a carbon paste electrode chemically modified with the Cu(II) trans 3-MeO salcn complex. Cyclic voltammetry was performed in a 0.1 mol L⁻¹ KCl supporting electrolyte at a scan rate of 100 mV s⁻¹, using an electrochemical cell composed of three electrodes: an Ag/AgCl electrode (filled with 3 mol L⁻¹ KCl) from Metrohm® as the reference electrode, a platinum spiral electrode as the auxiliary electrode, and a carbon paste electrode chemically modified with the Cu(II) trans 3-MeO complex as the working electrode. Two forms of cocaine were analyzed: cocaine hydrochloride (pH 3) and the cocaine free base (pH 8). The PM7 computational method predicted that the hydrochloride form is more stable than the free base form, and accordingly, cyclic voltammetry showed an electrochemical signal only for cocaine hydrochloride, with an anodic peak at 1.10 V; within a linearity range between 2 and 20 μmol L⁻¹, the LD and LQ were 2.39 and 7.26×10⁻⁵ mol L⁻¹, respectively. The study also showed that cocaine is adsorbed on the surface of the working electrode in an irreversible process in which only anodic peaks are observed: the oxidation of cocaine, which occurs in the hydrophilic region through the loss of two electrons. The mechanism of this reaction was confirmed by an ab initio quantum method.
Keywords: ab initio computational method, analytical method, cocaine, Schiff base complex, voltammetry
Procedia PDF Downloads 194207 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases
Abstract:
Legal artificial intelligence (LegalAI) has seen increasing application within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures. Most existing language models have difficulty understanding the long-distance dependencies between different structures. Another unique challenge is that, while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, there remains a significant obstacle in the lack of labeled datasets. This deficiency makes it difficult to train models with strong generalization capabilities, as well as to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments in Taiwanese legal documents. This model is trained on a self-constructed dataset, which this research has independently labeled to measure judgment similarities, thereby addressing a void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both the dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which utilizes the notion of similar case scenarios within the same type of legal cases. Our experimental results illustrate our model's significant advancements in handling similarity comparisons within extensive legal judgments. 
By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.
Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases
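The Longformer handles long judgments by replacing full self-attention with a sliding local window plus a few global tokens, so cost grows linearly rather than quadratically with sequence length. As a minimal sketch of that attention pattern (the window size and global-token choice here are illustrative assumptions, not the paper's configuration):

```python
# Sketch of the Longformer sparse-attention pattern: each token attends only
# to a fixed window of neighbours, plus designated "global" tokens (e.g. a
# [CLS]-style token) that attend everywhere and are attended to by everyone.
def longformer_mask(seq_len, window, global_tokens=(0,)):
    """Return a seq_len x seq_len boolean mask: True = attention allowed."""
    mask = [[abs(i - j) <= window for j in range(seq_len)]
            for i in range(seq_len)]
    for g in global_tokens:               # global tokens: full row and column
        for j in range(seq_len):
            mask[g][j] = True
            mask[j][g] = True
    return mask

mask = longformer_mask(seq_len=8, window=2)
# Each row holds at most 2*window + 1 local entries plus the global columns,
# so the number of allowed pairs grows linearly in seq_len.
allowed = sum(sum(row) for row in mask)
```

Full attention over 8 tokens would allow 64 pairs; the sparse mask above allows far fewer, and the gap widens rapidly for judgment-length sequences.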
Procedia PDF Downloads 72206 InP Nanocrystals Core and Surface Electronic Structure from Ab Initio Calculations
Authors: Hamad R. Jappor, Zeyad Adnan Saleh, Mudar A. Abdulsattar
Abstract:
The ab initio restricted Hartree-Fock method is used to simulate the electronic structure of indium phosphide (InP) nanocrystals (NCs) (216-738 atoms) with sizes ranging up to about 2.5 nm in diameter. The calculations are divided into two parts, surface and core. The oxygenated (001)-(1×1) facet, which expands with larger nanocrystal sizes, is investigated to determine the role of the surface in the nanocrystals' electronic structure. Results show that the lattice constant and ionicity of the core part decrease as the nanocrystals grow in size. The smallest investigated nanocrystal is 1.6% larger in lattice constant and 131.05% larger in ionicity than the converged values of the largest investigated nanocrystal. Increasing the nanocrystal size also resulted in an increase of the core cohesive energy (absolute value), an increase of the core energy gap, and an increase of the core valence bandwidth. The surface states are found to be mostly non-degenerate because of the effect of surface discontinuity and oxygen atoms. The valence bandwidth is wider on the surface due to splitting and oxygen atoms. The method also shows fluctuations in the converged energy gap, valence bandwidth, and cohesive energy of the core part of the nanocrystals due to shape variation. The present work suggests the addition of ionicity and lattice constant to the quantities that are affected by the quantum confinement phenomenon. The present model yields threefold results: it can be used to approach the electronic structure of bulk crystals, surfaces, and nanocrystals.
Keywords: InP, nanocrystals core, ionicity, Hartree-Fock method, large unit cell
Procedia PDF Downloads 399205 Mg and MgN₃ Cluster in Diamond: Quantum Mechanical Studies
Authors: T. S. Almutairi, Paul May, Neil Allan
Abstract:
The geometrical, electronic, and magnetic properties of the neutral Mg center and the MgN₃ cluster in diamond have been studied theoretically in detail by means of an HSE06 Hamiltonian that includes a fraction of the exact exchange term; this is important for a satisfactory picture of the electronic states of open-shell systems. A second batch of calculations with GGA functionals has also been included for comparison, and these support the results from HSE06. The local perturbations introduced in the lattice by the Mg defect are restricted to the first and second shells of atoms, beyond which they die out. The formation energy of the single Mg defect calculated with HSE06 and GGA agrees with the previous result. We found that the triplet state with C₃ᵥ symmetry is the ground state of the Mg center, with energy lower than that of the singlet with C₂ᵥ symmetry by ~0.1 eV. The recent experimental ZPL (557.4 nm) of the Mg center in diamond is discussed in view of the present work. The analysis of the band structure of the MgN₃ cluster confirms that the MgN₃ defect introduces a shallow donor level in the gap, lying close to the conduction band edge. This observation is supported by the empirical marker method (EMM), which produces n-type levels shallower than the P donor level. The formation energy of MgN₂ calculated from a 2NV defect (~3.6 eV) is a promising value from which to engineer MgN₃ defects inside diamond. Ion implantation followed by heating to about 1200-1600°C might induce migration of N-related defects to the localized Mg center. Temperature control is needed for this process to restore the damage and ensure the mobilities of V and N, which demands a more precise experimental study.
Keywords: empirical marker method, generalised gradient approximation, Heyd-Scuseria-Ernzerhof screened hybrid functional, zero phonon line
Procedia PDF Downloads 115204 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic
Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx
Abstract:
Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) solver (interFoam) was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam. 
These simulations shed new light on the cone's behavior within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, enabling a better evaluation of the efficiency of the cones as foam breakers. This study contributes to clarifying the mechanisms behind foam breaker performance, at least in part, using modern CFD techniques.
Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM
Procedia PDF Downloads 203203 Exploring the ‘Many Worlds’ Interpretation in Both a Philosophical and Creative Literary Framework
Authors: Jane Larkin
Abstract:
Combining elements of philosophy, science, and creative writing, this investigation explores how a philosophically structured science-fiction novel can challenge the theory of the linearity and singularity of time through the ‘many worlds’ theory. This concept is addressed through the creation of a research exegesis and an accompanying creative artefact, designed to be read in conjunction with each other in an explorative, interwoven manner. Research undertaken into scientific concepts, such as the ‘many worlds’ interpretation of quantum mechanics, and into diverse philosophers and their ideologies on time, is embodied in an original science-fiction narrative titled It Goes On. The five frames that make up the creative artefact are enhanced not only by five leading philosophers and their philosophies on time but also by an appreciation of the research, which comes first in the paper. Research into traditional approaches to storytelling is creatively and innovatively inverted in several ways, thus challenging the singularity and linearity of time. Further nonconventional approaches to literary technique include an abstract narrator, embodied by time, a concept, and a figure in the text, whose voice and vantage point in relation to death further the unreliability of the notion of time. These choices challenge individuals’ understanding of complex scientific and philosophical views in a variety of ways. The science-fiction genre is essential when considering the speculative nature of It Goes On, which deals with parallel realities and is a fantastical exploration of human ingenuity in plausible futures. Therefore, this paper documents the research-led methodology used to create It Goes On, the application of the ‘many worlds’ theory within a framed narrative, and the many innovative techniques used to contribute new knowledge in a variety of fields.
Keywords: time, many-worlds theory, Heideggerian philosophy, framed narrative
Procedia PDF Downloads 84202 An Improved Total Variation Regularization Method for Denoising Magnetocardiography
Authors: Yanping Liao, Congcong He, Ruigang Zhao
Abstract:
The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). It is difficult to extract the magnetocardiography (MCG) signal, which is buried in noise; this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is mainly divided into three parts. First, higher-order TV is applied to reduce the step effect, and the corresponding second-order derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve the signal's peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation
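The core idea of TV denoising is to minimize a data-fidelity term plus a penalty on the signal's total variation, so that noise is flattened while sharp features survive. The sketch below is a minimal 1-D illustration using gradient descent on a smoothed TV penalty; it is not the paper's majorization-minimization algorithm or its higher-order/adaptive improvements, and the signal and parameter values are invented:

```python
import math

def tv_denoise(y, lam=0.3, eps=1e-2, step=0.05, iters=2000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    x = list(y)
    for _ in range(iters):
        grad = [xi - yi for xi, yi in zip(x, y)]      # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            g = lam * d / math.sqrt(d * d + eps)      # derivative of smoothed |d|
            grad[i] -= g
            grad[i + 1] += g
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

# A noisy step signal: the TV prior flattens small fluctuations
# but largely preserves the genuine edge between the two plateaus.
noisy = [0.10, -0.05, 0.08, 1.10, 0.95, 1.02]
clean = tv_denoise(noisy)
```

The "step effect" the abstract targets is visible in this formulation's bias toward piecewise-constant output; the paper's second-order derivative matrix penalizes curvature instead, favoring piecewise-linear reconstructions.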
Procedia PDF Downloads 153201 Implementation of Lean Tools (Value Stream Mapping and ECRS) in an Oil Refinery
Authors: Ronita Singh, Yaman Pattanaik, Soham Lalwala
Abstract:
In today’s highly competitive business environment, every organization is striving towards lean manufacturing systems to achieve lower production lead times, lower costs, less inventory, and overall improvement in supply chain efficiency. Based on this idea, this paper presents the practical application of the Value Stream Mapping (VSM) tool and the ECRS (Eliminate, Combine, Reduce, and Simplify) technique in the receipt section of the material management center of an oil refinery. A value stream is an assortment of all actions (value-added as well as non-value-added) that are required to bring a product through the essential flows, starting with raw material and ending with the customer. For drawing the current-state value stream map, all relevant data of the receipt cycle were collected and analyzed. The current-state map was then analyzed to determine the type and quantum of waste at every stage, which helped in ascertaining how far the warehouse is from the concept of lean manufacturing. From the results of the current VSM, it was observed that two processes, preparation of the GRN (Goods Receipt Number) and preparation of the UD (Usage Decision), are bottleneck operations with higher cycle times. This root-cause analysis of the various types of waste helped in designing a strategy for the step-wise implementation of lean tools. The future state thus created a lean flow of materials at the warehouse center, reducing the lead time of the receipt cycle from 11 days to 7 days and increasing overall efficiency by 27.27%.
Keywords: current VSM, ECRS, future VSM, receipt cycle, supply chain, VSM
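The VSM workflow the abstract describes, mapping step cycle times, identifying the bottlenecks, and projecting a future-state lead time after ECRS improvements, can be sketched numerically. The step names and cycle times below are invented for illustration (only the 11-day/7-day totals echo the abstract), not the refinery's actual data:

```python
# Toy current-state map: receipt-cycle steps with cycle times in days.
receipt_cycle = {
    "unloading": 0.5,
    "inspection": 1.0,
    "GRN preparation": 4.0,   # Goods Receipt Number
    "UD preparation": 4.5,    # Usage Decision
    "binning": 1.0,
}

bottleneck = max(receipt_cycle, key=receipt_cycle.get)  # longest step
lead_time = sum(receipt_cycle.values())                 # 11.0 days

# ECRS asks of each step: can it be Eliminated, Combined, Reduced,
# or Simplified? Here we Reduce the two bottleneck steps:
future = dict(receipt_cycle, **{"GRN preparation": 2.0, "UD preparation": 2.5})
future_lead_time = sum(future.values())                 # 7.0 days
improvement = (lead_time - future_lead_time) / lead_time * 100
```

With these illustrative numbers the lead time drops from 11 to 7 days, a reduction of about 36% in lead time (the abstract's 27.27% figure is its own efficiency metric, computed differently).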
Procedia PDF Downloads 315200 Embedded Visual Perception for Autonomous Agricultural Machines Using Lightweight Convolutional Neural Networks
Authors: René A. Sørensen, Søren Skovsen, Peter Christiansen, Henrik Karstoft
Abstract:
Autonomous agricultural machines act in stochastic surroundings and must therefore be able to perceive their surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular deep learning. Deep convolutional neural networks excel at labeling and perceiving color images, and since the cost of high-quality RGB cameras is low, the hardware cost of good perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g. human, animal, sky, road, field, shelterbelt, and obstacle), and the ability to provide accurate real-time perception of agricultural surroundings is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method is complete at the abstract submission deadline. Following this deadline, the finalized design is quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed. 
It is feasible to provide cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.
Keywords: autonomous agricultural machines, deep learning, safety, visual perception
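The remapping of the 400 fine-grained classes into agricultural superclasses is a simple per-pixel lookup over the segmentation output. A minimal sketch (the mapping entries below are a small invented subset, not the paper's actual class table):

```python
# Illustrative subset of a fine-grained-label -> superclass mapping.
SUPERCLASS = {
    "person": "human", "child": "human",
    "dog": "animal", "cow": "animal", "deer": "animal",
    "sky": "sky", "road": "road",
    "grass": "field", "wheat": "field",
    "tree": "shelterbelt", "hedge": "shelterbelt",
    "rock": "obstacle", "fence": "obstacle",
}

def remap_segmentation(label_map):
    """Remap a 2-D per-pixel label map into superclasses.

    Unmapped labels fall back to 'obstacle' -- a conservative choice
    for a safety-oriented mower (assumption, not the paper's policy).
    """
    return [[SUPERCLASS.get(lbl, "obstacle") for lbl in row]
            for row in label_map]

pixels = [["grass", "grass", "dog"],
          ["grass", "rock", "sky"]]
remapped = remap_segmentation(pixels)
```

Collapsing 400 classes into roughly seven superclasses also shrinks the network's output layer, which is one lever for meeting the embedded memory budget.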
Procedia PDF Downloads 395199 Thin Films of Glassy Carbon Prepared by Cluster Deposition
Authors: Hatem Diaf, Patrice Melinon, Antonio Pereira, Bernard Moine, Nicholas Blanchard, Florent Bourquard, Florence Garrelie, Christophe Donnet
Abstract:
Glassy carbon exhibits excellent biological compatibility with live tissues, meaning it has high potential for applications in the life sciences. Moreover, glassy carbon has interesting properties, including high-temperature resistance, hardness, low density, low electrical resistance, low friction, and low thermal resistance. The structure of glassy carbon has long been a subject of debate. It is now admitted that glassy carbon is 100% sp². This term is a little confusing, since sp² hybridization as defined in quantum chemistry relates to both properties: threefold coordination and π bonding (parallel pz orbitals). Using plasma laser deposition of carbon clusters combined with pulsed nano/femtosecond laser annealing, we are able to synthesize thin films of glassy carbon of good quality (probed by the G band/D (disorder) band ratio in Raman spectroscopy) without thermal post-annealing. A careful inspection of the Raman signal, plasmon losses, and the structure observed by HRTEM (High Resolution Transmission Electron Microscopy) reveals that both properties (threefold coordination and π orbitals) cannot coexist together. The structure of the films is compared to models including schwarzites, based on negatively curved surfaces, in contrast to onion- or fullerene-like structures with positively curved surfaces. This study shows that a large family of porous carbons, named vitreous carbon, with different structures can coexist.
Keywords: glassy carbon, cluster deposition, coating, electronic structure
Procedia PDF Downloads 319198 Cost-Effective Mechatronic Gaming Device for Post-Stroke Hand Rehabilitation
Authors: A. Raj Kumar, S. Bilaloglu
Abstract:
Stroke is a leading cause of adult disability worldwide. We depend on our hands for our activities of daily living (ADL). Although many patients regain the ability to walk, they continue to experience long-term hand motor impairments. As the number of individuals with young stroke is increasing, there is a critical need for effective approaches to the rehabilitation of hand function post-stroke. Motor relearning for dexterity requires task-specific kinesthetic, tactile, and visual feedback. However, when a stroke results in both sensory and motor impairment, it becomes difficult to ascertain when and what type of sensory substitutions can facilitate motor relearning. In an ideal situation, real-time task-specific data on the ability to learn, together with data-driven feedback to assist such learning, would greatly assist rehabilitation for dexterity. We have found that kinesthetic and tactile information from the unaffected hand can help patients relearn the use of optimal fingertip forces during a grasp-and-lift task. Measurements of fingertip grip force (GF), load force (LF), their corresponding rates (GFR and LFR), and other metrics can be used to gauge the impairment level and progress during learning. Currently, ATI mini force-torque sensors are used in research settings to measure and compute the LF and GF and their rates while grasping objects of different weights and textures. Use of the ATI sensor is cost-prohibitive for deployment in clinical or at-home rehabilitation. A cost-effective mechatronic device is developed to quantify GF, LF, and their rates for stroke rehabilitation purposes, using off-the-shelf components such as load cells, flexi-force sensors, and an Arduino UNO microcontroller. A salient feature of the device is its integration with an interactive gaming environment to render a highly engaging user experience. 
This paper elaborates on the integration of kinesthetic and tactile sensing through the computation of LF, GF, and their corresponding rates in real time, information processing, and interactive interfacing through augmented reality for visual feedback.
Keywords: feedback, gaming, kinesthetic, rehabilitation, tactile
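The force rates (GFR, LFR) the abstract computes in real time are, in essence, time derivatives of the sampled force signals. A minimal finite-difference sketch of that computation (the sampling rate and force values are invented, and this is a generic stand-in, not the device's firmware):

```python
def force_rate(samples, fs):
    """Estimate the derivative of a force signal sampled at fs Hz.

    Central differences in the interior, one-sided at the ends --
    a common low-cost scheme suited to a microcontroller loop.
    """
    n = len(samples)
    rate = [0.0] * n
    for i in range(1, n - 1):
        rate[i] = (samples[i + 1] - samples[i - 1]) * fs / 2.0
    rate[0] = (samples[1] - samples[0]) * fs
    rate[-1] = (samples[-1] - samples[-2]) * fs
    return rate

# A linear load-force ramp at 100 Hz rising 0.02 N per sample
# corresponds to a constant load-force rate of 2 N/s.
lf = [0.00, 0.02, 0.04, 0.06, 0.08]
lfr = force_rate(lf, fs=100)
```

In practice raw load-cell readings are noisy, so a smoothing filter would typically precede the differentiation; differentiation amplifies high-frequency noise.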
Procedia PDF Downloads 240197 Potential Contribution of Local Food Resources towards Sustainable Food Tourism in Nueva Vizcaya
Authors: Marvin Eslava
Abstract:
The over-arching aim of this research is to determine the potential contribution of local food resources to the tourism growth of Nueva Vizcaya. It reviews some of the underpinning concepts and provides a set of considerations for stakeholders to maximize the opportunities that local food can offer to businesses and the wider community. The basis of the study is to develop a sustainable food tourism model for Nueva Vizcaya. For the purpose of this research, a total of 60 respondents were sampled from six municipalities. The respondents were stakeholders consisting of government officials, local producers, businessmen, and non-government organizations in the selected municipalities of Nueva Vizcaya. Stratified purposive sampling was the technique applied to the local government officials and employees, the NGOs, the businessmen associated with local food resources, and the local producers. A documentary study, focus group discussions, and a survey questionnaire were used in order to meet the objectives of the study. The Kruskal-Wallis test was used to test for differences in the ratings of the participants; this was used in testing the hypothesis. The study concluded that the province of Nueva Vizcaya is blessed with rich farmlands, and its fertile mountain soil produces high-quality agricultural products. It is home to various indigenous groups, giving rise to a wide range of local cuisines. The province shows substantial local food development, evidenced by its various food-tourism-related resources, an increase in facilities, and the celebration of food-tourism-related events. The local food resources provide extensive potential for economic empowerment and help build the identity of the province. In addition, the local food resources extensively enhance the agriculture sector and other attractions in the province. 
Finally, they help preserve the authenticity of the food culture and generate pride among all stakeholders. All stakeholders have the same perception of the potential contribution of local food resources to the development of the province of Nueva Vizcaya. The public and private sectors are cognizant of their roles in supporting the production of local food resources in Nueva Vizcaya. Major challenges and barriers to the development of sustainable food tourism in Nueva Vizcaya include production (supply) and marketing.
Keywords: local food resources, contribution, food tourism, benefits
Procedia PDF Downloads 261