Search results for: artificial groundwater recharge
226 AI-Powered Conversation Tools - Chatbots: Opportunities and Challenges That Present to Academics within Higher Education
Authors: Jinming Du
Abstract:
With the onset of the COVID-19 pandemic in 2020, many higher education institutions and education systems turned to hybrid or fully online distance courses to maintain social distance and provide a safe virtual space for learning and teaching. However, most faculty members were not well prepared for the shift to blended or distance learning, and communication frustrations are prevalent in both course formats. A systematic literature review was conducted through a comprehensive analysis of 1,688 publications on the adoption of chatbots in education. This study aimed to explore instructors' experiences with chatbots in online and blended undergraduate English courses. Language learners are overwhelmed by the variety of information offered by online sources, and recently emerged chatbots (e.g., ChatGPT) perform somewhat better than earlier technologies such as tapes, video recorders, and websites. The field of chatbots has been intensively researched, and new methods have been developed to demonstrate how students can best learn and practice a new language in the target language. Yet while chatbots have been used as effective tools for communicating with business customers, in consulting, and in the medical field, they have not been fully explored and implemented in language education. This issue is challenging for language teachers, who need to study and research it carefully. Pedagogical chatbots may alleviate the perceived lack of communication and feedback from instructors by interacting naturally with students and scaffolding learners' understanding, much as educators do.
However, educators and instructors often lack the proficiency to operate this emerging AI chatbot technology effectively and require comprehensive study or structured training to attain competence. There is a gap between language teachers' perceptions and recent advances in the application of AI chatbots to language learning. The study found that although teachers felt chatbots did well at giving feedback, they needed additional training to give better instructions and to use chatbots as teaching aids. Teachers generally perceive chatbots as offering substantial assistance to English language instruction. Keywords: artificial intelligence in education, chatbots, education and technology, education system, pedagogical chatbot, chatbots and language education
Procedia PDF Downloads 66
225 Save Balance of Power: Can We?
Authors: Swati Arun
Abstract:
The present paper argues that Balance of Power (BOP) theory needs to be conjugated with certain contingencies, such as geography. It is evident that sea powers ('insular' powers, for clarity) are not balanced, if at all, in the same way as land powers. The artificial insularity that the US has achieved reduces the chances of its being balanced (a constant) and helps it maintain preponderance (a variable). But how precise is this approach in assessing the dynamics between China's rise and the reactions of other powers and the US? The 'evolved' theory can be validated by putting China and the US into the equation. Systemic relations between nations were explained through BOP theory long before systems theory was propounded. BOP is the crux of the dynamics of power relations, and it has played its role in astounding ways, leading to situations of both war and peace. Whimsically but truly, BOP has remained a complicated and ill-defined concept from Hans Morgenthau to Kenneth Waltz. A standing challenge to BOP is that "it has too many meanings." In recent times it has become evident that the myriad expectations generated by BOP have not met the practicality of current world politics. For this reason, BOP has been replaced by preponderance theory (PT) to explain the prevailing power situation. PT provides empirical reasoning for its success but fails to supply the abstract logical reasoning required to make a theory universal. Unipolarity describes the current system as one in which the balance of power has become redundant; it reaches beyond the contours of BOP, where a superpower does what it must to remain one. The centrality of this argument pivots on an exception: every time BOP fails to operate, preponderance of power emerges. PT does not sit well with the primary logic of a theory because it works on an exception.
The evolution of a pattern and system in which BOP fails and preponderance emerges is absent. The puzzle here is whether BOP has really become redundant or merely needs polishing. The international power structure changed from multipolar to bipolar to unipolar, and BOP was expected to provide the inevitable logic behind such changes and answer the dilemma we see today: why is the US unchecked and unbalanced? But why was Britain unchecked in the 19th century, and why was China unbalanced in the 13th century? It is the insularity of a state that makes BOP reproduce an "imbalance of power," going a level up from the off-shore balancer. This luxury of a state to maintain imbalance in a region of competition or threat is the causal relation between BOP and geography. America has applied imbalancing, meaning disequilibrium in its favor, to maintain the regional balance so that, over time, the weaker does not grow stronger and pose a competition. It could do so because of the significant disparity between the US and the rest. Keywords: balance of power, China, preponderance of power, US
Procedia PDF Downloads 278
224 Development of Coastal Inundation–Inland and River Flow Interface Module Based on 2D Hydrodynamic Model
Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han
Abstract:
Due to climate change, coastal urban areas repeatedly suffer losses of property and life from flooding. There are three main causes of inland submergence. First, when rain of high intensity occurs, inland water cannot drain into rivers because of the increase in impervious surface from land development and defects in pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising seawater. Previous studies, however, ignored the complex mechanism of flooding and showed discrepancies and inadequacies due to the linear summation of separate analysis results. In this study, inland flooding and river inundation were analyzed together with the HDM-2D model. The Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate inland flooding. In addition, sink/source terms with an exponential growth rate were added to the shallow water equations to incorporate the inland flooding analysis module. Applications of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider coastal surge, another module was developed by adding seawater to the existing inland flooding-river inundation binding module. Based on the combined modules, the coastal inundation-inland and river flow interface was simulated by inputting flow rate and depth data in an artificial flume. Accordingly, it was possible to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas where compound flooding occurs, and to assist in analyzing damage to coastal cities.
Acknowledgements: This research was supported by the grant 'Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change' [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea. Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model
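In hedged form, the modification described (sink/source terms added to the shallow water equations) might be written as follows; the exponential form of the source term S is an assumption based on the abstract's wording, not the authors' stated equation:

```latex
% 2D shallow water equations (continuity and x-momentum shown) with an
% added sink/source term S(x, y, t); the exponential-growth form of S
% below is an assumption inferred from the abstract's description.
\begin{aligned}
\frac{\partial h}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} &= S, \\
\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\!\left(hu^{2} + \tfrac{1}{2} g h^{2}\right)
  + \frac{\partial (huv)}{\partial y} &= -\,g h \frac{\partial z_b}{\partial x}, \\
S(x, y, t) &= S_0(x, y)\, e^{k t},
\end{aligned}
```

where h is water depth, (u, v) the depth-averaged velocities, z_b the bed elevation, and k a growth-rate constant.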
Procedia PDF Downloads 362
223 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, smell is the most reminiscent and the least understood. Odor testing has remained mysterious, and odor data opaque, to most practitioners, so the problem of odor recognition and classification is important to solve. The ability to smell a product and predict whether it is still usable or has become unfit for consumption is worth capturing in a model. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly constructive. For cataloging the odors of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely applied, but they cannot make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they can handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step toward forming a probability density function. Here, Linear Discriminant Analysis and the Naive Bayes classifier were used for classifying the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem.
The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The instrument used for artificial odor sensing and classification is an electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results were evaluated with the performance measures accuracy, precision, and recall, and they show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset. Keywords: odor classification, generative models, naive bayes, linear discriminant analysis
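The comparison described, Linear Discriminant Analysis versus a (Gaussian) Naive Bayes classifier evaluated by accuracy, precision, and recall, can be sketched with scikit-learn. The cashew e-nose dataset is not public, so a synthetic dataset stands in for the sensor readings:

```python
# Sketch of the LDA vs. Naive Bayes comparison on stand-in "e-nose" data;
# the actual cashew dataset and its feature count are not available here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic multi-class sensor data (8 hypothetical e-nose channels).
X, y = make_classification(n_samples=300, n_features=8, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("NB", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)
    results[name] = {
        "accuracy": accuracy_score(y_te, y_pred),
        "precision": precision_score(y_te, y_pred, average="macro"),
        "recall": recall_score(y_te, y_pred, average="macro"),
    }
```

On a real dataset, the three metrics per model would then be compared as in the abstract.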
Procedia PDF Downloads 387
222 Optimized Deep Learning-Based Facial Emotion Recognition System
Authors: Erick C. Valverde, Wansu Lim
Abstract:
Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) as well as better human social interactions with smart technologies. A FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such systems require intensive research to address issues with human diversity, unique human expressions, and the variety of human facial features across age groups; these issues generally limit a FER system's ability to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional systems suffer from low accuracy because they fail to extract significant features of several human emotions. To increase accuracy, deep learning (DL)-based methods, such as convolutional neural networks (CNN), have been proposed; these methods can find more complex features in the human face through the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively.
To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to apply advanced optimization techniques to the Xception model and improve the inference speed of the FER system. In comparison with VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system. Keywords: deep learning, face detection, facial emotion recognition, network optimization methods
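The two optimizations named above, network pruning and quantization, are applied to the Xception model in the paper using DL tooling; as a minimal, framework-free illustration of the underlying operations (not the authors' implementation), magnitude pruning and uniform affine quantization can be sketched in NumPy:

```python
# Illustrative sketch of the two named optimizations, operating on a raw
# weight array rather than a real Xception layer.
import numpy as np

def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize(weights, n_bits=8):
    """Uniform affine quantization to n_bits levels, then dequantize to float."""
    lo, hi = weights.min(), weights.max()
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((weights - lo) / scale)
    return q * scale + lo
```

In practice, pruned weights are stored sparsely and quantized weights as integers; the float round-trip here is only to show the induced approximation error.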
Procedia PDF Downloads 118
221 Assessment of Designed Outdoor Playspaces as Learning Environments and Its Impact on Child’s Wellbeing: A Case of Bhopal, India
Authors: Richa Raje, Anumol Antony
Abstract:
Play is the foremost stepping stone of childhood development and an essential aspect of a child's learning because it creates meaningful, enduring environmental connections and increases children's performance. Children's proficiencies keep changing over the course of their growth. Innovative activities kindle the senses, feed the love of exploration, overcome linguistic barriers, and support physiological development, allowing children to discover their own caliber, spontaneity, curiosity, cognitive skills, and creativity while learning during play. This paper aims to comprehend the learning in play, the essential underpinning of the outdoor play area, and to assess the trend of playground designs that are merely crammed with equipment. It attempts to derive a relation between the natural environment, children's activities, and the emotions and senses evoked in the process. One major concern with outdoor play is that it is limited to areas with similar kinds of equipment, making play highly regimented and monotonous; this problem is often compounded by the strict timetables of the education system, which hardly accommodate play. For these reasons, play areas remain neglected both in terms of design that allows learning and in terms of wellbeing. Poorly designed spaces fail to inspire the physical, emotional, social, and psychological development of the young. Currently, the play space has been condensed to an enclosed playground, driveway, or backyard, which confines children's capacity to push their boundaries. The paper focuses on children aged 5 to 11, whose behaviors during their interactions in a playground are mapped and analyzed. The theory of affordance is applied to various outdoor play areas in order to study how variedly children perceive and use their environment.
A higher degree of affordance forms the basis for designing activities suitable for play spaces. It was observed during play that children choose certain spaces of interest, mostly natural ones, over artificial equipment. Activities such as rolling on the ground, jumping from a height, molding earth, and hiding behind trees suggest that, despite the equipment provided, children have an affinity for nature. Therefore, designers need to take cues from children's behavior and practices in order to design meaningful spaces that give children the freedom to test their limits. Keywords: children, landscape design, learning environment, nature and play, outdoor play
Procedia PDF Downloads 124
220 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails
Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali
Abstract:
When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and are at the expense of productivity but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise, it would not be possible to forecast components in real-time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. 
Over a period of two months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all six car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics across the four products. A further 10 features correlate via indirect relationships, so the number of features required for a prediction could be reduced to fewer than 20. A correlation factor of >0.8 was required for all correlations. Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis
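The feature-reduction step described above can be illustrated with a greedy filter that drops any feature whose absolute Pearson correlation with an already-kept feature exceeds the stated 0.8 threshold. The data below are synthetic stand-ins, not the CMM measurements, and the greedy keep/drop rule is one common choice, not necessarily the authors' exact procedure:

```python
# Sketch of correlation-based feature reduction with a 0.8 threshold,
# demonstrated on synthetic data where x2 is a near-copy of x1.
import numpy as np
import pandas as pd

def reduce_by_correlation(df, threshold=0.8):
    """Keep a feature only if its |Pearson r| with every already-kept
    feature is at or below the threshold (greedy, in column order)."""
    corr = df.corr().abs()
    kept = []
    for col in df.columns:
        if all(corr.loc[col, k] <= threshold for k in kept):
            kept.append(col)
    return kept

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
df = pd.DataFrame({
    "x1": x1,
    "x2": x1 + rng.normal(scale=0.01, size=200),  # redundant near-duplicate
    "x3": rng.normal(size=200),                    # independent feature
})
kept = reduce_by_correlation(df, threshold=0.8)
```

With ~130 real features per rail variant, the same filter would shrink the inline measurement set before any predictive model is trained.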
Procedia PDF Downloads 48
219 Study of Chemical and Physical - Mechanical Properties Lime Mortar with Addition of Natural Resins
Authors: I. Poot-Ocejo, H. Silva-Poot, J. C. Cruz, A. Yeladaqui-Tello
Abstract:
Mexico has remarkable archaeological remains, mainly in the Maya area, which are critical to the preservation of our cultural heritage, so the authorities have an interest in preserving and restoring these vestiges in the most original way possible by employing traditional techniques, which offer advantages such as compatibility, durability, strength, uniformity, and chemical composition. Recent studies have examined the addition of natural resins extracted from the bark of trees, of which Brosimum alicastrum (Ramon) has been the most evaluated, being one of the most abundant species in the vicinity of archaeological sites, along with Manilkara zapota (Chicozapote). The objective, therefore, is to determine whether these resins can be employed in archaeological restoration. This study presents the chemical composition and physical-mechanical behavior of eight mortar mixtures made with commercial and hand-slaked lime, calcium sand, and resins of Brosimum alicastrum (Ramon) and Manilkara zapota (Chicozapote). The chemical composition of the resins was determined and quantified by X-ray fluorescence (XRF). The pH of the material was measured, indicating that both resins are acidic (3.78 and 4.02), and the maximum addition rate of the resins in water was obtained by means of pulsed ultrasonic baths: 10% in the case of Manilkara zapota, which contains up to 40% rubber, and 40% for Brosimum alicastrum, which contains less rubber. Using a quantitative methodology, the compressive strength of 96 cube specimens (5 cm x 5 cm x 5 cm) of bonding mortar was evaluated: 72 with partial substitution of the mixing water by natural resins (in proportions of 5% and 10% for Manilkara zapota, and 20% and 40% for Brosimum alicastrum), 12 with artificial resin, and 12 without additive (control mortars).
Likewise, 24 brick specimens were bonded with mortar for shear adhesion testing, and the microstructure of the most favorable additions was then examined by scanning electron microscopy (SEM). The test results indicate that adding Manilkara zapota resin at a proportion of 10% increased compressive strength by 1.5% and adhesion by 1% compared with the control mortar without additive. In the case of Brosimum alicastrum, the gains in compressive strength and adhesion were insignificant compared with those recorded for the Manilkara zapota mixtures. Mortars containing the natural resins show improvements in physical properties and increases in mechanical strength and adhesion compared with those that do not, and the components are chemically compatible; it is therefore considered that they can be employed in archaeological restoration. Keywords: lime, mortar, natural resins, Manilkara zapota mixtures, Brosimum alicastrum
Procedia PDF Downloads 371
218 Hansen Solubility Parameter from Surface Measurements
Authors: Neveen AlQasas, Daniel Johnson
Abstract:
Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost-effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Biofouling is a phenomenon that occurs at the water-membrane interface; it is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux; moreover, the presence of a fouling layer can have a substantial impact on the membrane's separation properties. Understanding the mechanism of the initiation phase of biofouling is key to eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials for each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated: knowing the HSP of a membrane material and the HSP of a specific foulant facilitates estimation of the HSP distance between the two, and therefore of the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the rankings were used to calculate the HSP of each polymeric film.
Results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used; estimating HSP via contact angle alone is therefore not sufficient. However, it was found that if the surface tensions and viscosities of the solvents are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out by training a neural network model with three inputs: the contact angle value and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent and the tested material, and this prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films. Keywords: surface characterization, Hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements
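The trained network itself is not published with the abstract; the following is a minimal sketch under the assumption of a small feed-forward regressor with the three stated inputs (contact angle, solvent surface tension, solvent viscosity). The training target here is a made-up smooth function standing in for measured HSP distances, since the real data are not available:

```python
# Sketch of a 3-input neural network regressor for HSP distance;
# the data and the target function are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.uniform(10, 120, n),   # contact angle [deg]
    rng.uniform(18, 73, n),    # solvent surface tension [mN/m]
    rng.uniform(0.3, 20, n),   # solvent viscosity [mPa.s]
])
# Hypothetical smooth relation standing in for measured HSP distances (Ra).
y = 0.1 * X[:, 0] + 0.05 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.5, n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0),
)
model.fit(X, y)
r2 = model.score(X, y)  # training R^2
```

With real measurements, the predicted HSP distances would then back out the individual dispersion, polar, and hydrogen-bonding parameters as described in the abstract.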
Procedia PDF Downloads 94
217 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis
Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi
Abstract:
The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population levels, limitations in data and research still exist in LMICs, which pose a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence of the availability of longitudinal datasets and the degree of consistency in the longitudinal studies conducted. Utilizing prompters proved instrumental in streamlining the analysis process, facilitating access, crafting code snippets, and categorizing and analyzing extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed 10% incorrectly identified articles and 2 duplicates, and underscored the prevalence of longitudinal MH research in South Africa, with a focus on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed, and ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the findability and accessibility of data, and current efforts should also focus on creating integrated data resources and tools to improve the interoperability and reusability of MH data.
Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research. Keywords: longitudinal mental health research, data sharing, fair data principles, Africa, landscape analysis
Procedia PDF Downloads 89
216 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery, and the research and utilization of the inner thermal core structure of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data, obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information in the pressure direction is comprehensively expressed through the maximum intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone; these images provide a coarse-grained wind speed estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tailed distribution of tropical cyclone intensities, focal loss is used in the coarse-grained loss function of the first stage, and an ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 in the North Atlantic (NA) region; the training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper groups tropical cyclones into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the category and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this data. Keywords: artificial intelligence, deep learning, data mining, remote sensing
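The two loss choices named in the abstract can be illustrated generically; this is a NumPy sketch of focal loss and of the binary-threshold target encoding commonly used for ordinal regression, not the authors' network code, and the exact loss variants they used are an assumption:

```python
# Generic sketches of the two losses named in the abstract.
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Focal loss for one-hot labels y and predicted probabilities p:
    the factor (1 - p_t)**gamma down-weights well-classified examples,
    which helps with long-tailed class distributions."""
    p_t = np.sum(p * y, axis=1)
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(np.clip(p_t, 1e-12, 1.0)))

def ordinal_targets(labels, n_classes):
    """Encode ordinal class k as K-1 binary thresholds ([1]*k + [0]*(K-1-k)),
    a standard alternative to one-hot or single-value regression targets."""
    thresholds = np.arange(n_classes - 1)[None, :]
    return (thresholds < np.asarray(labels)[:, None]).astype(float)
```

A network trained with the ordinal encoding predicts each threshold independently; the estimated intensity category is the number of thresholds predicted positive.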
Procedia PDF Downloads 63
215 Serological Evidence of Enzootic Bovine Leukosis in Dairy Cattle Herds in the United Arab Emirates
Authors: Nabeeha Hassan Abdel Jalil, Lulwa Saeed Al Badi, Mouza Ghafan Alkhyeli, Khaja Mohteshamuddin, Ahmad Al Aiyan, Mohamed Elfatih Hamad, Robert Barigye
Abstract:
The present study was done to elucidate the prevalence of enzootic bovine leukosis (EBL) in the UAE: the seroprevalence of EBL was assessed in dairy herds from the Al Ain area of Abu Dhabi (AD) and in indigenous cattle at the Al Ain livestock market (AALM). Of the 949 sera tested by ELISA, 657 were from adult Holstein-Friesians from three farms and 292 from indigenous cattle at the AALM. Differences between the proportions of seropositive cattle were analyzed by the Marascuilo procedure, and questionnaire data on husbandry and biosecurity practices were evaluated. Overall, the aggregated farm and AALM data demonstrated a seroprevalence of 25.9%, compared with 37.0% for the study farms and 1.0% for the indigenous cattle. The seroprevalence rates at farms #1, #2, and #3 were 54.7%, 0.0%, and 26.3%, respectively. Statistically significant differences were noted between the proportions of seropositive cattle for farms #1 and #2 (critical range, CR=0.0803), farms #1 and #3 (p=0.1069), farms #2 and #3 (CR=0.0707), farm #1 and the AALM (CR=0.0819), and farm #3 and the AALM (CR=0.0726), but not between farm #2 and the AALM. The proportions of seropositive animals on farm #1 were 9.8%, 59.8%, 29.3%, and 1.2% in the 12-36, 37-72, 73-108, and 109-144-month-old age groups, respectively, compared with 21.5%, 60.8%, 15.2%, and 2.5% in the respective age groups for farm #3. On both farms and at the AALM, the 37-72-month-old age group showed the highest EBL seroprevalence, while all 57 cattle on farm #2 were seronegative. Farms #1 and #3 had 3,130 and 2,828 intensively managed Holstein-Friesian cattle, respectively, and all animals were routinely immunized against several diseases, but not EBL.
On farms #1 and #3, artificial breeding was practiced using semen sourced from the USA, and from the USA and Canada, respectively. All farms routinely quarantined new stock; farm #1 had previously imported dairy cattle from an unspecified country, and farm #3 from the Netherlands, Australia and South Africa. While farm #1 provided no information on animal nutrition, farm #3 cited using hay, concentrates, and ad lib water. To the authors’ best knowledge, this is the first serological evidence of EBL in the UAE and, as previously reported, the seroprevalence rates are comparatively higher in the intensively managed dairy herds than in indigenous cattle. As two of the study farms previously sourced cattle and semen from overseas, biosecurity protocols need to be revisited to avoid inadvertent EBL incursion, and the possibility of regional transboundary disease spread also needs to be assessed. After the proposed molecular studies have adduced additional data, the relevant UAE animal health authorities may need to develop evidence-based EBL control policies and programs.
Keywords: cattle, enzootic bovine leukosis, seroprevalence, UAE
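The pairwise comparisons above follow the Marascuilo procedure for multiple proportions; a minimal sketch is shown below. The chi-square critical value assumes four comparison groups at α = 0.05, and the farm #1 group size is a placeholder for illustration, not the study's reported count.

```python
import math

# Chi-square critical value for alpha = 0.05, df = k - 1 = 3
# (assuming four groups: farms #1-#3 and the AALM).
CHI2_CRIT = 7.815

def marascuilo_pair(p1, n1, p2, n2):
    """Return (significant, critical_range) for one pairwise comparison.
    |p1 - p2| is declared significant if it exceeds the critical range."""
    cr = math.sqrt(CHI2_CRIT) * math.sqrt(p1 * (1 - p1) / n1 +
                                          p2 * (1 - p2) / n2)
    return abs(p1 - p2) > cr, cr

# Farm #1 (54.7% seropositive) vs farm #2 (0.0% of 57 head); the
# farm #1 sample size of 300 is an assumed figure.
significant, cr = marascuilo_pair(0.547, 300, 0.0, 57)
```

Because farm #2 had zero seropositives, its variance term vanishes and the critical range is driven entirely by farm #1's proportion and sample size.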
Procedia PDF Downloads 143
214 A Green Process for Drop-In Liquid Fuels from Carbon Dioxide, Water, and Solar Energy
Authors: Jian Yu
Abstract:
Carbon dioxide (CO2) from fossil fuel combustion is a primary greenhouse gas emission. It can be mitigated by microalgae through conventional photosynthesis. The algal oil is a feedstock of biodiesel, a carbon-neutral liquid fuel for transportation. The conventional CO2 fixation, however, is quite slow and affected by intermittent solar irradiation. It is also a technical challenge to reform the bio-oil into a drop-in liquid fuel that can be directly used in modern combustion engines with the expected performance. Here, an artificial photosynthesis system is presented to produce a biopolyester and liquid fuels from CO2, water, and solar power. In this green process, solar energy is captured using photovoltaic modules and converted into hydrogen as a stable energy source via water electrolysis. The solar hydrogen is then used to fix CO2 by Cupriavidus necator, a hydrogen-oxidizing bacterium. Under autotrophic conditions, CO2 is reduced to glyceraldehyde-3-phosphate (G3P), which is further utilized for cell growth and biosynthesis of polyhydroxybutyrate (PHB). The maximum cell growth rate reached 10.1 g L-1 day-1, about 25 times faster than that of a typical bio-oil-producing microalga (Neochloris oleoabundans) under stable indoor conditions. With nitrogen nutrient limitation, a large portion of the reduced carbon is stored in PHB (C4H6O2)n, accounting for 50-60% of dry cell mass. PHB is a biodegradable thermoplastic that can find a variety of environmentally friendly applications. It is also a platform material from which small chemicals can be derived. At high temperatures (240-290 °C), the biopolyester is degraded into crotonic acid (C4H6O2). On a solid phosphoric acid catalyst, PHB is deoxygenated via decarboxylation into a hydrocarbon oil (C6-C18) at about 240 °C. Aromatics and alkenes are the major compounds, depending on the reaction conditions.
A gasoline-grade liquid fuel (77 wt% of the oil) and a biodiesel-grade fuel (23 wt% of the oil) were obtained from the hydrocarbon oil via distillation. The formation routes of the hydrocarbon oil from crotonic acid, the major PHB degradation intermediate, are revealed and discussed. This work shows a novel green process by which biodegradable plastics and high-grade liquid fuels can be produced directly from carbon dioxide, water and solar power. The productivity of the green polyester (5.3 g L-1 d-1) is much higher than that of microalgal oil (0.13 g L-1 d-1). Other technical merits of the new green process include continuous operation under intermittent solar irradiation and convenient outdoor scale-up.
Keywords: bioplastics, carbon dioxide fixation, drop-in liquid fuels, green process
Procedia PDF Downloads 189
213 Non-Cytotoxic Natural Sourced Inorganic Hydroxyapatite (HAp) Scaffold Facilitate Bone-like Mechanical Support and Cell Proliferation
Authors: Sudip Mondal, Biswanath Mondal, Sudit S. Mukhopadhyay, Apurba Dey
Abstract:
Bioactive materials can extend the lifespan of implanted devices but have mechanical limitations, so mechanical characterization is one of the most important steps in evaluating the lifespan and functionality of a scaffold material. After implantation, primary rejection of a scaffold occurs when the material is not biocompatible with the host; the second major failure mode is mechanical. Both mechanical and biocompatibility failures can be avoided by prior evaluation of the scaffold material. In this study, chemically treated Labeo rohita scale is used for synthesizing hydroxyapatite (HAp) biomaterial. Thermo-gravimetric and differential thermal analysis (TG-DTA) is carried out to ensure thermal stability. The chemical composition and bond structures of the wet ball-milled, calcined HAp powder are characterized by Fourier Transform Infrared spectroscopy (FTIR), X-ray Diffraction (XRD), Field Emission Scanning Electron Microscopy (FE-SEM), Transmission Electron Microscopy (TEM), and Energy Dispersive X-ray (EDX) analysis. The fish-scale-derived apatite consists of nano-sized particles with a Ca/P ratio of 1.71. Biocompatibility is assessed through cytotoxicity evaluation and MTT assay in MG63 osteoblast cell lines. In the cell attachment study, the cells attached tightly to the HAp scaffolds developed in the laboratory. The results clearly suggest that the HAp material synthesized in this study has no cytotoxic effect and has a natural binding affinity for mammalian cell lines. The synthesized HAp powder was further used to develop a porous scaffold material with suitable mechanical properties (~0.8 GPa compressive strength, ~1.10 GPa hardness, and ~30-35% porosity), which is acceptable for implantation in the trauma region in an animal model.
The histological analysis also supports the bio-affinity of the processed HAp biomaterials in a Wistar rat model used to investigate the contact reaction and stability at the artificial or natural prosthesis interface for biomedical function. This study suggests the natural-sourced, fish-scale-derived HAp material could be used as a suitable alternative biomaterial for tissue engineering applications in the near future.
Keywords: biomaterials, hydroxyapatite, scaffold, mechanical property, tissue engineering
Procedia PDF Downloads 455
212 Profile of the Renal Failure Patients under Haemodialysis at B. P. Koirala Institute of Health Sciences Nepal
Authors: Ram Sharan Mehta, Sanjeev Sharma
Abstract:
Introduction: Haemodialysis (HD) is a mechanical process of removing waste products from the blood and replacing essential substances in patients with renal failure. The first artificial kidney was developed in the Netherlands in 1943; the first successful treatment of chronic renal failure (CRF) was reported in 1960, and life-saving maintenance treatment for CRF began in 1972. In 1973, Medicare took over financial responsibility for many clients, after which the method became popular. B. P. Koirala Institute of Health Sciences (BPKIHS) is the only centre outside Kathmandu where an HD service is available. At BPKIHS, peritoneal dialysis started in January 1998 and HD in August 2002; by September 2003, about 278 patients had received HD. The number of HD patients at BPKIHS is increasing day by day with institutional growth. No such study had been conducted in the past, so valid and reliable baseline data were lacking. Hence, the investigators were interested in conducting the study "Profile of the Renal Failure Patients under Haemodialysis at B. P. Koirala Institute of Health Sciences Nepal". Objectives: The objectives of the study were to find out the socio-demographic characteristics of the patients, to explore the knowledge of the patients regarding the disease process and haemodialysis, and to identify the problems encountered by the patients. Methods: This was a hospital-based exploratory study. The population of the study was clients under HD, and the sampling method was purposive. Fifty-four patients who underwent HD during the complete year from 17 July 2012 to 16 July 2013 were included in the study. A structured interview schedule, checked for validity and reliability, was used to collect data. Results: The 54 subjects who underwent HD had an age range of 5-75 years; the majority were male (74%) and Hindu (93%). Thirty-one percent were illiterate, 28% had agriculture as their occupation, 80% were from very poor communities, and about 30% of subjects were unaware of the disease they were suffering from.
The majority of subjects reported no complications during dialysis (61%), whereas 20% reported nausea and vomiting, 9% hypotension, 4% headache, and 2% chest pain during dialysis. Conclusions: CRF leading to HD is a long battle for patients, requiring major and continuous adjustment, both physiological and psychological. The study suggests that non-compliance with the HD regimen was common. The socio-demographic and knowledge profile will help in the management and early prevention of disease, in evaluating aspects that influence care, and in enabling patients to select a mode of treatment themselves.
Keywords: profile, haemodialysis, Nepal, patients, treatment
Procedia PDF Downloads 375
211 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates
Authors: Takashi Mitsuishi
Abstract:
Fuzzy logic has gained acceptance in recent years in fields of the social sciences and humanities, such as psychology and linguistics, because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is a part of set theory and mathematical logic. The Mamdani method, the most popular technique for approximate reasoning in the field of fuzzy control, is one way to numerically represent the control afforded by human language and sensitivity, and it has been applied in various practical control plants. Fuzzy logic has also been gradually developing as an artificial intelligence technique in different applications such as neural networks, expert systems, and operations research. The objects of inference vary across application fields. Some of these, including time, angle, color, symptom, and medical condition, have fuzzy membership functions that are periodic. In the defuzzification stage, the domain of the membership function should be unique in order to ensure the uniqueness of the defuzzified value. However, if the domain of a periodic membership function is forced to be unique, an unintuitive defuzzified value may be obtained as the inference result when using the center-of-gravity method. Therefore, the authors propose a method of circular-polar-coordinate transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function. The defuzzified value in circular polar coordinates is an argument (angle). Furthermore, this argument must be calculated from a closed plane figure, which is the periodic membership function represented on the circular polar coordinates. If the closed plane figure is treated as continuous, matching the continuity of the membership function, a significant amount of computation is required.
Therefore, to simplify the practical example and significantly reduce the computational complexity, we have discretized the continuous interval and the membership function in this study. The following three methods are proposed to determine the argument from the discrete polygon into which the continuous plane figure is transformed. The first method provides the argument of a straight line passing through the origin and through the arithmetic mean of the coordinates of the polygon's vertices (the physical center of gravity). The second provides the argument of a straight line passing through the origin and the geometric center of gravity of the polygon. The third provides the argument of a straight line passing through the origin that bisects the perimeter of the polygon (or of the closed continuous plane figure).
Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation
Procedia PDF Downloads 363
210 The Quantum Theory of Music and Human Languages
Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer
Abstract:
The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. This is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. As experimentation confirming the theorization, the author designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, music reading and writing software is used to collect data extracted from the author's mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book).
The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer as a structured song (chorus-verse) and ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: language, music, sciences, quantum entanglement
Procedia PDF Downloads 77
209 Perception of Eco-Music From the Contents the Earth’s Sound Ecosystem
Authors: Joni Asitashvili, Eka Chabashvili, Maya Virsaladze, Alexander Chokhonelidze
Abstract:
Studying the soundscape is a major challenge in many countries of the civilized world today. The sound environment, and music itself, are part of the Earth's ecosystem, so researching their positive or negative impact is important for a clean and healthy environment. The acoustics of nature gave people many musical ideas, and people enriched musical features and performance skills with the ability to imitate surrounding sounds. For example, populations surrounded by mountains invented the technique of antiphonal singing, which mimics the effect of an echo. The Canadian composer Raymond Murray Schafer viewed the world as a kind of musical instrument with ever-renewing tuning. He coined the term "soundscape" as a name for natural environmental sound, including the sound field of the Earth; it can be said to be the material from which the "music of nature" is constructed. In the 21st century, a new field, ecomusicology, has emerged within musical art to study the sound ecosystem and various issues related to it. According to Aaron Allen, ecomusicology considers the interconnections between music, culture, and nature. Eco-music is a field of ecomusicology concerned with the depiction and realization of practical processes using modern composition techniques: finding an artificial sound source (instrumental or electronic) for a piece that will blend into the soundscape of "sound oases", and creating a composition that sounds in harmony with the vibrations of humans, nature, the environment, and the micro- and macrocosm as a whole. Currently, we are exploring the ambient sound of the Georgian urban and suburban environment to discover "sound oases" and compose eco-music works. We call a "sound oasis" an environment with a specific ecosystem sound that can be used in a musical piece as an instrument. The most interesting early examples of eco-music are the round dances, which were already created in the BC era. In round dances, people would feel a united energy.
This urge to unite reveals itself in our age too, manifesting in a variety of social media. The virtual world, however, is not enough for healthy interaction, so we created a plan for a "contemporary round dance" in a sound oasis found during an expedition in Georgian caves, where people interact with the cave's soundscape and eco-music, feel each other's shared energy, and listen to the Earth's sound. This project could be considered a contemporary round dance, a long improvisation, and a particular type of art therapy in which everyone can participate in an artistic process. We would like to present the research results of our experimental eco-music performance.
Keywords: eco-music, environment, sound, oasis
Procedia PDF Downloads 61
208 Soils Properties of Alfisols in the Nicoya Peninsula, Guanacaste, Costa Rica
Authors: Elena Listo, Miguel Marchamalo
Abstract:
This research studies the soil properties of the watershed of the Jabillo River in the Guanacaste province, Costa Rica. The soils are classified as Alfisols (T. Haplustalfs), in the flatter parts under grazing as Fluventic Haplustalfs or, as a consequence of poor drainage, as F. Epiaqualfs. The objectives of this project are to define the status of the soil, to use remote sensing as a tool for analyzing the evolution of land use, and to determine the water balance of the watershed in order to improve the efficiency of the water collecting systems. Soil samples were analyzed from trial pits dug in secondary forest, degraded pasture, mature teak plantation, and teak regrowth (Tectona grandis L.f.), a species that has developed favorably in the area. Furthermore, to complete the study, infiltration measurements were taken with an artificial rainfall simulator, as well as soil compaction measurements with a penetrometer, at points strategically selected across the different land uses. Regarding remote sensing, nearly 40 data samples were collected per plot of land. The source of radiation is sunlight reflected from the upper and lower surfaces of leaves, bare soil, streams, roads and logs, and soil samples. Infiltration reached high levels; the highest values came from the secondary forest and mature plantation, due to their high proportion of organic matter, relatively low bulk density, and high hydraulic conductivity. Teak regrowth had a low rate of infiltration, since the soil compaction studies showed partial compaction over the first 50 cm. The secondary forest presented a compacted layer from 15 cm to 30 cm deep, and the degraded pasture, as a result of grazing, in the first 15 cm. In this area, the Alfisols have a high content of iron oxides and a clay texture, which cause higher reflectivity close to the infrared region of the electromagnetic spectrum (around 700 nm).
In the teak plantation specifically, reflectivity reaches values of 90%, owing to its high clay content relative to the other plots. In conclusion, the protective function of secondary forests against erosion is reaffirmed, along with their high rate of infiltration. In humid climates with permeable soils, runoff is lower and percolation is higher. The remote sensing data indicate that, being clayey, these soils retain moisture well, which results in low reflectivity despite their fine texture.
Keywords: alfisols, Costa Rica, infiltration, remote sensing
Procedia PDF Downloads 694
207 The Impact of Artificial Intelligence on Children with Autism
Authors: Rania Melad Kamel Hakim
Abstract:
A descriptive statistical analysis of the data showed that the most important factor evoking negative attitudes among teachers is student behavior. These have been presented as useful models for understanding the risk factors and protective factors associated with the emergence of autistic traits. Although these ‘syndrome’ forms of autism reach clinical thresholds, they appear to be distinctly different from the idiopathic or ‘non-syndrome’ autism phenotype. Most teachers reported that kindergartens did not prepare them for the educational needs of children with autism, particularly in relation to non-verbal skills. The study is important and points the way to improving teacher inclusion education in Thailand. Inclusive education for students with autism is still in its infancy in Thailand. Although the number of autistic children in schools has increased significantly since the Thai government introduced the Education Regulations for Persons with Disabilities Act in 2008, there is a general lack of services for autistic students and their families. This quantitative study used the Teaching Skills and Readiness Scale for Students with Autism (APTSAS) to test the attitudes and readiness of 110 elementary school teachers when teaching students with autism in general education classrooms. To uncover the true nature of these co-morbidities, it is necessary to expand the definition of autism to include the cognitive features of the disorder and then apply this expanded conceptualization to examine patterns of autistic syndromes. This study used various established eye-tracking paradigms to assess the visual and attention performance of children with DS and FXS who meet the autism thresholds defined in the Social Communication Questionnaire. 
To study whether the autistic profiles of these children are associated with visual orientation difficulties (‘sticky attention’), decreased social attention, and increased visual search performance, all of which are hallmarks of the idiopathic autistic child phenotype, data will be collected from children with DS and FXS, aged 6 to 10 years, and two control groups matched for age and intellectual ability (i.e., children with idiopathic autism). In order to enable a comparison of visual attention profiles, cross-sectional analyses of developmental trajectories are carried out. Significant differences in the visual-attentive processes underlying the presentation of autism in children with FXS and DS have been suggested, supporting the concept of syndrome specificity. The study provides insights into the complex heterogeneity associated with autism syndrome symptoms and autism itself, with clinical implications for the utility of autism intervention programs in DS and FXS populations.
Keywords: attitude, autism, teachers, sports activities, movement skills, motor skills
Procedia PDF Downloads 54
206 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong
Abstract:
This paper presents a novel rapid soil classification technique that combines computer vision with four-probe soil electrical resistivity method and cone penetration test (CPT), to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual survey, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using industrial grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of soil (ρ) is measured using a set of four probes arranged in Wenner’s array. It was found from the previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay” and an even mix of the two. 
Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
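As a rough sketch of the texture step described above, a GLCM and a few of its standard textural parameters can be computed as follows. The gray-level quantization, the single pixel offset, and the chosen feature set are simplifying assumptions; the study's actual GLCM parameters are not specified here.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized Grey Level Co-occurrence Matrix for one pixel offset.
    img must contain integer gray levels in [0, levels)."""
    p = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            p[img[i, j], img[i + dy, j + dx]] += 1
    return p / p.sum()

def texture_features(p):
    """Contrast, energy and homogeneity -- typical GLCM-derived inputs
    to a small feed-forward ANN classifier."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

# A perfectly uniform patch has zero contrast and maximal energy.
patch = np.zeros((8, 8), dtype=int)
contrast, energy, homogeneity = texture_features(glcm(patch, levels=4))
```

In a pipeline of this kind, such texture features would be concatenated with the apparent resistivity ρ (and here also w and the CPT reading) before being fed to the ANN decision stage.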
Procedia PDF Downloads 218
205 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints is largely inconsistent across bridge design codes, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g, in increments of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits severe vulnerability compared to the other, more detailed bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels. For the fourth damage state, the reference curve shows the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis
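Fragility curves of the kind described are conventionally modeled as a lognormal CDF of the intensity measure; a minimal sketch is shown below. The median capacity and dispersion values are chosen purely for illustration and are not the study's fitted parameters.

```python
import math

def fragility(pga, theta, beta):
    """P(damage state is reached | PGA): lognormal CDF with median
    capacity theta (in g) and logarithmic standard deviation beta."""
    z = math.log(pga / theta) / (beta * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

# Evaluate over the study's PGA grid, 0.1 g to 1.0 g in 0.05 g steps;
# theta = 0.45 g and beta = 0.5 are illustrative, not fitted, values.
curve = [fragility(0.1 + 0.05 * k, theta=0.45, beta=0.5) for k in range(19)]
```

A component curve of this form is typically fitted to the demand-to-capacity ratios from the nonlinear time history analyses at each PGA level, and the component curves are then combined into the system fragility.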
Procedia PDF Downloads 435
204 The Impact of Artificial Intelligence on Autism Attitudes
Authors: Sara Asham Mahrous Kamel
Abstract:
A descriptive statistical analysis of the data showed that the most important factor evoking negative attitudes among teachers is student behavior. These have been presented as useful models for understanding the risk factors and protective factors associated with the emergence of autistic traits. Although these "syndrome" forms of autism reach clinical thresholds, they appear to be distinctly different from the idiopathic or "non-syndrome" autism phenotype. Most teachers reported that kindergartens did not prepare them for the educational needs of children with autism, particularly in relation to non-verbal skills. The study is important and points the way for improving teacher inclusion education in Thailand. Inclusive education for students with autism is still in its infancy in Thailand. Although the number of autistic children in schools has increased significantly since the Thai government introduced the Education Regulations for Persons with Disabilities Act in 2008, there is a general lack of services for autistic students and their families. This quantitative study used the Teaching Skills and Readiness Scale for Students with Autism (APTSAS) to test the attitudes and readiness of 110 elementary school teachers when teaching students with autism in general education classrooms. To uncover the true nature of these co-morbidities, it is necessary to expand the definition of autism to include the cognitive features of the disorder, and then apply this expanded conceptualization to examine patterns of autistic syndromes. This study used various established eye-tracking paradigms to assess the visual and attention performance of children with DS and FXS who meet the autism thresholds defined in the Social Communication Questionnaire.
To study whether the autistic profiles of these children are associated with visual orientation difficulties ("sticky attention"), decreased social attention, and increased visual search performance, all of which are hallmarks of the idiopathic autistic child phenotype, data will be collected from children with DS and FXS, aged 6 to 10 years, and two control groups matched for age and intellectual ability (i.e., children with idiopathic autism). In order to enable a comparison of visual attention profiles, cross-sectional analyses of developmental trajectories are carried out. Significant differences in the visual-attentive processes underlying the presentation of autism in children with FXS and DS have been suggested, supporting the concept of syndrome specificity. The study provides insights into the complex heterogeneity associated with autism syndrome symptoms and autism itself, with clinical implications for the utility of autism intervention programs in DS and FXS populations.
Keywords: attitude, autism, teachers, sports activities, movement skills, motor skills
Procedia PDF Downloads 54
203 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, assessing these parameters requires human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where protein value has been determined by the FOSS Infratec NOVA, the gold industry standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The task for the first dataset is protein regression; for the second, variety classification. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
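As an illustration of one preprocessing step named above, the standard normal variate (SNV) transform can be sketched as follows; this is a generic NumPy sketch on synthetic spectra, not the authors' pipeline:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum
    (one row per sample) by its own mean and standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two synthetic "spectra" with different baselines and scales.
X = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])
X_snv = snv(X)
# After SNV every row has zero mean and unit standard deviation,
# so additive baseline and multiplicative scatter differences
# between samples are removed.
```

SNV operates per sample, which is why it needs no reference spectrum, unlike multiplicative scatter correction.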
Procedia PDF Downloads 99
202 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers
Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver
Abstract:
Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces as well as cholesterol molecules have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. 
The study's dataset includes six distinct phospholipids together with cholesterol. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids in the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN
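The binary classification step described above can be sketched as follows; the ten geometric features are random stand-in values, and a single logistic unit is used as a minimal stand-in for the paper's ANN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 10 geometric features per molecule.
# Class 0 ~ "phospholipid-like" donors, class 1 ~ "cholesterol-like" acceptors.
X0 = rng.normal(loc=-1.0, scale=0.5, size=(50, 10))
X1 = rng.normal(loc=+1.0, scale=0.5, size=(50, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# A single logistic unit trained by gradient descent: the simplest
# network that performs the same donor-vs-acceptor separation.
w = np.zeros(10)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)        # cross-entropy gradient step
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
```

On cleanly separated synthetic classes like these, the unit converges to near-perfect training accuracy; the real study's features and network are, of course, more involved.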
Procedia PDF Downloads 72
201 Analysis and Modeling of Graphene-Based Percolative Strain Sensor
Authors: Heming Yao
Abstract:
Graphene-based percolative strain gauges could find applications in many places, such as touch panels, artificial skins, and human motion detection, because of their advantages over conventional strain gauges, such as flexibility and transparency. These gauges rely on a novel sensing mechanism driven by strain-induced morphology changes: when compressive or tensile strain is applied, the overlap area between neighboring flakes shrinks or grows, which is reflected in a considerable change of resistance. A tiny change in strain can thus produce a large change in resistance, giving graphene-based percolative strain gauges a high gauge factor. Despite ongoing research into the underlying sensing mechanism and the limits of sensitivity, no suitable understanding has been obtained of which intrinsic factors play the key role in setting the gauge factor, nor of how the sensitivity can be enhanced; such understanding would be considerably meaningful and would provide guidelines for designing novel, easily produced strain sensors with high gauge factors. We here simulated the straining process by modeling graphene flakes and their percolative networks. We constructed a 3D resistance network by simulating the overlapping of graphene flakes and interconnecting the large number of resistance elements obtained by discretizing each flake. As strain increases, the overlapping flakes are displaced on the stretched simulated film, forming a new resistance network with a smaller flake number density. By solving the resistance network, we obtain the resistance of the simulated film under different strains.
Furthermore, by simulating the possible variable parameters, such as out-of-plane resistance, in-plane resistance, and flake size, we obtained the trend of the gauge factor with each of these parameters. Comparison with experimental data verified the feasibility of our model and analysis. Increasing the out-of-plane resistance of the graphene flakes and the initial resistance of the flake-network sensor both improved the gauge factor, while smaller flake sizes gave greater gauge factors. This work can serve as a guideline for improving the sensitivity and applicability of graphene-based strain sensors and also provides a method for finding the limiting gauge factor of strain sensors based on graphene flakes. Moreover, our method can easily be transferred to predict the gauge factor of strain sensors based on other nanostructured transparent conductors, such as nanowires and carbon nanotubes, or their hybrids with graphene flakes.
Keywords: graphene, gauge factor, percolative transport, strain sensor
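The central operation, solving a resistance network to obtain film resistance and hence gauge factor, can be sketched on a toy network; the node layout, resistance values, and 1% strain below are invented for illustration, not taken from the paper:

```python
import numpy as np

def network_resistance(n_nodes, edges, src, dst):
    """Equivalent resistance between src and dst of a resistor
    network given as (node_a, node_b, resistance_ohms) edges,
    via nodal analysis on the weighted graph Laplacian."""
    L = np.zeros((n_nodes, n_nodes))
    for a, b, r in edges:
        g = 1.0 / r          # conductance of this element
        L[a, a] += g
        L[b, b] += g
        L[a, b] -= g
        L[b, a] -= g
    # Inject 1 A at src and extract it at dst, grounding dst;
    # the resulting voltage at src then equals the resistance.
    current = np.zeros(n_nodes)
    current[src] = 1.0
    keep = [i for i in range(n_nodes) if i != dst]
    v = np.zeros(n_nodes)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], current[keep])
    return v[src]

# Toy 4-node stand-in for overlapping flakes: cheap in-plane paths
# (1 ohm) joined by costlier flake-to-flake overlaps (10 ohm).
unstrained = [(0, 1, 1.0), (1, 2, 10.0), (2, 3, 1.0),
              (0, 2, 10.0), (1, 3, 10.0)]
r0 = network_resistance(4, unstrained, 0, 3)
# Strain shrinks the overlap areas, raising the overlap resistances.
strained = [(0, 1, 1.0), (1, 2, 15.0), (2, 3, 1.0),
            (0, 2, 15.0), (1, 3, 15.0)]
r1 = network_resistance(4, strained, 0, 3)
gauge = (r1 - r0) / r0 / 0.01   # gauge factor at an assumed 1% strain
```

The simulation described in the abstract does the same thing at scale: many flakes, each discretized into resistance elements, with strain re-drawing the overlap edges before each solve.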
Procedia PDF Downloads 416
200 The Impact of Technology and Artificial Intelligence on Children in Autism
Authors: Dina Moheb Rashid Michael
Abstract:
Procedia PDF Downloads 55
199 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles
Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
PDAM (the local water company) determines customer charges based on the customer's building or house. Charge determination significantly affects both PDAM income and customer costs, because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target. A thorough customer survey in Surabaya is needed to update customer building data. However, surveys so far have been carried out by deploying officers to visit each PDAM customer one by one, which requires a lot of effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The device is also quite simple to use: it is installed on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors combined with GNSS, but that approach is costly. To overcome this problem, this research develops low-cost mobile mapping technology using webcam camera sensors added to GNSS and IMU sensors. The cameras used are 3 MP with 720p resolution and a 78° diagonal field of view. The principle of the invention is to integrate four webcam sensors with GNSS and an IMU to acquire photo data tagged with location (latitude, longitude) and orientation (roll, pitch, yaw). The device is mounted on a tripod with a vacuum suction mount attached to the car's roof so that it does not fall off while driving. The output data are analyzed with artificial intelligence to reduce similar data (cosine similarity) and then classify building types. Data reduction eliminates near-duplicate images while retaining the image that shows the complete house, so that it can be processed for the later classification of buildings.
The AI method used is transfer learning utilizing the pre-trained VGG-16 model. Analysis of the similarity data found that the data reduction reached 50%. Georeferencing is then done using the Google Maps API to obtain address information matching the coordinates in the data. Finally, a geographic join links the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
Keywords: mobile mapping, GNSS, IMU, similarity, classification
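The cosine-similarity data-reduction step can be sketched as follows; the 128-dimensional feature vectors and the 0.9 threshold are assumptions for illustration, not the study's actual descriptors or cutoff:

```python
import numpy as np

def deduplicate(features, threshold=0.9):
    """Greedily keep a crop only if its cosine similarity to every
    already-kept crop stays below the threshold."""
    kept_vecs, kept_idx = [], []
    for i, f in enumerate(features):
        f = f / np.linalg.norm(f)          # unit-normalize once
        if all(float(f @ k) < threshold for k in kept_vecs):
            kept_vecs.append(f)
            kept_idx.append(i)
    return kept_idx

rng = np.random.default_rng(1)
base = rng.normal(size=(3, 128))            # three distinct "crops"
near_dup = base + rng.normal(scale=0.01, size=base.shape)
features = np.vstack([base, near_dup])      # six crops, three dup pairs
survivors = deduplicate(features)
# The three near-duplicates are dropped, a 50% reduction, in line
# with the reduction rate the abstract reports.
```

In the described system the feature vectors would come from the VGG-16 backbone rather than random noise, but the greedy thresholding logic is the same.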
Procedia PDF Downloads 84
198 Surrogacy in India: Emerging Business or Disguised Human Trafficking
Authors: Priya Sepaha
Abstract:
Commercial surrogacy refers to a contract in which a woman carries a pregnancy for intended parents. There are two types of surrogacy. In the first, traditional surrogacy, the surrogate is artificially inseminated with the sperm of a donor or the intended father and carries the fetus to birth. In the second, gestational surrogacy, the egg and sperm of the intended parents are collected for artificial fertilization through the in vitro fertilization (IVF) technique, and after the embryo forms, it is transferred into the womb of a surrogate mother with the help of assisted reproductive technology. Surrogacy has become so widespread in India that the country has been nicknamed the "rent-a-womb" capital of the world, owing to relatively low costs and the lack of stringent regulatory legislation. The legal aspects surrounding surrogacy are complex, diverse, and mostly unsettled. Although surrogacy appears to benefit the parties concerned, certain sensitive issues must be addressed to ensure ample protection for all stakeholders. Commercial surrogacy is an emerging business and a new means of human trafficking, particularly in India. Poor and illiterate women are often lured into such deals by a spouse or broker to earn easy money. Traffickers also at times use force, fraud, or coercion to intimidate prospective surrogate mothers, and a major share of the money from a covert surrogacy agreement is taken by the brokers. The Law Commission of India has specifically reviewed the issue, as India is emerging as a major global surrogacy destination. In the Manji case in 2008, the Supreme Court of India held that commercial surrogacy can be permitted with certain restrictions but directed the legislature to pass an appropriate law governing surrogacy in India. The draft Assisted Reproductive Technology (ART) Bill, 2010 is still pending approval.
At present, the surrogacy contract between the parties and the ART clinic guidelines are perhaps the only guiding force. The Immoral Traffic (Prevention) Act, 1956 and Sections 366A and 372 of the Indian Penal Code, 1860 are perhaps the only existing laws dealing with human trafficking, yet none of these provisions specifically addresses trafficking for the purpose of commercial surrogacy. India remains one of the few countries that still allow commercial surrogacy. International surrogacy involves bilateral issues, where the laws of both nations must be in harmony to ensure that the concerns and interests of the parties involved are amicably resolved. There is an urgent need to pass a comprehensive law incorporating the latest developments in this field, in order to make surrogacy ethical on the one hand and to curb disguised human trafficking on the other.
Keywords: business, human trafficking, legal, surrogacy
Procedia PDF Downloads 343
197 The Impact of Artificial Intelligence on Autism Attitudes and Laws
Authors: Randa Reda Luke Waheeb
Abstract:
Procedia PDF Downloads 55