Search results for: dimensional accuracy
1095 Being an English Language Teaching Assistant in China: Understanding the Identity Evolution of Early-Career English Teacher in Private Tutoring Schools
Authors: Zhou Congling
Abstract:
The integration of private tutoring has emerged as an indispensable facet in the acquisition of language proficiency beyond formal educational settings. Notably, there has been a discernible surge in the demand for private English tutoring, specifically geared towards the preparation for internationally recognized gatekeeping examinations, such as IELTS, TOEFL, GMAT, and GRE. This trajectory has engendered an escalating need for English Language Teaching Assistants (ELTAs) operating within the realm of Private Tutoring Schools (PTSs). The objective of this study is to unravel the intricate process by which these ELTAs formulate their professional identities in the nascent stages of their careers as English educators, as well as to delineate their perceptions regarding their professional trajectories. The construct of language teacher identity is inherently multifaceted, shaped by an amalgamation of individual, societal, and cultural determinants, exerting a profound influence on how language educators navigate their professional responsibilities. This investigation seeks to scrutinize the experiential and influential factors that mold the identities of ELTAs in PTSs, particularly post the culmination of their language-oriented academic programs. Employing a qualitative narrative inquiry approach, this study aims to delve into the nuanced understanding of how ELTAs conceptualize their professional identities and envision their future roles. The research methodology involves purposeful sampling and the conduct of in-depth, semi-structured interviews with ten participants. Data analysis will be conducted utilizing Barkhuizen’s Short Story Analysis, a method designed to explore a three-dimensional narrative space, elucidating the intricate interplay of personal experiences and societal contexts in shaping the identities of ELTAs. The anticipated outcomes of this study are poised to contribute substantively to a holistic comprehension of ELTA identity formation, holding practical implications for diverse stakeholders within the private tutoring sector. This research endeavors to furnish insights into strategies for the retention of ELTAs and the enhancement of overall service quality within PTSs.Keywords: China, English language teacher, narrative inquiry, private tutoring school, teacher identity
Procedia PDF Downloads 56
1094 Clinical and Radiological Features of Adenomyosis and Its Histopathological Correlation
Authors: Surabhi Agrawal Kohli, Sunita Gupta, Esha Khanuja, Parul Garg, P. Gupta
Abstract:
Background: Adenomyosis is a common gynaecological condition that affects menstruating women. Uterine enlargement, dysmenorrhoea, and menorrhagia are regarded as its cardinal clinical symptoms. Classically, MRI has been thought to enable a more accurate diagnosis than ultrasonography when adenomyosis is suspected. Materials and Methods: 172 subjects with complaints of heavy menstrual bleeding (HMB), dyspareunia, dysmenorrhoea, and chronic pelvic pain were enrolled after informed consent. A detailed history of the enrolled subjects was taken, followed by a clinical examination. These patients then underwent TVS, during which myometrial echo texture, the presence of myometrial cysts, and blurring of the endomyometrial junction were noted. MRI followed, recording junctional zone thickness and myometrial cysts. After hysterectomy, a histopathological diagnosis was obtained. Results: 78 participants were analysed. The mean age was 44.2 years, and 43.5% had a parity of 4 or more. Heavy menstrual bleeding was present in 97.8% and dysmenorrhoea in 93.48% of HPE-positive patients. Transvaginal sonography (TVS) and MRI had a sensitivity of 89.13% and 80.43%, a specificity of 90.62% and 84.37%, a positive likelihood ratio of 9.51 and 5.15, a negative likelihood ratio of 0.12 and 0.23, a positive predictive value of 93.18% and 88.1%, a negative predictive value of 85.29% and 75%, and a diagnostic accuracy of 89.74% and 82.5%, respectively. Comparison of sensitivity (p=0.289) and specificity (p=0.625) showed no statistically significant difference between TVS and MRI. Conclusion: The prevalence of adenomyosis was 30.23%. HMB with dysmenorrhoea and chronic pelvic pain aids diagnosis. TVS (endomyometrial junction blurring) is both sensitive and specific in diagnosing adenomyosis without the need for an additional diagnostic tool. Both TVS and MRI are equally efficient; however, because of certain additional advantages of TVS over MRI, TVS may be used as the first choice of imaging. MRI may be used additionally in difficult cases as well as in patients with existing co-pathologies.
Keywords: adenomyosis, heavy menstrual bleeding, MRI, TVS
Procedia PDF Downloads 498
1093 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.Keywords: large language models, prompt design, data analysis, code generation
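A minimal sketch of the kind of prompt template and feedback loop described above. The `call_llm` helper is a hypothetical stand-in for whatever LLM API is used, and the template fields and retry logic are illustrative assumptions rather than the authors' exact design.

```python
# Sketch of a prompt template for LLM-driven data-analysis code generation.
# `call_llm` is a hypothetical stand-in for an actual LLM API client.

PROMPT_TEMPLATE = """You are a data analyst. Write executable Python (pandas/matplotlib) code.

Dataset description:
{schema}

Analysis request (natural language):
{request}

Constraints:
- Return only runnable code, no prose.
- Load the data from '{path}' and name the final result `result`.
"""

def build_prompt(schema: str, request: str, path: str) -> str:
    """Fill the template with a precise schema, the user's request, and the file path."""
    return PROMPT_TEMPLATE.format(schema=schema, request=request, path=path)

def generate_analysis_code(schema, request, path, call_llm, max_retries=2):
    """Ask the model for code; on failure, feed the error back as immediate feedback."""
    prompt = build_prompt(schema, request, path)
    for _ in range(max_retries + 1):
        code = call_llm(prompt)             # hypothetical LLM call
        try:
            compile(code, "<llm>", "exec")  # cheap syntax check before execution
            return code
        except SyntaxError as err:
            # Feedback-and-adjust loop: append the error and ask for a corrected version.
            prompt += f"\n\nThe previous code failed with: {err}. Please return corrected code only."
    raise RuntimeError("LLM did not return syntactically valid code")
```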
Procedia PDF Downloads 43
1092 E-Learning Recommender System Based on Collaborative Filtering and Ontology
Authors: John Tarus, Zhendong Niu, Bakhti Khadidja
Abstract:
In recent years, e-learning recommender systems have attracted great attention as a solution to the problem of information overload in e-learning environments and a means of providing relevant recommendations to online learners. E-learning recommenders continue to play an increasing educational role in aiding learners to find appropriate learning materials to support the achievement of their learning goals. Although general recommender systems have recorded significant success in solving the problem of information overload in e-commerce domains and providing accurate recommendations, e-learning recommender systems still face issues arising from differences in learner characteristics such as learning style, skill level, and study level. Conventional recommendation techniques such as collaborative filtering and content-based filtering deal with only two types of entities, namely users and items with their ratings, and do not take learner characteristics into account in the recommendation process. Therefore, conventional recommendation techniques cannot make accurate and personalized recommendations in e-learning environments. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend personalized learning materials to online learners. The ontology is used to incorporate learner characteristics into the recommendation process alongside the ratings, while collaborative filtering predicts ratings and generates recommendations. Furthermore, ontological knowledge is used by the recommender system at the initial stages, in the absence of ratings, to alleviate the cold-start problem. Evaluation results show that our proposed recommendation technique outperforms collaborative filtering on its own in terms of personalization and recommendation accuracy.
Keywords: collaborative filtering, e-learning, ontology, recommender system
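A minimal sketch of how an ontology-derived learner similarity could be blended with rating-based collaborative filtering, in the spirit of the hybrid described above. The profile attributes, the weighting factor `alpha`, and the cold-start fallback are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# ratings: learners x items matrix (0 = unrated); profiles: one-hot encoded
# learner characteristics (e.g. learning style, skill level, study level).

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def hybrid_similarity(u, v, ratings, profiles, alpha=0.6):
    """Blend rating-based similarity with ontology-based profile similarity."""
    sim_cf = cosine(ratings[u], ratings[v])
    sim_onto = cosine(profiles[u], profiles[v])
    # Cold start: if learner u has no ratings yet, rely on the ontology alone.
    if ratings[u].sum() == 0:
        return sim_onto
    return alpha * sim_cf + (1 - alpha) * sim_onto

def predict_rating(u, item, ratings, profiles, k=5):
    """Weighted average of the k most similar learners who rated the item."""
    rated = [v for v in range(len(ratings)) if v != u and ratings[v, item] > 0]
    sims = sorted(((hybrid_similarity(u, v, ratings, profiles), v) for v in rated),
                  reverse=True)[:k]
    num = sum(s * ratings[v, item] for s, v in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den if den else 0.0
```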
Procedia PDF Downloads 386
1091 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics
Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur
Abstract:
Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance, and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, negating potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates the process of connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable, and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously capture images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them for typical factory HMIs and realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics
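A rough sketch of the HMI-reading step described above, using OpenCV and Tesseract via pytesseract. The ROI coordinates, character whitelist, and the numeric-range plausibility check stand in for the meta-data-driven validation and are assumptions for illustration.

```python
import re
import cv2
import pytesseract

# Fixed region of the HMI where the streaming value is displayed (assumed coordinates).
VALUE_ROI = (120, 80, 260, 40)  # x, y, width, height

def read_hmi_value(frame, roi=VALUE_ROI, valid_range=(0.0, 500.0)):
    """Crop the streaming-data region, binarize it, OCR it, and sanity-check the result."""
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    # High-contrast HMIs usually binarize well; Otsu picks the threshold automatically.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(
        binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
    match = re.search(r"\d+(\.\d+)?", text)
    if not match:
        return None
    value = float(match.group())
    # Context check: reject OCR results outside the physically plausible range
    # known from the fixed meta-data region of the HMI.
    return value if valid_range[0] <= value <= valid_range[1] else None

# Example use on a single captured frame:
# frame = cv2.imread("hmi_frame.png"); print(read_hmi_value(frame))
```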
Procedia PDF Downloads 111
1090 Molecular Dynamics Simulation of Irradiation-Induced Damage Cascades in Graphite
Authors: Rong Li, Brian D. Wirth, Bing Liu
Abstract:
Graphite is the matrix and structural material in high-temperature gas-cooled reactors and exhibits a pronounced irradiation response. It is therefore of significant importance to analyze defect production and evaluate the behavior of graphite under irradiation. A vast experimental literature exists for graphite on dimensional change, mechanical properties, and thermal behavior; however, the atomistic perspective has been addressed by remarkably few molecular dynamics simulations of the irradiation response in graphite. In this paper, irradiation-induced damage cascades in graphite were investigated with molecular dynamics simulation. Statistical results for the graphite defects were obtained by sampling a wide energy range (1–30 keV) and performing 10 different runs for every cascade simulation, with different random number generator seeds for the velocity-scaling thermostat. The chemical bonding in carbon was described using the adaptive intermolecular reactive empirical bond-order (AIREBO) potential coupled with the standard Ziegler–Biersack–Littmark (ZBL) potential for close-range pair interactions. This study focused on analyzing the number of defects, the final cascade morphology and the spatial distribution of defect clusters, length-scale cascade properties such as the cascade length and the range of the primary knock-on atom (PKA), and the variation of graphite mechanical properties. It can be concluded that the number of surviving Frenkel pairs increased remarkably with increasing initial PKA energy but did not exhibit a thermal spike at the slightly lower energies considered here. The PKA range and cascade length scale approximately linearly with energy, which indicates that increasing the initial PKA energy comes at considerable computational cost, as for the 30 keV cascades in this study. The cascade morphology and the spatial distribution of defect clusters were mainly related to the PKA energy, while the temperature effect was relatively negligible. The simulations are in agreement with known experimental results and the Kinchin-Pease model, which can help to understand graphite damage cascades and lifetime under irradiation and provide direction for the design of such structural materials in future reactors.
Keywords: graphite damage cascade, molecular dynamics, cascade morphology, cascade distribution
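For context, a small sketch of the NRT (modified Kinchin-Pease) displacement estimate against which cascade surviving-defect counts are typically compared. The 30 eV displacement threshold is an assumed, commonly quoted value for graphite, not a result of this study, and using the full PKA energy rather than the damage energy is a simplification.

```python
def nrt_displacements(pka_energy_ev: float, e_d_ev: float = 30.0) -> float:
    """NRT (modified Kinchin-Pease) estimate of displaced atoms for a given PKA energy.

    Strictly, the damage energy (PKA energy minus electronic losses) should be used;
    the full PKA energy is used here as a simplification.
    """
    if pka_energy_ev < e_d_ev:
        return 0.0
    if pka_energy_ev < 2.5 * e_d_ev:
        return 1.0
    return 0.8 * pka_energy_ev / (2.0 * e_d_ev)

# Example: estimates over the 1-30 keV range sampled in the cascade simulations.
for e_kev in (1, 5, 10, 20, 30):
    print(f"{e_kev:>2} keV PKA -> ~{nrt_displacements(e_kev * 1000):.0f} displaced atoms (NRT)")
```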
Procedia PDF Downloads 155
1089 Assessment of Forest Above Ground Biomass Through Linear Modeling Technique Using SAR Data
Authors: Arjun G. Koppad
Abstract:
The study was conducted in Joida taluk of Uttara Kannada district, Karnataka, India, to assess land use land cover (LULC) and forest aboveground biomass using L-band SAR data. The study area contains dense, moderately dense, and sparse forests. The sampled area was 0.01 percent of the forest area, with 30 sampling plots selected randomly. The point-centred quarter (PCQ) method was used to select trees, and the tree growth parameters, viz. tree height, diameter at breast height (DBH), and diameter at the tree base, were collected. Tree crown density was measured with a densitometer. The biomass of each sample plot was estimated using the standard formula. In this study, the LULC classification was done using the Freeman-Durden, Yamaguchi, and Pauli polarimetric decompositions. It was observed that the Freeman-Durden decomposition showed better LULC classification, with an accuracy of 88 percent. An attempt was then made to estimate aboveground biomass from the SAR backscatter. Fully polarimetric quad-pol ALOS-2 PALSAR-2 L-band data (HH, HV, VH, and VV) were used. A SAR backscatter-based regression model was implemented to retrieve forest aboveground biomass for the study area. Cross-polarization (HV) showed a good correlation with forest aboveground biomass. Multiple linear regression analysis was done to estimate the aboveground biomass of the natural forest areas of Joida taluk. Among the different polarization combinations (HH & HV, VV & HH, HV & VH, VV & VH), the combination of HH and HV polarization shows a good correlation between field and predicted biomass. The RMSE and R² values for HH & HV and HH & VV were 78 t/ha and 0.861, and 81 t/ha and 0.853, respectively. Hence the model can be recommended for estimating AGB for dense, moderately dense, and sparse forests.
Keywords: forest, biomass, LULC, back scatter, SAR, regression
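A compact sketch of the kind of multiple linear regression used to relate dual-polarization backscatter to plot-level biomass. The file name, column names, and train/test split are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# plots.csv is assumed to hold plot-level backscatter (dB) and field biomass (t/ha).
df = pd.read_csv("plots.csv")          # columns: sigma0_HH, sigma0_HV, agb_t_ha
X = df[["sigma0_HH", "sigma0_HV"]].values
y = df["agb_t_ha"].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"AGB = {model.intercept_:.1f} + {model.coef_[0]:.1f}*HH + {model.coef_[1]:.1f}*HV")
print(f"RMSE = {rmse:.1f} t/ha, R^2 = {r2_score(y_te, pred):.3f}")
```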
Procedia PDF Downloads 28
1088 Unlocking the Genetic Code: Exploring the Potential of DNA Barcoding for Biodiversity Assessment
Authors: Mohammed Ahmed Ahmed Odah
Abstract:
DNA barcoding is a crucial method for assessing and monitoring species diversity amidst escalating threats to global biodiversity. The author explores DNA barcoding's potential as a robust and reliable tool for biodiversity assessment. It begins with a comprehensive review of existing literature, delving into the theoretical foundations, methodologies and applications of DNA barcoding. The suitability of various DNA regions, like the COI gene, as universal barcodes is extensively investigated. Additionally, the advantages and limitations of different DNA sequencing technologies and bioinformatics tools are evaluated within the context of DNA barcoding. To evaluate the efficacy of DNA barcoding, diverse ecosystems, including terrestrial, freshwater and marine habitats, are sampled. Extracted DNA from collected specimens undergoes amplification and sequencing of the target barcode region. Comparison of the obtained DNA sequences with reference databases allows for the identification and classification of the sampled organisms. Findings demonstrate that DNA barcoding accurately identifies species, even in cases where morphological identification proves challenging. Moreover, it sheds light on cryptic and endangered species, aiding conservation efforts. The author also investigates patterns of genetic diversity and evolutionary relationships among different taxa through the analysis of genetic data. This research contributes to the growing knowledge of DNA barcoding and its applicability for biodiversity assessment. The advantages of this approach, such as speed, accuracy and cost-effectiveness, are highlighted, along with areas for improvement. By unlocking the genetic code, DNA barcoding enhances our understanding of biodiversity, supports conservation initiatives and informs evidence-based decision-making for the sustainable management of ecosystems.Keywords: DNA barcoding, biodiversity assessment, genetic code, species identification, taxonomic resolution, next-generation sequencing
Procedia PDF Downloads 27
1087 Endoscopies After a 5-Year Follow-Up on Central Venous Access Receiving Home (HPN) Patients with Prophylaxis at Tertiary Healthcare Facility
Authors: Michelle Themalil, Celia Bueno, Rulla Al- Araji
Abstract:
Objective and Study: There are no established guidelines for antibiotic prophylaxis in children with central venous catheters (CVCs) on home parenteral nutrition (HPN), leading to varying practices across UK Centres. We hypothesize that children with intestinal failure are at increased risk for bacteraemia due to altered anatomy, dysmotility, inflammation, biofilm formation in long-term CVCs, and the use of central lines during procedures. Given the bacteraemia rates of up to 8% in upper and 25% in lower endoscopy for adults without central lines, we argue that prophylactic antibiotics are reasonable, given the increased risks faced by this high-risk group of children. Methods: We conducted a five-year review of patients with central venous access receiving home parenteral nutrition (HPN) who underwent endoscopies with antibiotic prophylaxis at our center (tertiary). We documented and analyzed post-procedure infections and their associated risk factors. Results: A total of 15 patients on HPN underwent 29 endoscopic procedures, including 4 upper, 9 combined upper and lower, and 16 combined upper, lower, and ileoscopy. Confirmed infection rates remained at 0% up to 28 days post-procedure. The agreed-upon prophylaxis regimen was implemented, with ciprofloxacin and metronidazole administered as the primary antibiotics. Notably, only 51.7% of patients received a peripheral cannula despite recommendations to avoid central line use during anesthesia, and 20.6% had small intestinal bacterial overgrowth. Conclusions: This study is the first to investigate post-endoscopy infection rates in pediatric patients on HPN. Despite a small sample size, we observed a 0% infection rate, significantly lower than reported rates in adults. These findings suggest that further research is warranted to explore the implications of antibiotic prophylaxis in this unique patient cohort and to establish guidelines that may enhance patient safety during endoscopic procedures.Keywords: post endosopy infections, central venous access, home parenteral nutrition, intestinal failure
Procedia PDF Downloads 14
1086 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies
Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal
Abstract:
Advances in data capturing from outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (Light detection and ranging) provides a new method to build outcrop based reservoir models, which provide a crucial piece of information to understand heterogeneities in sandstone facies with high-resolution images and data set. This study presents the detailed application of outcrop based sandstone facies connectivity model by acquiring information gathered from traditional fieldwork and processing detailed digital point-cloud data from LiDAR to develop an intermediate small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiScan pro (v1.8.0) was used in digital data collection and post-processing with an accuracy of 0.01 m and point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point-clouds of different sets of sandstone facies with well-marked top and bottom boundaries in conjunction with field sedimentology. This will provide highly accurate qualitative sandstone facies connectivity model which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop based static connectivity model, which can be an analogue to subsurface reservoir studies.Keywords: LiDAR, outcrop, high resolution, sandstone faceis, connectivity model
Procedia PDF Downloads 228
1085 Thermal Image Segmentation Method for Stratification of Freezing Temperatures
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
The study uses an image analysis technique employing thermal imaging to measure the percentage of areas at various temperatures on a freezing surface. An image segmentation method using threshold values is applied to a sequence of images recording the freezing process. The phenomenon is transient, and temperatures vary rapidly as the surface reaches the freezing point and completes the freezing process. Freezing salt water is subject to salt rejection, which makes the freezing point dynamic and dependent on the salinity at the phase interface. For a specific freezing area, nucleation starts from one side and ends at the other, which causes a dynamic, transient temperature field in that area. Thermal cameras are able to reveal differences in temperature owing to their sensitivity to infrared radiance. Using an experimental setup, a video is recorded by a thermal camera to monitor radiance and temperatures during the freezing process. Image processing techniques are applied to all frames to detect and classify temperatures on the surface. An image segmentation method is used to find contours of equal temperature on the icing surface. Each segment is obtained from the temperature range appearing in the image and the corresponding pixel values. Using the contours extracted from the image and the camera parameters, stratified areas with different temperatures are calculated. To observe temperature contours on the icing surface with the thermal camera, a salt water sample is dropped onto a cold surface at a temperature of -20°C. A thermal video is recorded for 2 minutes to observe the temperature field. Examining the results obtained by the method against the experimental observations verifies the accuracy and applicability of the method.
Keywords: ice contour boundary, image processing, image segmentation, salt ice, thermal image
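A simplified sketch of the threshold-based stratification step: raw thermal-camera counts are mapped to temperatures and binned into bands whose area percentages are reported. The radiometric scaling and the band edges are assumptions for illustration.

```python
import numpy as np

def stratify_temperatures(frame_raw, t_min=-25.0, t_max=5.0, bands=None):
    """Convert raw thermal-camera counts to temperature and report area % per band."""
    # Assumed linear radiometric scaling from raw counts to degrees Celsius.
    temp = t_min + (frame_raw.astype(float) / frame_raw.max()) * (t_max - t_min)
    if bands is None:
        bands = [(-25, -20), (-20, -15), (-15, -10), (-10, -5), (-5, 0), (0, 5)]
    total = temp.size
    areas = {}
    for lo, hi in bands:
        mask = (temp >= lo) & (temp < hi)     # threshold segmentation of one band
        areas[f"{lo}..{hi} C"] = 100.0 * mask.sum() / total
    return areas

# Example on a synthetic 480x640 frame:
frame = np.random.randint(0, 2**14, size=(480, 640))
for band, pct in stratify_temperatures(frame).items():
    print(f"{band}: {pct:.1f}% of surface")
```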
Procedia PDF Downloads 322
1084 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or undiagnosed whatsoever due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNN) have recently gained a high interest in ophthalmology for its computer-imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model to analyze these retinal images that provide a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of neural network architecture that was trained from a dataset of over 20,000 real-world OCT images to train the robust model to utilize residual neural networks with cyclic pooling. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, therefore facilitating earlier treatment, which results in improved post-treatment outcomes.Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
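A condensed sketch of a small residual CNN for OCT scan classification of the kind described. The layer sizes, the four example classes, and the use of plain average pooling are illustrative assumptions; the paper's cyclic-pooling variant is not reproduced here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(ch), nn.BatchNorm2d(ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)          # skip connection = residual learning

class OCTNet(nn.Module):
    """Tiny residual classifier for grayscale OCT scans (e.g. 4 assumed disease classes)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 7, stride=2, padding=3),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

logits = OCTNet()(torch.randn(2, 1, 224, 224))   # batch of 2 dummy scans
print(logits.shape)                              # torch.Size([2, 4])
```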
Procedia PDF Downloads 183
1083 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method
Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati
Abstract:
Two-phase and multi-phase flows are common flow types in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is the one that two immiscible fluids flow in the vicinity of each other. In this type of flow, fluid properties (e.g. density, viscosity, and temperature) are different at the two sides of the interface of the two fluids. The most challenging part of the numerical simulation of two-phase flow is to determine the location of interface accurately. In the present work, a coupled interface tracking algorithm is developed based on Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in presence of gravity is derived, and then, the results of the numerical simulation of this flow are compared with analytical solution at various flow conditions. The results of the simulations show good accuracy of the algorithm despite using a nearly coarse and uniform grid. Temporal variations of interface profile toward the steady-state solution show that a greater difference between fluids properties (especially dynamic viscosity) will result in larger traveling waves. Gravity effect studies also show that favorable gravity will result in a reduction of heavier fluid thickness and adverse gravity leads to increasing it with respect to the zero gravity condition. However, the magnitude of variation in favorable gravity is much more than adverse gravity.Keywords: coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow
Procedia PDF Downloads 316
1082 Examination of Forged Signatures Printed by Means of Fabrication in Terms of Their Relation to the Perpetrator
Authors: Salim Yaren, Nergis Canturk
Abstract:
Signatures are handwritten signs made by a person to confirm values such as information, amount, meaning, time, and undertaking that bear on a document. By signing a document, a person is understood to accept and approve its content and the accuracy of the information it contains. Forged signatures are produced by a forger without knowing or seeing the original signature of the person to be imitated, while attempting to hide the typical characteristics of his/her own signature. Forged signatures are often formed starting from the initials of the first and last names of the persons whose signature is to be faked, and any similarities to the genuine signatures are completely random. Within the scope of the study, both original signatures and forged signatures referring to 5 imaginary people were collected from 100 people. These signatures were compared against 14 signature analysis criteria by 2 signature analysis experts other than the researcher. Expert 1, with 9 years of experience in the field, evaluated the signatures of 39 (39%) people correctly and of 25 (25%) people incorrectly, and could not make an evaluation for the signatures of 36 (36%) people. Expert 2, with 16 years of experience in the field, evaluated the signatures of 49 (49%) people correctly and of 28 (28%) people incorrectly, and could not make an evaluation for the signatures of 23 (23%) people. Forged signatures produced by 24 (24%) people were matched correctly by both experts, those produced by 8 (8%) people were matched incorrectly, and those produced by 12 (12%) people could not be decided by either expert. Signature analysis is a subjective topic, so analyses and comparisons take shape according to the education, knowledge, and experience of the expert. Consequently, given that 39% success was achieved by the expert with 9 years of professional experience and 49% success by the expert with 16 years of professional experience, the success rate is seen to be directly proportional to the knowledge and experience of the expert.
Keywords: forensic signature, forensic signature analysis, signature analysis criteria, forged signature
Procedia PDF Downloads 124
1081 Electronic Payment Recording with Payment History Retrieval Module: A System Software
Authors: Adrian Forca, Simeon Cainday III
Abstract:
The Electronic Payment Recording with Payment History Retrieval Module is developed intendedly for the College of Science and Technology. This system software innovates the manual process of recording the payments done in the department through the development of electronic payment recording system software shifting from the slow and time-consuming procedure to quick yet reliable and accurate way of recording payments because it immediately generates receipts for every transaction. As an added feature to its software process, generation of recorded payment report is integrated eliminating the manual reporting to a more easy and consolidated report. As an added feature to the system, all recorded payments of the students can be retrieved immediately making the system transparent and reliable payment recording software. Viewing the whole process, the system software will shift from the manual process to an organized software technology because the information will be stored in a logically correct and normalized database. Further, the software will be developed using the modern programming language and implement strict programming methods to validate all users accessing the system, evaluate all data passed into the system and information retrieved to ensure data accuracy and reliability. In addition, the system will identify the user and limit its access privilege to establish boundaries of the specific access to information allowed for the store, modify, and update making the information secure against unauthorized data manipulation. As a result, the System software will eliminate the manual procedure and replace with an innovative modern information technology resulting to the improvement of the whole process of payment recording fast, secure, accurate and reliable software innovations.Keywords: collection, information system, manual procedure, payment
Procedia PDF Downloads 168
1080 An Integrated Label Propagation Network for Structural Condition Assessment
Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong
Abstract:
Deep-learning-driven approaches based on vibration responses have attracted increasing attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is relatively costly and even inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to diffuse labels from continuously generated measurements of the intact structure to unlabeled measurements of damage scenarios. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering; its architecture and mechanism are elaborated. With a carefully designed architecture and specific strategies for improving performance, the present network extends the advantages of self-supervised representation learning, unsupervised fuzzy clustering, and supervised classification into an integrated framework for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in the numerical validations and 0.86 on average in the laboratory case studies. It should be noted that the training procedure of all models in the network strictly does not rely on any labeled data from damage scenarios, but only on several samples from the intact structure, which indicates significant superiority in model adaptability and practical applicability.
Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation
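A bare-bones sketch of the pseudo-label propagation idea: fuzzy c-means run on autoencoder features yields soft memberships, and the most confident assignments become pseudo-labels for the downstream classifier. The feature dimensionality, cluster count, fuzzifier, and confidence cut-off are assumed values, and `encode` stands in for the trained autoencoder's encoder.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on feature vectors X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))      # soft memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# `encode` is assumed to be the trained autoencoder's encoder; X_vib would hold
# damage-sensitive features extracted from measured vibration responses.
# features = encode(X_vib)
features = np.random.randn(200, 16)                # stand-in features for the sketch
U, _ = fuzzy_c_means(features, n_clusters=3)
pseudo_labels = U.argmax(axis=1)                   # propagate hard pseudo-labels
confidence = U.max(axis=1)                         # keep only confident samples
train_idx = np.where(confidence > 0.8)[0]
print(f"{len(train_idx)} samples receive confident pseudo-labels for the classifier")
```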
Procedia PDF Downloads 98
1079 Architectural Adaptation for Road Humps Detection in Adverse Light Scenario
Authors: Padmini S. Navalgund, Manasi Naik, Ujwala Patil
Abstract:
Road hump is a semi-cylindrical elevation on the road made across specific locations of the road. The vehicle needs to maneuver the hump by reducing the speed to avoid car damage and pass over the road hump safely. Road Humps on road surfaces, if identified in advance, help to maintain the security and stability of vehicles, especially in adverse visibility conditions, viz. night scenarios. We have proposed a deep learning architecture adaptation by implementing the MISH activation function and developing a new classification loss function called "Effective Focal Loss" for Indian road humps detection in adverse light scenarios. We captured images comprising of marked and unmarked road humps from two different types of cameras across South India to build a heterogeneous dataset. A heterogeneous dataset enabled the algorithm to train and improve the accuracy of detection. The images were pre-processed, annotated for two classes viz, marked hump and unmarked hump. The dataset from these images was used to train the single-stage object detection algorithm. We utilised an algorithm to synthetically generate reduced visible road humps scenarios. We observed that our proposed framework effectively detected the marked and unmarked hump in the images in clear and ad-verse light environments. This architectural adaptation sets up an option for early detection of Indian road humps in reduced visibility conditions, thereby enhancing the autonomous driving technology to handle a wider range of real-world scenarios.Keywords: Indian road hump, reduced visibility condition, low light condition, adverse light condition, marked hump, unmarked hump, YOLOv9
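A short sketch of the two architectural ingredients named above: the Mish activation and a focal-loss-style classification loss. The exact "Effective Focal Loss" modification is not reproduced here; a standard focal loss with assumed alpha/gamma values is shown instead, and the three example classes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x))."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Standard focal loss, useful when hump pixels/boxes are rare relative to
    background in low-light frames (assumed hyperparameters)."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                       # probability of the true class
    return (alpha * (1 - p_t) ** gamma * ce).mean()

# Toy usage: 8 predictions over 3 classes (background / marked hump / unmarked hump).
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
print(Mish()(logits).shape, focal_loss(logits, targets).item())
```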
Procedia PDF Downloads 28
1078 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions are predicted as a node classification task using an open-source graph-based EHR dataset, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it closely represents real-world data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules, namely pyTigerGraph to retrieve nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN); self-supervised learning techniques with autoencoders are adopted to generate the node embeddings, and node classification is eventually performed using these embeddings. The model predicts patient conditions ranging from common to rare situations. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
Keywords: electronic health record, graph neural network, heterogeneous data, prediction
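A minimal sketch of node classification with PyTorch Geometric in the spirit described above. It assumes the EHR graph has already been pulled from TigerGraph and tensorized into a PyG `Data` object with node features `x`, an `edge_index`, condition labels `y`, and a `train_mask`; the hidden size and optimizer settings are illustrative.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ConditionClassifier(torch.nn.Module):
    """Two-layer GCN predicting a patient-condition class for each node."""
    def __init__(self, in_dim, hidden_dim, n_conditions):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, n_conditions)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)

def train(data, n_conditions, epochs=200):
    model = ConditionClassifier(data.num_node_features, 64, n_conditions)
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(epochs):
        model.train()
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        # Only nodes with known condition labels contribute to the loss.
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        opt.step()
    return model
```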
Procedia PDF Downloads 87
1077 Electrochemical APEX for Genotyping MYH7 Gene: A Low Cost Strategy for Minisequencing of Disease Causing Mutations
Authors: Ahmed M. Debela, Mayreli Ortiz , Ciara K. O´Sullivan
Abstract:
The completion of the Human Genome Project (HGP) has paved the way for mapping the diversity in the overall genome sequence, which helps to understand the genetic causes of inherited diseases and susceptibility to drugs or environmental toxins. Arrayed primer extension (APEX) is a microarray-based minisequencing strategy for screening disease-causing mutations. It is derived from Sanger DNA sequencing and uses fluorescently labelled dideoxynucleotides (ddNTPs) for termination of a growing DNA strand from a primer whose 3´ end is designed immediately upstream of a site where a single nucleotide polymorphism (SNP) occurs. The use of DNA polymerase gives APEX very high accuracy and specificity, which in turn makes it a method of choice for multiplex SNP detection. Coupling the high specificity of this method with the high sensitivity, low cost, and compatibility with miniaturization of electrochemical techniques would offer an excellent platform for the detection of mutations as well as the sequencing of DNA templates. We are developing an electrochemical APEX for the analysis of SNPs found in the MYH7 gene in a group of cardiomyopathy patients. The ddNTPs were labelled with four different redox-active compounds with four distinct potentials. Thiolated oligonucleotide probes were immobilised on gold and glassy carbon substrates, followed by hybridisation with complementary target DNA just adjacent to the base to be extended by the polymerase. Electrochemical interrogation was performed after the incorporation of the redox-labelled dideoxynucleotide. The work involved the synthesis and characterisation of the redox-labelled ddNTPs, optimisation and characterisation of surface functionalisation strategies, and the nucleotide incorporation assays.
Keywords: array based primer extension, labelled ddNTPs, electrochemical, mutations
Procedia PDF Downloads 246
1076 Development and Validation Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test
Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati
Abstract:
Rifampicin (RIF) is a semisynthetic antibiotic derivative of rifamycin B produced by Streptomyces mediterranei and has been used worldwide as a first-line drug prescribed throughout tuberculosis therapy. This study aims to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma and to apply it in a bioequivalence study. The chromatographic separation was achieved on an RP-C18 column (Lachrom Hitachi, 250 x 4.6 mm, 5 μm), utilizing a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min. Detection was carried out at 337 nm using a spectrophotometer. The developed method was statistically validated for linearity, accuracy, limit of detection, limit of quantitation, precision, and specificity. The specificity of the method was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limit of detection and limit of quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The standard curve was linear (r > 0.999) over a concentration range of 20.0–100.0 µg/mL. The mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed good reproducibility (R.S.D. 2.98% and 1.13%, respectively). Therefore, the method can be used for routine analysis of rifampicin in human plasma and in bioequivalence studies. The validated method was successfully applied in a pharmacokinetic and bioequivalence study of rifampicin tablets in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24) and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 hours, 29.16 ± 4.05 µg/mL.h, and 29.41 ± 4.07 µg/mL.h, respectively. For the reference formulation, the values were 5.04 ± 0.54 µg/mL, 1.31 hours, 27.20 ± 3.98 µg/mL.h, and 27.49 ± 4.01 µg/mL.h. From the bioequivalence study, the 90% CIs for the test/reference formulation ratios of the logarithmic transformations of Cmax and AUC(0-24) were 97.96-129.48% and 99.13-120.02%, respectively. According to the bioequivalence test guidelines of the European Commission-European Medicines Agency, it can be concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
Keywords: validation, HPLC, plasma, bioequivalence
Procedia PDF Downloads 291
1075 Numerical Evaluation of Lateral Bearing Capacity of Piles in Cement-Treated Soils
Authors: Reza Ziaie Moayed, Saeideh Mohammadi
Abstract:
Soft soil is used in many of civil engineering projects like coastal, marine and road projects. Because of low shear strength and stiffness of soft soils, large settlement and low bearing capacity will occur under superstructure loads. This will make the civil engineering activities more difficult and costlier. In the case of soft soils, improvement is a suitable method to increase the shear strength and stiffness for engineering purposes. In recent years, the artificial cementation of soil by cement and lime has been extensively used for soft soil improvement. Cement stabilization is a well-established technique for improving soft soils. Artificial cementation increases the shear strength and hardness of the natural soils. On the other hand, in soft soils, the use of piles to transfer loads to the depths of ground is usual. By using cement treated soil around the piles, high bearing capacity and low settlement in piles can be achieved. In the present study, lateral bearing capacity of short piles in cemented soils is investigated by numerical approach. For this purpose, three dimensional (3D) finite difference software, FLAC 3D is used. Cement treated soil has a strain hardening-softening behavior, because of breaking of bonds between cement agent and soil particle. To simulate such behavior, strain hardening-softening soil constitutive model is used for cement treated soft soil. Additionally, conventional elastic-plastic Mohr Coulomb constitutive model and linear elastic model are used for stress-strain behavior of natural soils and pile. To determine the parameters of constitutive models and also for verification of numerical model, the results of available triaxial laboratory tests on and insitu loading of piles in cement treated soft soil are used. Different parameters are considered in parametric study to determine the effective parameters on the bearing of the piles on cemented treated soils. In the present paper, the effect of various length and height of the artificial cemented area, different diameter and length of the pile and the properties of the materials are studied. Also, the effect of choosing a constitutive model for cemented treated soils in the bearing capacity of the pile is investigated.Keywords: bearing capacity, cement-treated soils, FLAC 3D, pile
Procedia PDF Downloads 128
1074 Effect of Damper Combinations in Series or Parallel on Structural Response
Authors: Ajay Kumar Sinha, Sharad Singh, Anukriti Sinha
Abstract:
Passive energy dissipation method for earthquake protection of structures is undergoing developments for improved performance. Combined use of different types of damping mechanisms has shown positive results in the near past. Different supplemental damping methods like viscous damping, frictional damping and metallic damping are being combined together for optimum performance. The conventional method of connecting passive dampers to structures is a parallel connection between the damper unit and structural member. Researchers are investigating coupling effect of different types of dampers. The most popular choice among the research community is coupling of viscous dampers and frictional dampers. The series and parallel coupling of these damping units are being studied for relative performance of the coupled system on response control of structures against earthquake. In this paper an attempt has been made to couple Fluid Viscous Dampers and Frictional Dampers in series and parallel to form a single unit of damping system. The relative performance of the coupled units has been studied on three dimensional reinforced concrete framed structure. The current theories of structural dynamics in practice for viscous damping and frictional damping have been incorporated in this study. The time history analysis of the structural system with coupled damper units, uncoupled damper units as well as of structural system without any supplemental damping has been performed in this study. The investigations reported in this study show significant improved performance of coupled system. A higher natural frequency of the system outside the forcing frequency has been obtained for structural systems with coupled damper units as against the other cases. The structural response of the structure in terms of storey displacement and storey drift show significant improvement for the case with coupled damper units as against the cases with uncoupled units or without any supplemental damping. The results are promising in terms of improved response of the structure with coupled damper units. Further investigations in this regard for a comparative performance of the series and parallel coupled systems will be carried out to study the optimum behavior of these coupled systems for enhanced response control of structural systems.Keywords: frictional damping, parallel coupling, response control, series coupling, supplemental damping, viscous damping
Procedia PDF Downloads 458
1073 Artificial Habitat Mapping in Adriatic Sea
Authors: Annalisa Gaetani, Anna Nora Tassetti, Gianna Fabi
Abstract:
Hydroacoustic technology is an efficient tool to study the sea environment: the most recent advancement in artificial habitat mapping involves acoustic systems to investigate fish abundance, distribution, and behavior in specific areas. Along with detailed, high-coverage bathymetric mapping of the seabed, the high-frequency Multibeam Echosounder (MBES) offers the potential of detecting the fine-scale distribution of fish aggregations, thanks to its ability to image the seafloor and the water column at the same time. By surveying the distribution of fish schools around artificial structures, MBES makes it possible to evaluate how their presence modifies the natural biological habitat over time in terms of fish attraction and abundance. In recent years, artificial habitat mapping campaigns have been carried out by CNR-ISMAR in the Adriatic Sea: fish assemblages aggregating at offshore gas platforms and artificial reefs have been systematically monitored with different kinds of methodologies. This work focuses on two case studies: a gas extraction platform founded at a depth of 80 meters in the central Adriatic Sea, 30 miles off the coast of Ancona, and the concrete and steel artificial reef of Senigallia, deployed by CNR-ISMAR about 1.2 miles offshore at a depth of 11.2 m. By relating the MBES data (metric dimensions of fish assemblages, shape, depth, density, etc.) with results from other methodologies, such as experimental fishing surveys and underwater video cameras, it has been possible to investigate the biological assemblage attracted by the artificial structures, hypothesizing which species populate the investigated area and their spatial arrangement around these structures. By processing MBES bathymetric and water-column data, 3D virtual scenes of the artificial habitats have been created, providing an intuitive depiction of their state and allowing their change over time to be evaluated in terms of the dimensional characteristics and depth disposition of fish schools. These MBES surveys play a leading part in the general multi-year programs carried out by CNR-ISMAR with the aim of assessing potential biological changes linked to human activities.
Keywords: artificial habitat mapping, fish assemblages, hydroacoustic technology, multibeam echosounder
Procedia PDF Downloads 260
1072 Evaluating the Implementation of a Quality Management System in the COVID-19 Diagnostic Laboratory of a Tertiary Care Hospital in Delhi
Authors: Sukriti Sabharwal, Sonali Bhattar, Shikhar Saxena
Abstract:
Introduction: COVID-19 molecular diagnostic laboratory is the cornerstone of the COVID-19 disease diagnosis as the patient’s treatment and management protocol depend on the molecular results. For this purpose, it is extremely important that the laboratory conducting these results adheres to the quality management processes to increase the accuracy and validity of the reports generated. We started our own molecular diagnostic setup at the onset of the pandemic. Therefore, we conducted this study to generate our quality management data to help us in improving on our weak points. Materials and Methods: A total of 14561 samples were evaluated by the retrospective observational method. The quality variables analysed were classified into pre-analytical, analytical, and post-analytical variables, and the results were presented in percentages. Results: Among the pre-analytical variables, sample leaking was the most common cause of the rejection of samples (134/14561, 0.92%), followed by non-generation of SRF ID (76/14561, 0.52%) and non-compliance to triple packaging (44/14561, 0.3%). The other pre-analytical aspects assessed were incomplete patient identification (17/14561, 0.11%), insufficient quantity of samples (12/14561, 0.08%), missing forms/samples (7/14561, 0.04%), samples in the wrong vials/empty VTM tubes (5/14561, 0.03%) and LIMS entry not done (2/14561, 0.01%). We are unable to obtain internal quality control in 0.37% of samples (55/14561). We also experienced two incidences of cross-contamination among the samples resulting in false-positive results. Among the post-analytical factors, a total of 0.07% of samples (11/14561) could not be dispatched within the stipulated time frame. Conclusion: Adherence to quality control processes is foremost for the smooth running of any diagnostic laboratory, especially the ones involved in critical reporting. Not only do the indicators help in keeping in check the laboratory parameters but they also allow comparison with other laboratories.Keywords: laboratory quality management, COVID-19, molecular diagnostics, healthcare
Procedia PDF Downloads 165
1071 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers as an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. Customer value is an indicator based on the ID-POS database for measuring customer loyalty to a store. In general, there are many supermarkets in a city, and other nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behavior using only the POS database of a single supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity between customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model incorporating all three methods is the most effective for evaluating the influence of other nearby supermarkets on customers' purchasing at the supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: customer value, Huff's gravity model, POS, retailer
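A small sketch of Huff's gravity model as it might be used to apportion a customer's patronage among competing supermarkets. The attractiveness proxy (floor area), the store set, and the exponents are assumed values for illustration only.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """P(customer chooses store j) = S_j^alpha / d_j^beta, normalised over stores."""
    utility = (attractiveness ** alpha) / (distances ** beta)
    return utility / utility.sum()

# One customer, three supermarkets: own chain plus two nearby competitors.
floor_area_m2 = np.array([2500.0, 1800.0, 3200.0])   # attractiveness proxy (assumed)
dist_km = np.array([1.2, 0.6, 2.5])                  # home-to-store distances

p = huff_probabilities(floor_area_m2, dist_km)
print(dict(zip(["own store", "competitor A", "competitor B"], p.round(3))))
# The competitor-choice probabilities could then enter the logistic regression
# as an attractiveness-style covariate instead of raw distance.
```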
Procedia PDF Downloads 124
1070 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks
Authors: Tugba Bayoglu
Abstract:
Aerodynamic parameter estimation is very crucial in missile design phase, since accurate high fidelity aerodynamic model is required for designing high performance and robust control system, developing high fidelity flight simulations and verification of computational and wind tunnel test results. However, in literature, there is not enough missile aerodynamic parameter identification study for three main reasons: (1) most air to air missiles cannot fly with constant speed, (2) missile flight test number and flight duration are much less than that of fixed wing aircraft, (3) variation of the missile aerodynamic parameters with respect to Mach number is higher than that of fixed wing aircraft. In addition to these challenges, identification of aerodynamic parameters for high wind angles by using classical estimation techniques brings another difficulty in the estimation process. The reason for this, most of the estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics. However, for the missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, in this study, it is aimed to solve nonlinear aerodynamic parameter identification problem for a supersonic air to air missile by using Artificial Neural Networks. The method proposed will be tested by using simulated data which will be generated with a six degree of freedom missile model, involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, by using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification
Procedia PDF Downloads 281
1069 Low-Cost Image Processing System for Evaluating Pavement Surface Distress
Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa
Abstract:
Most asphalt pavement condition evaluations use rating frameworks in which pavement distress is estimated by type, extent, and severity. Rating is carried out through the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost image-based technique for pavement distress analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for detecting potholes and cracks. Longitudinal cracking and potholes are detected using Fuzzy C-Means (FCM) clustering followed by a spectral clustering algorithm. The framework comprises three phases: image acquisition, processing, and feature extraction. A digital camera (GoPro) with a holder is used to capture pavement distress images from a moving vehicle. The FCM classifier and spectral clustering algorithms are used to compute features and classify the longitudinal cracking and potholes. The MATLAB R2016a image processing toolbox is used to evaluate the performance of distress detection on selected urban stretches of Bengaluru city, India. The semi-automated image processing framework identified longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images are validated against the actual dimensions, and the dimensional variability is about 0.46. A linear regression model, y = 1.171x - 0.155, is obtained between the existing (measured) and image-processing-derived distress areas. The R² value obtained from the best-fit line is 0.807, which is considered a large positive linear association. Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means
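The sketch below gives a minimal, self-contained Fuzzy C-Means implementation (not the paper's MATLAB code; the synthetic pixel intensities and the two-cluster setup are assumed for illustration) showing how distress pixels can be separated from the pavement background by clustering grayscale intensities.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means sketch: returns cluster centers and the membership matrix.
    X is (n_samples, n_features); m > 1 is the fuzziness exponent."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        ratio = dist[:, :, None] / dist[:, None, :]              # d_ik / d_jk for all pairs
        U_new = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)   # standard FCM membership update
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical usage: cluster pixel intensities of a grayscale pavement patch so that the
# darker cluster approximates crack/pothole pixels and the lighter one the pavement surface.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 300), rng.normal(180, 15, 1700)]).reshape(-1, 1)
centers, memberships = fuzzy_c_means(pixels, n_clusters=2)
distress_cluster = int(np.argmin(centers[:, 0]))
mask = memberships[:, distress_cluster] > 0.5  # soft memberships thresholded into a binary mask
print(centers.ravel(), mask.sum())
```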
Procedia PDF Downloads 182
1068 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models
Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana
Abstract:
The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to the accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of loadshedding and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one utilizing the original dataset and the other using the adjusted data. A comparative analysis is conducted to evaluate the forecast accuracy improvements resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system. Keywords: electricity demand forecasting, load shedding, demand side management, data science
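As an illustration of the adjustment step, the following pandas sketch (column names, timestamps, and curtailment figures are hypothetical, not the authors' data) adds an estimated curtailed load back onto metered hourly demand, so that a forecasting model can be trained on demand that is free of loadshedding distortion.

```python
import pandas as pd

# Hypothetical metered hourly demand (MW) and a loadshedding impact record for two hours.
metered = pd.DataFrame({
    "timestamp": pd.to_datetime([f"2024-01-01 {h:02d}:00" for h in range(6)]),
    "metered_mw": [980.0, 1010.0, 870.0, 860.0, 990.0, 1005.0],
})
loadshedding = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 02:00", "2024-01-01 03:00"]),
    "stage": [2, 2],
    "curtailed_mw": [130.0, 145.0],   # estimated demand shed in each hour
})

# Add the estimated curtailment back onto the metered values for affected hours only.
adjusted = metered.merge(loadshedding, on="timestamp", how="left").fillna({"curtailed_mw": 0.0})
adjusted["adjusted_mw"] = adjusted["metered_mw"] + adjusted["curtailed_mw"]

# A model trained on 'adjusted_mw' sees underlying demand; loadshedding can then be
# reapplied to its forecasts as a separate scenario layer.
print(adjusted[["timestamp", "metered_mw", "adjusted_mw"]])
```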
Procedia PDF Downloads 62
1067 Quantum Cum Synaptic-Neuronal Paradigm and Schema for Human Speech Output and Autism
Authors: Gobinathan Devathasan, Kezia Devathasan
Abstract:
Objective: To improve the current modified Broca-Wernicke-Lichtheim-Kussmaul speech schema and provide insight into autism. Methods: We reviewed the pertinent literature. Current findings, involving Brodmann areas 22, 46, 9, 44, 45, 6, and 4, are based on neuropathology and functional MRI studies. However, in primary autism there is no lucid explanation, and the changes described, whether in neuropathology or functional MRI, appear consequential. Findings: We put forward an enhanced model that may explain the enigma related to autism. Vowel output is subcortical and does not need cortical representation, whereas consonant speech is cortical in origin. Left lateralization is needed to commence the circuitry spin, as life has evolved with L-amino acids and the left spin of electrons. A fundamental species difference is that humans are capable of three-consonant syllables and bi-syllable expression, whereas cetaceans and songbirds are confined to single or dual consonants. The four key sites for speech are the superior auditory cortex, Broca's two areas, and the supplementary motor cortex. Using the Argand diagram and Riemann projection, we theorize that the Euclidean three-dimensional synaptic-neuronal circuits of speech are quantized to coherent waves, and decoherence then takes place at area 6 (spherical representation). In this quantum state, complex three-consonant languages are instantaneously integrated, and multiple languages can be learned, verbalized, and differentiated. Conclusion: We postulate that evolutionary human speech, unlike that of cetaceans and birds, is elevated to quantum interaction to achieve three-consonant/bi-syllable speech. In classical primary autism, the sudden switching off and on of speech noted in several cases could then be explained not by any anatomical lesion but by a failure of coherence. Area 6 projects directly into the prefrontal saccadic area (area 8), and this further explains the second primary feature in autism: lack of eye contact. The third feature, repetitive finger gestures, represented adjacent to the speech/motor areas, reflects actual attempts to communicate with the autistic child, akin to sign language for the deaf. Keywords: quantum neuronal paradigm, cetaceans and human speech, autism and rapid magnetic stimulation, coherence and decoherence of speech
Procedia PDF Downloads 195
1066 A Bottom-Up Approach for the Synthesis of Highly Ordered Fullerene-Intercalated Graphene Hybrids
Authors: A. Kouloumpis, P. Zygouri, G. Potsi, K. Spyrou, D. Gournis
Abstract:
Much of the research effort on graphene focuses on its use as a building block for the development of new hybrid nanostructures with well-defined dimensions and behavior, suitable for applications in gas storage, heterogeneous catalysis, gas/liquid separations, nanosensing, and biology, among others. Towards this aim, we describe here a new bottom-up approach, which combines self-assembly with the Langmuir-Schaefer technique, for the production of fullerene-intercalated graphene hybrid materials. This new method uses graphene nanosheets as a template for grafting various fullerene C60 molecules (pure C60, bromo-fullerenes C60Br24, and fullerols C60(OH)24) in a two-dimensional array, and allows for perfect layer-by-layer growth with control at the molecular level. Our film preparation approach involves a bottom-up, layer-by-layer process that includes the formation of a hybrid organo-graphene Langmuir film hosting fullerene molecules within its interlayer spacing. A dilute aqueous solution of chemically oxidized graphene (GO) was used as the subphase in the Langmuir-Blodgett deposition system, while an appropriate amino surfactant (which binds covalently to the GO) was applied to form the hybridized organo-GO. After the horizontal lift of a hydrophobic substrate, the surface of the GO platelets was modified by bringing the transferred Langmuir film into contact with a second amino surfactant solution (capable of interacting strongly with the fullerene derivatives). In the final step, the hybrid organo-graphene film was lowered into a solution of the appropriate fullerene derivative. Multilayer films were constructed by repeating this procedure. The hybrid fullerene-based thin films deposited on various hydrophobic substrates were characterized by X-ray diffraction (XRD) and X-ray reflectivity (XRR), FTIR and Raman spectroscopies, atomic force microscopy, and optical measurements. Acknowledgments: This research has been co-financed by the European Union (European Social Fund, ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) Research Funding Program THALES: Investing in knowledge society through the European Social Fund (no. 377285). Keywords: hybrids, graphene oxide, fullerenes, Langmuir-Blodgett, intercalated structures
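As an illustrative aside on the XRD characterization mentioned above, the short sketch below (the Cu Kα wavelength and all peak positions are assumed, not values from this work) applies Bragg's law to convert a (00l) reflection angle into an interlayer d-spacing, the quantity that would indicate whether fullerenes and surfactant have expanded the gallery between graphene oxide layers.

```python
import math

CU_KA_WAVELENGTH = 1.5406  # Angstrom, Cu K-alpha radiation (assumed X-ray source)

def d_spacing(two_theta_deg: float, order: int = 1, wavelength: float = CU_KA_WAVELENGTH) -> float:
    """Interlayer spacing from Bragg's law: n * lambda = 2 * d * sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

# Hypothetical peak positions: a (001) reflection near 10.9 deg 2-theta is typical of
# graphene oxide (~8 A), while a shift to lower angle would imply an expanded gallery.
d_go = d_spacing(10.9)     # ~8.1 A
d_hybrid = d_spacing(4.4)  # ~20 A, consistent with guest molecules between the layers
print(f"GO d-spacing: {d_go:.1f} A, hybrid d-spacing: {d_hybrid:.1f} A")
```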
Procedia PDF Downloads 327