Search results for: retrieval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 320

80 The Effect of Visual Access to Greenspace and Urban Space on a False Memory Learning Task

Authors: Bryony Pound

Abstract:

This study investigated how views of green or urban space affect learning performance. It provides evidence of the value of visual access to greenspace in work and learning environments, and builds on the extensive research into the cognitive and learning-related benefits of access to green and natural spaces, particularly in learning environments. It demonstrates that visual access to natural spaces while learning can produce statistically significantly faster responses than urban views after only 5 minutes of exposure. The primary hypothesis of this research was that a greenspace view would improve short-term learning. Participants were randomly assigned to either a view of parkland or of urban buildings from the same room. They completed a two-stage psychological test. The first stage consisted of a presentation of words from eight different categories (four manmade and four natural). Following this, a 2.5-minute break was given; participants were not prompted to look out of the window, but all were observed doing so. The second stage involved a word recognition/false memory test with three types of item. Type 1 items were presented words from each category; Type 2 items were non-presented words from those same categories; and Type 3 items were non-presented words from different categories. Participants were asked to respond with whether they thought they had seen the words before or not. Accuracy of responses and reaction times were recorded. The key finding was that reaction times for Type 2 words (the highest difficulty) differed significantly between the urban and green view conditions. Those with an urban view had slower reaction times for these words, so a view of greenspace resulted in better information retrieval for word and false memory recognition. Importantly, this difference was found after only 5 minutes of exposure to either view, during winter, and with a sample size of only 26. Greenspace views improve performance in a learning task. This provides a case for better visual access to greenspace in work and learning environments.
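
The reported group difference in reaction times is the kind of comparison an independent-samples t-test captures. The sketch below is illustrative only, using hypothetical reaction-time arrays in place of the study's (unpublished) data:

```python
import numpy as np
from scipy import stats

# Hypothetical reaction times (ms) for Type 2 words; the study's
# actual data are not public, so these arrays are placeholders.
rt_green = np.array([812, 790, 845, 768, 801, 779, 823, 795, 810, 784, 799, 772, 808])
rt_urban = np.array([901, 932, 887, 915, 940, 898, 923, 909, 950, 894, 918, 927, 905])

# Independent-samples t-test (Welch's variant, no equal-variance assumption)
t, p = stats.ttest_ind(rt_green, rt_urban, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> significant group difference
```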

Keywords: benefits, greenspace, learning, restoration

Procedia PDF Downloads 102
79 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study

Authors: Mohammed H. Karim, Bestoon M. Faraj

Abstract:

Introduction: This study aimed to compare the effectiveness of a dynamic navigation system (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files when using trephine burs and the extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive cone-beam computed tomography (CBCT) analysis of root length and curvature. After standardized access opening, glide paths, and patency attainment with K files (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both forms of guidance to retrieve the fragments, and the success rate, canal aberration, treatment time and volumetric changes were measured. The statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was insignificant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments and caused minimal change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance regarding treatment time, procedural errors, and volume change.

Keywords: separated instruments retrieval, dynamic navigation system, 3D video microscope, trephine burs, extractor

Procedia PDF Downloads 42
78 Terahertz Glucose Sensors Based on Photonic Crystal Pillar Array

Authors: S. S. Sree Sanker, K. N. Madhusoodanan

Abstract:

Optical biosensors are a dominant alternative to traditional analytical methods because of their small size, simple design and high sensitivity. Photonic sensing is one of the recently advancing technologies for biosensors. It measures the change in refractive index induced by differences in molecular interactions due to changes in the concentration of the analyte. Glucose is an aldose monosaccharide, which is a metabolic energy source in many organisms. Terahertz waves occupy the region between infrared and microwaves in the electromagnetic spectrum. Terahertz waves are expected to be applied to various types of sensors for detecting harmful substances in blood, cancer cells in skin and microorganisms in vegetables. We have designed glucose sensors using silicon-based 1D and 2D photonic crystal pillar arrays in the terahertz frequency range. The 1D photonic crystal has rectangular pillars with height 100 µm, length 1600 µm and width 50 µm. The array period of the crystal is 500 µm. The 2D photonic crystal has a 5×5 cylindrical pillar array with an array period of 75 µm. The height and diameter of the pillars are 160 µm and 100 µm, respectively. The two samples considered in this work are blood and glucose solution, labelled as sample 1 and sample 2, respectively. The proposed sensor detects the concentration of glucose in the samples from 0 to 100 mg/dL. For this, the crystal was irradiated with 0.3 to 3 THz waves. By analyzing the obtained S-parameters, the refractive index of the crystal corresponding to a particular concentration of glucose was measured using the parameter retrieval method. The refractive indices of the two crystals decreased gradually with increasing glucose concentration in the sample. For the 1D photonic crystal, a gradual decrease in refractive index was observed at 1 THz; the 2D photonic crystal showed this behavior at 2 THz. The proposed sensor was simulated using CST Microwave Studio. This will enable us to develop a model which can be used to characterize a glucose sensor. The present study is expected to contribute to blood glucose monitoring.
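
The parameter-retrieval step mentioned here (recovering a refractive index from simulated S-parameters) is commonly done with the standard slab-retrieval relation cos(n k0 d) = (1 − S11² + S21²)/(2 S21). The sketch below is a minimal illustration of that relation, not the authors' CST workflow; the S-parameter values and branch handling are simplified placeholders:

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def retrieve_refractive_index(s11, s21, freq_hz, thickness_m):
    """Standard S-parameter slab retrieval:
    cos(n*k0*d) = (1 - S11^2 + S21^2) / (2*S21).
    Branch selection is ignored here, which is only safe for
    electrically thin samples."""
    k0 = 2.0 * np.pi * freq_hz / C
    arg = (1.0 - s11**2 + s21**2) / (2.0 * s21)
    n_complex = np.arccos(arg.astype(complex)) / (k0 * thickness_m)
    return n_complex.real

# Placeholder S-parameters at 1 THz for a 160-um-thick pillar layer
s11 = np.array([0.30 + 0.10j])
s21 = np.array([0.85 - 0.35j])
print(retrieve_refractive_index(s11, s21, 1.0e12, 160e-6))
```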

Keywords: CST Microwave Studio, glucose sensor, photonic crystal, terahertz waves

Procedia PDF Downloads 252
77 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creation, modification, and retrieval processes. Our research proved that there are at least two sources of this underachievement: (1) the lack of a definition of correctly edited, formatted documents. Consequently, end-users do not know whether their methods and results are correct or not; they are not even aware of their own ignorance. (2) The end-users’ problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical document, the text-based document. We have provided the definition of correctly edited text and, based on this definition, adapted the debugging method known from programming. According to the method, before real text editing takes place, a thorough debugging of already existing texts and a categorization of errors are carried out. With this method, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. The advantages of the method are that real text handling requires much fewer human and computer resources than clicking aimlessly in the GUI (graphical user interface), and data retrieval is much more effective than from error-prone documents.

Keywords: deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources

Procedia PDF Downloads 355
76 The Distribution, Productivity and Conservation of Camphor Tree, Dryobalanops Aromatica in West Coast of Sumatra, Indonesia

Authors: Aswandi Anas Husin, Cut Rizlani Kholibrina

Abstract:

Harvesting camphor resin has been carried out since the beginning of civilization on the west coast of Sumatra. Oil or crystals containing borneol are harvested from the camphor tree (Dryobalanops aromatica). This non-timber forest product is used in the manufacture of fragrances, antiseptics, anti-inflammatories and analgesics, and is effective for the treatment of blocked arteries. Based on exploration of the west coast of Sumatra, this endemic tree species was found still growing in small groups in scattered spots from the lowlands to the hills. Some populations are found at an altitude of 700 meters above sea level in Kadabuhan, Jongkong and Sultan Daulat in Subulussalam district, Singkohor and Lake Paris in Aceh Singkil district, and Sirandorung and Manduamas in the north of Barus, Central Tapanuli district. This multi-purpose tree species was also identified as being able to adapt to the Singkil peat swamp. The decline in the tree population has a direct impact on reduced productivity. Conventionally, the crystals are harvested by cutting and splitting the stem into wooden blocks. In this way, about 1.5-2.5 kg of crystals of various qualities are obtained. Camphor can also be retrieved by making a notch in a standing tree trunk and collecting the liquid resin (ombil) that exudes from the injured resin channel. Twigs and leaves also contain borneol, and their aromatic content opens opportunities for the supply of borneol through distillation. Vegetative propagation technology is needed to overcome the limited availability of seeds. This breeding strategy for a vulnerable species starts with gathering genetic material from various provenances, which is then used to support the provision of base populations, breeding populations, multiplication populations and production populations for the extensive development of camphor tree plantations.

Keywords: camphor, conservation, natural borneol, productivity, vulnerable species

Procedia PDF Downloads 89
75 A Fluid-Walled Microfluidic Device for Cell Migration Studies

Authors: Cyril Deroy, Agata Rumianek, David R. Greaves, Peter R. Cook, Edmond J. Walsh

Abstract:

Various microfluidic platforms offering experimental methods for the study of cell migration have been developed in the past couple of decades; however, their implementation in the laboratory has remained limited. Some reasons cited for the lack of uptake include the technical complexity of the devices, the high failure rate associated with gas bubbles, biocompatibility concerns with the use of polydimethylsiloxane (PDMS), and the equipment, time and expertise required for operation and manufacture. As sample handling remains challenging due to the closed format of microfluidic devices, open microfluidic systems have been developed offering versatility and simplicity of use. Rather than confining fluids by solid walls, samples can be accessed directly over the open platform by removing at least one of the solid boundaries, such as the cover. In this paper, a method for the fabrication of open fluid-walled microfluidic circuits for cell migration studies is introduced, in which only materials commonly used by the life-science community are required: tissue culture dishes and cell media. The simplicity of the method and the ability to retrieve cells of interest are two of its key features. Both passive- and active-flow devices can be created in this way. To demonstrate the versatility of the method, a cell migration assay is performed, which requires fabricating circuits, loading cells and incubating, creating chemical gradients, real-time imaging of cell migration and, finally, retrieval of cells. The open architecture has high fidelity as it eliminates air-bubble-related failures and enables the precise control of gradients. The ability to fabricate custom microfluidic designs in minutes should make this method suitable for use in a wide range of cell migration studies.

Keywords: chemotaxis, fluid walls, gradient generation, open microfluidics

Procedia PDF Downloads 113
74 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Prediction of Future Data

Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill

Abstract:

Health-care management systems are of great interest because they provide a straightforward and fast way of managing all aspects relating to a patient, not necessarily medical. Moreover, there are more and more cases of pathologies in which diagnosis and treatment can only be carried out using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for their storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets using algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted from the independent variables. Forecast control must be used to establish whether the accuracy of a forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences for a new product; such consumer preference models provide a platform on which product developers can decide on engineering characteristics in order to satisfy consumer preferences before developing the product. Recent analysis shows that these fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model over a linear regression model.
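
The closing proposal, comparing an exponential regression model against a linear one, can be illustrated with a simple goodness-of-fit comparison. This sketch on synthetic data is only an illustration of such a comparison, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 40)
y = 2.0 * np.exp(0.3 * x) + rng.normal(0, 1.0, x.size)  # synthetic data

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear model: y = a*x + b
a_lin, b_lin = np.polyfit(x, y, 1)
r2_lin = r_squared(y, a_lin * x + b_lin)

# Exponential model y = a*exp(b*x), fitted by linear regression on log(y)
b_exp, log_a = np.polyfit(x, np.log(y), 1)
r2_exp = r_squared(y, np.exp(log_a) * np.exp(b_exp * x))

print(f"R^2 linear = {r2_lin:.3f}, R^2 exponential = {r2_exp:.3f}")
```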

Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function

Procedia PDF Downloads 249
73 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses

Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh

Abstract:

Real-time non-invasive brain-computer interfaces have a significant progressive role in restoring or maintaining a quality life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities such as facial expressions, speech, text, gestures, and human physiological responses are also discussed. Focus is placed on exploring the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol is proposed for designing an EEG-based real-time brain-computer interface system for the analysis and classification of human emotions elicited by external audio/visual stimuli. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset unit, a personal computer and a set of external stimulators. Primary signal analysis and processing of real-time acquired EEG shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a MATLAB-based self-defined algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools such as artificial neural networks. The final system would result in an inexpensive, portable and more intuitive real-time brain-computer interface for controlling prosthetic devices by translating different brain states into operative control signals.
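
A minimal sketch of the pipeline's final stages, spectral band-power features followed by a neural-network classifier, is shown below. It approximates the MATLAB/EEGLab workflow named in the abstract in Python, and uses synthetic signals in place of Emotiv recordings; sampling rate, bands and labels are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 128  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Spectral features: mean power in canonical EEG bands (Welch PSD)."""
    f, psd = welch(epoch, fs=FS, nperseg=FS)
    return [psd[(f >= lo) & (f < hi)].mean() for lo, hi in BANDS.values()]

# Synthetic stand-in for labelled emotional epochs (n_epochs x n_samples)
rng = np.random.default_rng(1)
epochs = rng.normal(size=(200, FS * 4))
labels = rng.integers(0, 2, size=200)  # e.g. 0 = calm, 1 = excited

X = np.array([band_powers(e) for e in epochs])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```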

Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, Emotiv, emotions, interval features, spectral features, artificial neural network, control applications

Procedia PDF Downloads 292
72 Microfluidic Chambers with Fluid Walls for Cell Biology

Authors: Cristian Soitu, Alexander Feuerborn, Cyril Deroy, Alfonso Castrejon-Pita, Peter R. Cook, Edmond J. Walsh

Abstract:

Microfluidics now stands as an academically mature technology after a quarter of a century in which research activities have delivered a vast array of proofs of concept for many biological workflows. However, translation to industry remains poor, with only a handful of notable exceptions (e.g. digital PCR, DNA sequencing), mainly because of biocompatibility issues, the limited range of readouts supported, or the complex operation required. This technology exploits the domination of interfacial forces over gravitational ones at the microscale, replacing solid walls with fluid ones as building blocks for cell micro-environments. By employing only materials used by biologists for decades, the system is shown to be biocompatible, and easy to manufacture and operate. The method consists in displacing a continuous fluid layer into a pattern of isolated chambers overlaid with an immiscible liquid to prevent evaporation. The resulting fluid arrangements can be arrays of micro-chambers with rectangular footprints, which use the maximum surface area available, or structures with irregular patterns. Pliant, self-healing fluid walls confine volumes as small as 1 nl. Such fluidic structures can be reconfigured during the assays, giving the platform an unprecedented level of flexibility. Common workflows in cell biology are demonstrated, e.g. cell growth and retrieval, cloning, cryopreservation, fixation and immunolabeling, CRISPR-Cas9 gene editing, and proof-of-concept drug tests. This fluid-shaping technology is shown to have potential for high-throughput cell- and organism-based assays. The ability to make and reconfigure on-demand microfluidic circuits on standard Petri dishes should find many applications in biology and yield more relevant phenotypic and genotypic responses when compared with standard microfluidic assays.

Keywords: fluid walls, micro-chambers, reconfigurable, freestyle

Procedia PDF Downloads 163
71 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training

Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li

Abstract:

Remotely sensed imagery with both high spatial and high temporal resolution, which is hard to acquire from current land-observation satellites, has been considered a key factor for monitoring environmental changes at both global and local scales. On the basis of the limited high-spatial-resolution observations, a line of studies called spatiotemporal fusion has been developed for generating high-spatiotemporal-resolution images by employing auxiliary low-spatial-resolution data with high-frequency observations. However, a majority of spatiotemporal fusion approaches suffer from restrictive assumptions, empirical but unstable parameters, low accuracy or inefficient performance. Although spatiotemporal fusion based on sparse representation theory has advantages in capturing reflectance changes, stability and execution efficiency (even more so when overcomplete dictionaries have been pre-trained), the retrieval of a high-accuracy dictionary and its effect on fusion results are still open issues. In this paper, we employ additional image pairs (here each image pair includes a Landsat Operational Land Imager and a Moderate Resolution Imaging Spectroradiometer acquisition covering part of Baotou, China) only in the coupled dictionary training process based on the K-SVD (K-means Singular Value Decomposition) algorithm, and attempt to improve the fusion results of two existing sparse-representation-based fusion models (utilizing one and two available image pairs, respectively). The results show that more eligible image pairs are probably related to a more accurate overcomplete dictionary, which generally indicates a better image representation and then contributes to effective fusion performance, provided that the added image pair has seasonal aspects and spatial structure features similar to the original image pair. It is, therefore, reasonable to construct a multi-dictionary training pattern for generating a series of high-spatial-resolution images based on limited acquisitions.
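
The coupled dictionary training this work builds on can be sketched as stacking coarse/fine patch pairs and learning one joint dictionary over them. In the sketch below, scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD (which scikit-learn does not provide), and the patch data are synthetic placeholders:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Placeholder patch pairs: rows are vectorized 7x7 patches taken at the
# same locations in a MODIS-like (coarse) and a Landsat-like (fine) image.
# Adding more image pairs simply appends more rows, as the paper proposes.
coarse_patches = rng.normal(size=(5000, 49))
fine_patches = coarse_patches + 0.1 * rng.normal(size=(5000, 49))

# Coupled dictionary: learn atoms over the concatenated (coarse|fine)
# patch space so both resolutions share one sparse code.
joint = np.hstack([coarse_patches, fine_patches])
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   batch_size=256, random_state=0)
codes = dico.fit_transform(joint)          # sparse codes, one row per pair
d_coarse = dico.components_[:, :49]        # coarse-resolution sub-dictionary
d_fine = dico.components_[:, 49:]          # fine-resolution sub-dictionary
print(codes.shape, d_coarse.shape, d_fine.shape)
```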

Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning

Procedia PDF Downloads 224
70 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 (anechoic), 1, 2, and 3 s, approximately), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
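
One hedged reading of the proposed LNQT filters is a bank of triangular filters with centres spaced a quarter tone (2^(1/24)) apart, each band divided by a local average over neighbouring bands. The sketch below illustrates that reading only; f_min, the number of bands, and the normalization neighbourhood are placeholder choices, not the authors' design:

```python
import numpy as np

def quarter_tone_centers(f_min=65.4, n_bands=120):
    """Centre frequencies spaced one quarter tone (2**(1/24)) apart."""
    return f_min * 2.0 ** (np.arange(n_bands) / 24.0)

def lnqt_filterbank(power_spec, freqs, centers, neighborhood=5):
    """Triangular filters at quarter-tone centres, then local normalization:
    each band is divided by the mean energy of its surrounding bands."""
    bands = np.zeros(len(centers))
    for i, fc in enumerate(centers):
        lo = centers[i - 1] if i > 0 else fc / 2 ** (1 / 24)
        hi = centers[i + 1] if i < len(centers) - 1 else fc * 2 ** (1 / 24)
        rise = np.clip((freqs - lo) / (fc - lo), 0, 1)
        fall = np.clip((hi - freqs) / (hi - fc), 0, 1)
        bands[i] = np.sum(power_spec * np.minimum(rise, fall))
    local = np.convolve(bands, np.ones(neighborhood) / neighborhood, "same")
    return bands / (local + 1e-10)  # locally-normalized band energies

freqs = np.linspace(0, 8000, 4097)            # bins of an 8 kHz spectrum
spec = np.random.default_rng(0).random(4097)  # placeholder power spectrum
print(lnqt_filterbank(spec, freqs, quarter_tone_centers())[:8])
```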

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 193
69 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (natural language processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguities such as lexical, syntactic, semantic, anaphoric and referential ambiguities. This study is focused mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and part-of-speech (POS) tokens of words for training and testing. The lemma adds generality, and the POS adds the properties of the word to the token. We have designed a novel method to create an affinity matrix to calculate the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of a target word using the affinity/similarity values are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense gets the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and lucidity of explication in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging explication. CSM is trained on SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
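
The affinity matrix at the heart of CSM can be read as co-occurrence statistics between lemma_POS tokens, with a sense chosen by summing the affinities of the context. The sketch below is one hedged interpretation of that construction on a toy corpus; the normalization and the sense-exemplar lists are illustrative assumptions, not the paper's exact algorithm:

```python
from collections import defaultdict
from itertools import combinations

# Toy "training sentences" as lemma_POS tokens (lemma and POS joined by _)
sentences = [
    ["bank_NOUN", "river_NOUN", "water_NOUN"],
    ["bank_NOUN", "money_NOUN", "deposit_VERB"],
    ["river_NOUN", "water_NOUN", "flow_VERB"],
    ["money_NOUN", "deposit_VERB", "account_NOUN"],
]

# Affinity between token pairs: co-occurrence count within a sentence,
# normalized by token frequencies (one plausible choice).
cooc = defaultdict(int)
freq = defaultdict(int)
for sent in sentences:
    for tok in sent:
        freq[tok] += 1
    for a, b in combinations(sorted(set(sent)), 2):
        cooc[(a, b)] += 1

def affinity(a, b):
    key = tuple(sorted((a, b)))
    return cooc.get(key, 0) / (freq[a] * freq[b]) ** 0.5

# Predict the "sense" of an ambiguous target by summing affinities of the
# context tokens against hypothetical sense-cluster exemplars.
context = ["water_NOUN", "flow_VERB"]
senses = {"bank.river": ["river_NOUN"], "bank.money": ["money_NOUN"]}
scores = {s: sum(affinity(c, e) for c in context for e in ex)
          for s, ex in senses.items()}
print(max(scores, key=scores.get), scores)
```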

Keywords: word sense disambiguation (WSD), contextual sense model (CSM), most frequent sense (MFS), part of speech (POS), natural language processing (NLP), OOV (out of vocabulary), lemma_POS (a token where lemma and POS of word are joined by underscore), information retrieval (IR), machine translation (MT)

Procedia PDF Downloads 59
68 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification helps avoid the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that use the MSRC-12 dataset, and achieves a near-perfect classification rate on our dataset.
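
The two-direction idea, one discrete HMM per class on the original sequences and one on their reverses, can be sketched with the hmmlearn package. This is a hedged illustration, not the authors' implementation; note that recent hmmlearn releases expose discrete HMMs as CategoricalHMM (older ones as MultinomialHMM), and all sizes below are placeholders:

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def fit_class_models(sequences, n_states=4):
    """Fit one forward and one backward discrete HMM for a motion class."""
    def fit(seqs):
        x = np.concatenate(seqs).reshape(-1, 1)
        lengths = [len(s) for s in seqs]
        m = hmm.CategoricalHMM(n_components=n_states, n_iter=50,
                               random_state=0)
        m.fit(x, lengths)
        return m
    return fit(sequences), fit([s[::-1] for s in sequences])

def score(models, seq):
    """Combined forward+backward log-likelihood for one observation seq."""
    fwd, bwd = models
    s = np.asarray(seq).reshape(-1, 1)
    return fwd.score(s) + bwd.score(s[::-1])

# Toy discrete observation sequences (e.g. quantized LMA descriptors)
rng = np.random.default_rng(0)
train = [rng.integers(0, 8, size=30) for _ in range(20)]
models = fit_class_models(train)
print(score(models, rng.integers(0, 8, size=30)))
```

At test time, one would score a sequence against every class's model pair and pick the class with the highest combined log-likelihood.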

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 173
67 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases

Authors: Hsin Lee, Hsuan Lee

Abstract:

Legal artificial intelligence (LegalAI) has seen increasing application within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures. Most existing language models have difficulty understanding the long-distance dependencies between different structures. Another unique challenge is that, while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, there remains a significant obstacle in the lack of labeled datasets. This deficiency makes it difficult to train models with strong generalization capabilities, as well as to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments in Taiwanese legal documents. The model is trained on a self-constructed dataset, which this research has independently labeled to measure judgment similarities, thereby addressing a void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both the dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which utilizes the notion of similar case scenarios within the same type of legal case. Our experimental results illustrate the model's significant advancement in handling similarity comparisons within extensive legal judgments. By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.
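
A hedged sketch of how a Longformer encoder can be used for judgment-similarity retrieval: encode two documents and compare mean-pooled embeddings. The public allenai/longformer-base-4096 checkpoint is a placeholder here, since the model trained on Taiwanese judgments is not public:

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

name = "allenai/longformer-base-4096"  # placeholder checkpoint
tok = LongformerTokenizer.from_pretrained(name)
model = LongformerModel.from_pretrained(name)

def embed(text):
    """Mean-pooled Longformer embedding of a (long) legal document."""
    enc = tok(text, return_tensors="pt", truncation=True, max_length=4096)
    with torch.no_grad():
        out = model(**enc).last_hidden_state  # (1, seq_len, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)

a = embed("Judgment text of case A ...")
b = embed("Judgment text of case B ...")
print(torch.cosine_similarity(a, b).item())  # similarity score for retrieval
```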

Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases

Procedia PDF Downloads 41
66 Hybrid Method for Smart Suggestions in Conversations for Online Marketplaces

Authors: Yasamin Rahimi, Ali Kamandi, Abbas Hoseini, Hesam Haddad

Abstract:

Online/offline chat is a convenient feature in electronic markets for second-hand products, where potential customers would like to have more information about the products to fill the information gap between buyers and sellers. Online peer-to-peer markets are trying to create artificial-intelligence-based systems that help customers ask more informative questions in an easier way. In this article, we introduce a method for the question/answer system that we have developed for the top-ranked electronic market in Iran, called Divar. When it comes to second-hand products, incomplete product information in a purchase will result in a loss to the buyer. One way to balance buyer and seller information about a product is to help the buyer ask more informative questions when purchasing. Keeping the time needed to start and conclude a conversation short was also one of our main goals, and A/B test results show that it was achieved. In this paper, we propose and evaluate a method for suggesting questions and answers in the messaging platform of the e-commerce website Divar. The aim of such systems is to help users gather knowledge about a product more easily and quickly, all from the Divar database. We collected a dataset of around 2 million messages in colloquial Persian, and for each category of product we gathered 500K messages, of which only 2K were tagged, so semi-supervised methods were used. In order to deploy the proposed model to production, it must be fast enough to process 10 million messages daily on CPU processors. To reach that speed, in many subtasks faster and simpler models were preferred over deep neural models. The proposed method, which requires only a small amount of labeled data, is currently used in Divar production on CPU processors; 15% of buyer and seller messages in conversations are chosen directly from our model output, and more than 27% of buyers have used the model's suggestions in at least one daily conversation.

Keywords: smart reply, spell checker, information retrieval, intent detection, question answering

Procedia PDF Downloads 153
65 3-D Strain Imaging of Nanostructures Synthesized via CVD

Authors: Sohini Manna, Jong Woo Kim, Oleg Shpyrko, Eric E. Fullerton

Abstract:

CVD techniques have emerged as a promising approach for the formation of a broad range of nanostructured materials. The realization of many practical applications will require efficient and economical synthesis techniques that preferably avoid the need for templates or costly single-crystal substrates and also afford process adaptability. Towards this end, we have developed a single-step route for the reduction-type synthesis of nanostructured Ni materials using a thermal CVD method. By tuning the CVD growth parameters, we can synthesize morphologically dissimilar nanostructures, including single-crystal cubes and Au nanostructures, which form atop untreated amorphous SiO2||Si substrates. An understanding of the new properties that emerge in these nanostructured materials and their relationship to function will inform a broad range of magnetostrictive devices as well as catalysis, fuel cell, sensor, and battery applications based on high-surface-area transition-metal nanostructures. We use the coherent X-ray diffractive imaging (CXDI) technique to obtain 3-D images and strain maps of individual nanocrystals. CXDI provides the overall shape of a nanostructure and the lattice distortion based on the combination of highly brilliant coherent X-ray sources and a phase retrieval algorithm. We observe a fine interplay between the reduction of surface energy and internal stress, which plays an important role in the morphology of nanocrystals. The strain distribution is influenced by the metal-substrate and metal-air interfaces, where strains arise due to differences in thermal expansion. We find that the lattice strain at the surface of the octahedral gold nanocrystal agrees quantitatively with the predictions of the Young-Laplace equation but exhibits a discrepancy near the nanocrystal-substrate interface resulting from the interface. The strain at the bottom of the Ni nanocube, which contacts the substrate surface, is compressive. This is caused by the dissimilar thermal expansion coefficients of the Ni nanocube and the Si substrate. Research at UCSD was supported by NSF DMR Award #1411335.
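
CXDI reconstruction relies on iterative phase retrieval. Below is a minimal error-reduction loop in the Fienup style on synthetic data, a bare-bones stand-in for the algorithms actually used to image the nanocrystals (the object, support and iteration count are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "nanocrystal": a small square object on a 64x64 grid
obj = np.zeros((64, 64))
obj[28:36, 28:36] = 1.0
measured_modulus = np.abs(np.fft.fft2(obj))  # diffraction amplitudes
support = obj > 0                            # assume a known support

# Error-reduction iterations: enforce the measured Fourier modulus,
# then enforce the real-space support and positivity constraints.
g = rng.random((64, 64))
for _ in range(200):
    G = np.fft.fft2(g)
    G = measured_modulus * np.exp(1j * np.angle(G))  # modulus projection
    g = np.real(np.fft.ifft2(G))
    g[~support] = 0.0                                # support projection
    g[g < 0] = 0.0                                   # positivity

print("reconstruction error:", np.linalg.norm(g - obj) / np.linalg.norm(obj))
```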

Keywords: CVD, nanostructures, strain, CXDI

Procedia PDF Downloads 360
64 IoT Based Soil Moisture Monitoring System for Indoor Plants

Authors: Gul Rahim Rahimi

Abstract:

The IoT-based soil moisture monitoring system for indoor plants is designed to address the challenges of maintaining optimal soil moisture levels for plant growth and health. The system utilizes sensor technology to collect real-time data on soil moisture levels, which is then processed and analyzed using machine learning algorithms. This allows accurate and timely monitoring of soil moisture levels, ensuring plants receive the appropriate amount of water to thrive. The main objectives of the system are twofold: to keep plants fresh and healthy by preventing water deficiency, and to provide users with comprehensive insights into the water content of the soil on a daily and hourly basis. By monitoring soil moisture levels, users can identify patterns and trends in water consumption, allowing more informed decision-making regarding watering schedules and plant care. The scope of the system extends to the agriculture industry, where it can be utilized to minimize the effort required by farmers to monitor soil moisture levels manually. By automating soil moisture monitoring, farmers can optimize water usage, improve crop yields, and reduce the risk of plant diseases associated with over- or under-watering. Key technologies employed in the system include the Capacitive Soil Moisture Sensor V1.2 for accurate soil moisture measurement, the NodeMCU ESP8266-12E board for data transmission and communication, and the Arduino framework for programming and development. Additionally, machine learning algorithms are utilized to analyze the collected data and provide actionable insights. Cloud storage is used to store and manage the data collected from multiple sensors, allowing easy access and retrieval of information. Overall, the IoT-based soil moisture monitoring system offers a scalable and efficient solution for indoor plant care, with potential applications in agriculture and beyond. By harnessing the power of IoT and machine learning, the system empowers users to make informed decisions about plant watering, leading to healthier and more vibrant indoor environments.
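
On an ESP8266 board like the one named above, a sensor-reading loop might look like the MicroPython sketch below (the abstract specifies the Arduino framework, so this is an alternative rendering, not the authors' firmware). The dry/wet calibration constants are hypothetical and must be measured per sensor:

```python
# MicroPython sketch for an ESP8266 reading a capacitive soil probe on A0.
from machine import ADC
import time

adc = ADC(0)            # ESP8266 exposes one 10-bit ADC (0-1023) on A0
RAW_DRY = 850           # hypothetical raw reading in dry soil (calibrate!)
RAW_WET = 420           # hypothetical raw reading in saturated soil

def moisture_percent(raw):
    """Map the raw count linearly onto 0-100 % moisture."""
    pct = (RAW_DRY - raw) * 100 / (RAW_DRY - RAW_WET)
    return max(0, min(100, pct))

while True:
    raw = adc.read()
    print("raw:", raw, "moisture: %.1f %%" % moisture_percent(raw))
    time.sleep(60)      # one sample per minute; upload to cloud as needed
```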

Keywords: IoT-based, soil moisture monitoring, indoor plants, water management

Procedia PDF Downloads 14
63 Effects of Intracerebroventricular Injection of Ghrelin and Aerobic Exercise on Passive Avoidance Memory and Anxiety in Adult Male Wistar Rats

Authors: Mohaya Farzin, Parvin Babaei, Mohammad Rostampour

Abstract:

Ghrelin plays a considerable role in important neurological effects related to food intake and energy homeostasis. Regular physical activity has also been found to produce significant improvements in cognitive function in various behavioral situations. Anxiety is one of the main concerns of the modern world, affecting millions of individuals’ health. There are contradictory results regarding ghrelin's effects on anxiety-like behavior, and the plasma level of this peptide is increased during physical activity. Here we aimed to evaluate the combined effects of exogenous ghrelin and aerobic exercise on anxiety-like behavior and passive avoidance memory in Wistar rats. Forty-five male Wistar rats (250 ± 20 g) were divided into 9 groups (n=5) and received intra-hippocampal injections of 3.0 nmol ghrelin and performed aerobic exercise training for 8 weeks. Control groups received the same volume of saline and diazepam as negative and positive control groups, respectively. Learning and memory were estimated using a shuttle box apparatus, and anxiety-like behavior was recorded by an elevated plus-maze (EPM) test. Data were analyzed by ANOVA, and p<0.05 was considered significant. Our findings showed that the combination of ghrelin and aerobic exercise improves the acquisition, consolidation, and retrieval of passive avoidance memory in Wistar rats. The ghrelin-receiving group spent less time in the open arms and made fewer open-arm entries compared with the control group (p<0.05), whereas exercising Wistar rats spent more time in the open-arm zone than the control group (p<0.05). Exercise combined with ghrelin administration reduced anxiety (p<0.05). The results of this study demonstrate that aerobic exercise contributes to an increase in the endogenous production of ghrelin, and physical activity alleviates the anxiety-related behaviors induced by intra-hippocampal injection of ghrelin. In general, exercise and ghrelin can reduce anxiety and improve memory.

Keywords: anxiety, ghrelin, aerobic exercise, learning, passive avoidance memory

Procedia PDF Downloads 92
62 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate using computational fluid dynamics (CFD) simulations that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on the use of time-varying skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating, and that a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology and telemetry data. This method only uses as inputs the temperature and pressure readings of sensors externally attached to the tank. These sensors can operate during the nominal thermal duty cycle. The improved PVT method shows little sensitivity to the pressure sensor drifts which are critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in gas and liquid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. This gauging method improves by a factor of 8 the accuracy of the standard PVT retrievals using look-up tables with tabulated data from the National Institute of Standards and Technology.
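
The PVT idea, inferring stored mass from externally measured pressure and temperature through a real-gas equation of state, can be sketched as follows. Here the CoolProp library stands in for the NIST look-up tables the abstract mentions, and the tank volume and telemetry values are placeholders:

```python
from CoolProp.CoolProp import PropsSI  # pip install CoolProp

def pvt_mass(pressure_pa, temperature_k, tank_volume_m3, fluid="Xenon"):
    """Real-gas PVT gauging: m = rho(P, T) * V, with the density taken
    from a reference equation of state instead of the ideal-gas law."""
    rho = PropsSI("D", "P", pressure_pa, "T", temperature_k, fluid)
    return rho * tank_volume_m3

# Placeholder telemetry: 100 bar, 40 degC, 50-litre tank
m = pvt_mass(100e5, 313.15, 0.050)
print(f"estimated propellant mass: {m:.2f} kg")
```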

Keywords: electric propulsion, mass gauging, propellant, PVT, xenon

Procedia PDF Downloads 318
61 Enhancing Cultural Heritage Data Retrieval by Mapping COURAGE to CIDOC Conceptual Reference Model

Authors: Ghazal Faraj, Andras Micsik

Abstract:

The CIDOC Conceptual Reference Model (CRM) is an extensible ontology that provides integrated access to heterogeneous and digital datasets. The CIDOC-CRM offers a “semantic glue” intended to promote accessibility to several diverse and dispersed sources of cultural heritage data. That is achieved by providing a formal structure for the implicit and explicit concepts and their relationships in the cultural heritage field. The COURAGE (“Cultural Opposition – Understanding the CultuRal HeritAGE of Dissent in the Former Socialist Countries”) project aimed to explore methods of socialist-era cultural resistance during 1950-1990 and was planned to serve as a basis for further narratives and digital humanities (DH) research. The project highlights the diversity of the alternative cultural scenes that flourished in Eastern Europe before 1989. Moreover, the COURAGE dataset is an online RDF-based registry that consists of historical people, organizations, collections, and featured items. To increase the inter-links between different datasets and retrieve more relevant data from various data silos, a shared federated ontology for reconciled data is needed. As a first step towards these goals, a full understanding of the CIDOC CRM ontology (the target ontology), as well as of the COURAGE dataset, was required to start the work. Subsequently, the queries toward the ontology were determined, and a table of equivalent properties from COURAGE and CIDOC CRM was created. Structural diagrams that clarify the mapping process and construct queries are in progress for mapping person, organization, and collection entities to the ontology. Through mapping the COURAGE dataset to the CIDOC-CRM ontology, the dataset will have a common ontological foundation with several other datasets. The expected results are therefore: 1) retrieving more detailed data about existing entities, 2) retrieving new entities’ data, 3) aligning the COURAGE dataset to a standard vocabulary, and 4) running distributed SPARQL queries over several CIDOC-CRM datasets and testing the potential of distributed query answering using SPARQL. The next plan is to map CIDOC-CRM to other upper-level ontologies or large datasets (e.g., DBpedia, Wikidata) and address similar questions on a wide variety of knowledge bases.
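
A mapping table like the one described can be serialized as OWL equivalences with rdflib. In the sketch below the CIDOC-CRM namespace is the published one, but the COURAGE namespace URI and the specific class/property pairings are invented for illustration and are not the project's actual mapping:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
COURAGE = Namespace("http://example.org/courage#")  # illustrative URI

g = Graph()
g.bind("crm", CRM)
g.bind("courage", COURAGE)

# Example class equivalences (E78 is "Curated Holding" in newer CRM
# releases; older serializations call it E78_Collection).
g.add((COURAGE.Person, OWL.equivalentClass, CRM.E21_Person))
g.add((COURAGE.Organization, OWL.equivalentClass, CRM.E74_Group))
g.add((COURAGE.Collection, OWL.equivalentClass, CRM.E78_Collection))

print(g.serialize(format="turtle"))
```

With such equivalences loaded alongside both datasets, a single SPARQL query phrased in CRM terms can, under an OWL-aware reasoner, also match COURAGE entities.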

Keywords: CIDOC CRM, cultural heritage data, COURAGE dataset, ontology alignment

Procedia PDF Downloads 114
70 Method for Improving ICESat-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESat-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS), where satellite observations intersect with water. The ICESat-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where there are many tributaries and meanders, often adjacent to each other. One challenge is to split the photon segments along a beam and accurately partition them so as to extract only the true representative water height for individual elements. As far as we can establish, there is no existing automated procedure to make this distinction. Earlier studies have relied on human intervention or river masks. Both approaches are unsatisfactory where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called “auto-segmentation”. The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181-0.189 m) and MAE (0.130-0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESat-2 ATL13 altimetry to rivers than previously reported studies. The findings of our study will therefore make a significant contribution towards the retrieval of hydraulic parameters, such as water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
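
The segmentation step can be pictured as splitting the along-track photon series wherever a large along-track gap occurs, then taking a robust mean per segment. The sketch below, on synthetic photon data, is an illustrative reading of that idea, not the authors' exact procedure; the 100 m gap threshold and 3-sigma clipping are placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ATL13-like photons: along-track distance (m) and height (m),
# with two water bodies separated by a dry gap.
x = np.concatenate([np.sort(rng.uniform(0, 400, 300)),
                    np.sort(rng.uniform(900, 1300, 300))])
h = np.concatenate([rng.normal(10.0, 0.15, 300),
                    rng.normal(12.5, 0.15, 300)])

# Auto-segmentation: start a new virtual station where the along-track
# spacing exceeds the gap threshold.
breaks = np.where(np.diff(x) > 100.0)[0] + 1
for i, seg in enumerate(np.split(h, breaks)):
    # Simple 3-sigma clipping as one of several outlier-removal options
    keep = np.abs(seg - np.median(seg)) < 3 * seg.std()
    print(f"VS {i}: mean WSE = {seg[keep].mean():.3f} m, n = {keep.sum()}")
```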

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 26
59 Assessing the Effect of Urban Growth on Land Surface Temperature: A Case Study of Conakry Guinea

Authors: Arafan Traore, Teiji Watanabe

Abstract:

Conakry, the capital city of the Republic of Guinea, has experienced rapid urban expansion and population increase in the last two decades, which have resulted in remarkable local weather and climate change, raised energy demand and pollution, and threatened social, economic and environmental development. In this study, the spatiotemporal variation of the land surface temperature (LST) is retrieved to characterize the effect of urban growth on the thermal environment and to quantify its relationship with two biophysical indices, the normalized difference vegetation index (NDVI) and the normalized difference built-up index (NDBI). Landsat TM and OLI/TIRS data acquired in 1986, 2000 and 2016, respectively, were used for LST retrieval and land use/cover change analysis. A quantitative analysis based on the integration of remote sensing and a geographic information system (GIS) revealed an important increase in the average LST, from 25.21°C in 1986 to 27.06°C in 2000 and 29.34°C in 2016, an average gain in surface temperature of 4.13°C over the 30-year study period. Additionally, an analysis of the Pearson correlation (r) between LST and the biophysical indices revealed a negative relationship between LST and NDVI and a strong positive relationship between LST and NDBI. This implies that an increase in the NDVI value can reduce LST intensity, while conversely an increase in the NDBI value may strengthen LST intensity in the study area. Although Landsat data were found to be efficient in assessing the thermal environment in Conakry, the method needs to be refined with in situ measurements of LST in future studies. The results of this study may assist urban planners, scientists and policy makers concerned about climate variability in making decisions that will enhance sustainable environmental practices in Conakry.
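
The indices and the correlation analysis follow standard definitions: NDVI = (NIR − Red)/(NIR + Red) and NDBI = (SWIR − NIR)/(SWIR + NIR). The sketch below applies them to placeholder band arrays and computes Pearson's r against LST; it illustrates the computation only, not the study's actual Landsat processing chain:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
red, nir, swir = (rng.random((100, 100)) for _ in range(3))
lst = rng.normal(29.0, 2.0, (100, 100))  # placeholder LST grid (deg C)

ndvi = (nir - red) / (nir + red + 1e-10)    # vegetation index
ndbi = (swir - nir) / (swir + nir + 1e-10)  # built-up index

r_veg, _ = pearsonr(ndvi.ravel(), lst.ravel())
r_built, _ = pearsonr(ndbi.ravel(), lst.ravel())
print(f"LST~NDVI r = {r_veg:.2f}, LST~NDBI r = {r_built:.2f}")
```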

Keywords: Conakry, land surface temperature, urban heat island, geographic information system, remote sensing, land use/cover change

Procedia PDF Downloads 213
58 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study

Authors: Mohamed H. Khalil

Abstract:

Water is one of the most crucial elements influencing human lives and development. Notably, over the last few years GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced ‘GIS-Web Based System’. This system is designed to assist and optimize the integration of data among the Call Center, Operation and Maintenance, and Laboratory departments. The core of this system is a unified ‘data model’ for all the spatial and tabular data of the corresponding departments. The system provides advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing capabilities, enhanced data retrieval, integrated workflow, different access levels, and correlative information record/tracking. Notably, this cost-effective system contributes significantly not only to the completeness of the base map (93%) and the water network (87%) in highly detailed GIS format and to enhancing the performance of customer service, but also to reducing operating costs and day-to-day operations (by ~5-10%). In addition, the proposed system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and Laboratory), which allows a better understanding and analysis of complex situations. Furthermore, this system has had a tangible effect on: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.), (ii) improved effectiveness of the different water departments, (iii) efficient deep advanced analysis, (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annually), (v) tangible planning synthesizing spatial and tabular data, and finally (vi) a scalable decision support system. It is worth highlighting that the proposed future plan (second phase) of this system encompasses scalability that will extend to include integration with the Billing and SCADA departments. This scalability will comprise advanced functionalities in association with the existing ones to allow further sustainable contributions.

Keywords: GIS Web-Based, base-map, water network, decision support system

Procedia PDF Downloads 57
57 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System

Authors: Abisha Isaac Mohanlal

Abstract:

Amidst all the suspicion and cynicism raised by the legal fraternity, artificial intelligence has found its way into the legal system and has revolutionized the conventional forms of legal service delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all parts of modern-day legal services. Its impact has been largely felt by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon pass. But as far as the judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring natural intelligence over artificial intelligence. Since judicial decision-making involves many unstructured and rather unprecedented situations which have no single correct answer, and since looming questions of legal interpretation arise in most cases, discretion and emotional intelligence play an unavoidable role. Added to that, there are several ethical, moral and policy issues to be confronted before permitting the intrusion of artificial intelligence into the judicial system. As of today, the human judge is the unrivalled master of most of the judicial systems around the globe. Yet scientists of artificial intelligence claim that robot judges can replace human judges, irrespective of how daunting the complexity of the issues or how sophisticated the cognitive competence required. They go on to contend that even if the system is too rigid to allow robot judges to substitute for human judges in the near future, artificial intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that artificial intelligence has to overcome in order to successfully invade the human-dominated judicial sphere, and critically evaluating the potential differences it would make in the system of justice delivery, the author tries to argue that the penetration of artificial intelligence into the judiciary could surely be enhancive and reparative, if not fully transformative.

Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery

Procedia PDF Downloads 195
56 Traditional and Commercially Prepared Medicine: Factors That Affect Preferences among Elderly Adults in Indigenous Community

Authors: Rhaetian Bern D. Azaula

Abstract:

The Philippines' indigenous population, estimated at 10%-20% of the total, is protected by the Indigenous Peoples Rights Act (IPRA), passed in 1997. However, due to their isolation and limited access to basic services such as health education or health assistance, the law's implementation remains a challenge. While traditional medicine continues to play a significant role in society in the prevention and treatment of some illnesses, and the use of plants in both traditional and modern ways remains customary and widespread, commercially prepared drugs are progressively advancing as time goes by. Therefore, the purpose of this quantitative study is to investigate the indigenous community at Barangay Magsikap, General Nakar, Quezon, to analyze the factors that affect the respondents' preferences and their reasons for patronizing traditional or commercially prepared medicines, and to propose updated health education strategies and instructional materials. Slovin's formula was utilized to determine a representative sample from the total population, followed by stratified sampling for the proportional allocation of respondents. The study selected respondents (1) from the indigenous community in Barangay Magsikap, General Nakar, Quezon, (2) aged 60 and above, and (3) willing to participate. The researcher utilized a checklist-based questionnaire with a Tagalog version, and a Likert scale was utilized to assess the respondents' choices on selected items. The researcher obtained informed consent from the indigenous community's regional and local offices, the chieftain of the tribe, and the respondents, ensuring confidentiality in the collection and retrieval of data. The study revealed that respondents aged 60-69, males with no formal education, are unemployed and have no income source. They prefer traditional medicines due to their affordability, availability, and cultural practices, but lack knowledge of safe preparation, dosages, and contraindications of the medicines used. Commercially prepared medications are acknowledged, but respondents are not fully aware of proper administration instructions and dosage labels. Recommendations include disseminating approved herbal medicines and ensuring proper preparation, indications, and contraindications.
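
Slovin's formula gives the sample size as n = N / (1 + N e²), where N is the population and e the margin of error. The arithmetic is shown below with an illustrative population figure, since the community's actual population is not stated in the abstract:

```python
def slovin(population, margin_of_error=0.05):
    """Slovin's formula: n = N / (1 + N * e**2)."""
    return round(population / (1 + population * margin_of_error ** 2))

# Illustrative only: the abstract does not report the actual population
print(slovin(150))  # -> 109 respondents at a 5% margin of error
```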

Keywords: traditional medicine, commercially prepared medicine, indigenous community, elderly adult

Procedia PDF Downloads 39
55 Quick Response Codes in Physio: A Simple Click to Long-Term Oxygen Therapy Education

Authors: K. W. Lee, C. M. Choi, H. C. Tsang, W. K. Fong, Y. K. Cheng, L. Y. Chan, C. K. Yuen, P. W. Lau, Y. L. To, K. C. Chow

Abstract:

A QR (Quick Response) code is a matrix barcode. It enables users to open websites, photos, and other information on mobile devices by simply scanning the code. In the usual Long-Term Oxygen Therapy (LTOT) arrangement, piles of LTOT-related printed information, such as leaflets from different oxygen service providers, are given to patients so that they can choose a plan appropriate to their needs. If these printed materials were transformed into electronic format (QR codes), the process would be more environmentally friendly. More importantly, electronic materials covering LTOT equipment operation and dyspnoea-relieving techniques also empower patients in long-term disease management. The objective of this study was to investigate the effect of QR codes in the education of new LTOT users. The study was carried out in the medical wards of North District Hospital. Adult patients and relatives who could follow commands, were able to use smartphones with internet service, and required LTOT arrangement on hospital discharge were recruited. During LTOT arrangement, apart from the usual LTOT education booklets, which included patients' personal information (e.g., oxygen titration and six-minute walk test results), extra leaflets were given consisting of (1) QR codes of LTOT plans from different oxygen service providers, (2) education materials on dyspnoea management, and (3) instructions on LTOT equipment operation. Upon completion of the LTOT arrangement, a questionnaire about the use of QR codes in patient education was completed by patients or relatives. A total of 10 new LTOT users were recruited from November 2017 to January 2018. Initially, 70% of them did not know anything about QR codes, but all of them understood their operation after a simple demonstration. 70% agreed that the codes were convenient to use (20% strongly agree, 40% agree, 10% somewhat agree). 80% agreed that QR codes could facilitate the retrieval of more LTOT-related information (10% strongly agree, 70% agree), while 90% agreed that QR code leaflets should continue to be delivered to new LTOT users in the future (30% strongly agree, 40% agree, 20% somewhat agree). The results suggest that the QR code is a convenient and environmentally friendly tool for delivering information, that it is relatively easy to introduce to new users, and that it receives welcoming feedback from current users.
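As context, producing such a leaflet QR code is a few lines of work with common tooling. The sketch below uses the open-source Python qrcode package; both the package choice and the URL are assumptions for illustration, as the abstract does not say how the authors generated their codes.

```python
# pip install "qrcode[pil]"
import qrcode

# Hypothetical URL standing in for one provider's LTOT plan leaflet.
url = "https://example.org/ltot/provider-a-plan"

qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_M,  # ~15% damage tolerance
    box_size=10,  # pixels per module
    border=4,     # quiet-zone width in modules
)
qr.add_data(url)
qr.make(fit=True)  # choose the smallest QR version that fits the data
qr.make_image(fill_color="black", back_color="white").save("ltot_plan.png")
```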

Keywords: long-term oxygen therapy, physiotherapy, patient education, QR code

Procedia PDF Downloads 113
54 A Postcolonial Feminist Exploration of the Zulu Girl Child's Position and Gender Dynamics in a Religio-Cultural Context

Authors: G. T. Ntuli

Abstract:

This paper critically examines the gender dynamics surrounding a Zulu girl child in her religio-cultural context from a postcolonial feminist perspective. As one of the formerly colonized ethnic groups in the South African context, the Zulu tribe used to have particular and contextual religio-cultural ways of bringing up a girl, including traditional and cultural norms that no member of the community could infringe without serious repercussions from the community. However, the postcolonial social position of the girl child within this community became ambiguous and unpredictable as colonial changes set in motion gender dynamics that propelled tribal communities into deeper patriarchal structures. An empirical study conducted within the Zulu context, which investigated the retrieval of ubuntombi (virginity) as a Zulu cultural heritage, identity, and sex education as a path to adulthood, found that the Zulu girl child's social position is geared towards double oppression owing to the gender dynamics she experiences in her lifetime. It is these gender dynamics that this paper examines from a postcolonial feminist perspective. They are at play from the birth of a girl child, through the developmental stages to puberty, and into marriage rituals. These rituals and religio-cultural practices are meant to shape and mold a 'good woman' in the Zulu cultural context, but social gender inequality that elevates males over females propels women's social status into life-denying peripheral positions. Consequently, in place of the 'good woman' of the communal view, an oppressed and dehumanized woman becomes the outcome of such gender dynamics, often treated with contempt, despised, and violated in many demeaning ways. These dynamics not only leave women economically and socio-politically impoverished but also expose them to violence of all kinds, such as domestic, emotional, sexual, and gender-based violence, which is increasingly becoming a scourge in some sub-Saharan African countries, including South Africa. It is for this reason that this paper is significant, not only within the Zulu context where the research was conducted but also in all countries that practice and promote patriarchal tendencies in the name of religio-cultural practices. A different outlook is needed on what it means to be a 'good woman' in the cultural context, because if the goodness of a woman is determined by life-denying cultural practices, such practices need to be deconstructed and discarded.

Keywords: feminist, gender dynamics, postcolonial, religio-cultural, Zulu girl child

Procedia PDF Downloads 124
53 From Shallow Semantic Representation to a Deeper One: A Verb Decomposition Approach

Authors: Aliaksandr Huminski

Abstract:

Semantic Role Labeling (SRL), as a shallow semantic parsing approach, involves recognizing and labeling the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). SRL can thus answer key questions such as 'Who', 'When', 'What', and 'Where' in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. SRL for (1) and (2) will be significantly different. For the verb put in (1) the structure is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as, say, in the case of the verb drink, where the participant's feature of being liquid is shared with the verb. This capture amounts to a total fusion of meaning and cannot be decomposed in a direct way (in contrast with compound verbs like babysit or breastfeed). From this perspective, SRL really is too shallow to represent semantic structure. If the key point of semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it assumes by default that two different but semantically identical sentences must have the same semantic structure; otherwise, we will draw different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested. Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; and Vpr is a primitive verb, i.e., a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as: Vcp = Vpr + P; Vcf = Vpr + F. If all Vcp and Vcf are represented this way, then primitive verbs Vpr can serve as a canonical form for SRL. As a result, there will be no hidden participants caught by a verb, since all participants will be explicitly unfolded. An obvious example of a Vpr is the verb go, which represents pure movement; the verb drink can then be represented as a human-caused movement of liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation that is unique for semantically identical sentences. This leads to the unification of semantic representation and resolves the critical flaw of SRL described above.
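A minimal sketch of this canonicalization step is given below. The lexicon entries and role names are illustrative assumptions, not the author's implementation; they simply show how a frame over a capturing verb (Vcp) can be rewritten as a frame over its primitive (Vpr) with the captured participant explicitly unfolded.

```python
# Lexicon mapping "capturing" verbs to a primitive verb plus the participant (P)
# or feature (F) fused into their meaning; entries are illustrative only.
DECOMPOSITION = {
    # Vcp = Vpr + P: 'butter' fuses the Patient participant into the verb.
    "butter": {"primitive": "put", "captured_participant": {"Patient": "butter"}},
    # Vcf = Vpr + F: 'drink' fuses the feature 'liquid' of its Patient.
    "drink": {"primitive": "move", "captured_feature": {"Patient": "liquid"}},
}

def canonicalize(verb: str, roles: dict) -> tuple[str, dict]:
    """Rewrite an SRL frame over a capturing verb into a frame over its primitive."""
    entry = DECOMPOSITION.get(verb)
    if entry is None:
        return verb, roles  # already primitive (or unknown): leave unchanged
    canonical_roles = dict(roles)
    canonical_roles.update(entry.get("captured_participant", {}))
    return entry["primitive"], canonical_roles

# "John buttered the bread." -> SRL frame [Agent + Goal] over 'butter'
frame = {"Agent": "John", "Goal": "bread"}
print(canonicalize("butter", frame))
# ('put', {'Agent': 'John', 'Goal': 'bread', 'Patient': 'butter'})
# i.e., the same [Agent + Patient + Goal] structure as "John put butter on the bread."
```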

Keywords: decomposition, labeling, primitive verbs, semantic roles

Procedia PDF Downloads 336
52 Bionaut™: A Breakthrough Robotic Microdevice to Treat Non-Communicating Hydrocephalus in Both Adult and Pediatric Patients

Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher

Abstract:

Bionaut Labs, LLC is developing a minimally invasive robotic microdevice designed to treat non-communicating hydrocephalus in both adult and pediatric patients. The device utilizes biocompatible microsurgical particles (Bionaut™) specifically designed to safely and reliably perform accurate fenestration(s) in the 3rd ventricle, aqueduct of Sylvius, and/or trapped intraventricular cysts of the brain in order to re-establish normal cerebrospinal fluid flow dynamics and thereby balance and/or normalize intra/intercompartmental pressure. The Bionaut™ is navigated to the target via CSF or brain tissue in a minimally invasive fashion with precise control using real-time imaging. Upon reaching the pre-defined anatomical target, an external driver directs the specific microsurgical action required to achieve the surgical goal. Notable features of the proposed protocol are: (i) Bionaut™ access to the intraventricular target follows a clinically validated endoscopy trajectory that may not be feasible via 'traditional' rigid endoscopy; (ii) the treatment is microsurgical, and no foreign materials are left behind post-procedure; (iii) the Bionaut™ is an untethered device navigated through the subarachnoid and intraventricular compartments of the brain along pre-designated non-linear trajectories determined by the safest anatomical and physiological path; (iv) the overall protocol involves minimally invasive delivery and post-operational retrieval of the surgical Bionaut™. The approach is expected to be suitable for treating pediatric patients 0-12 months old as well as adult patients with obstructive hydrocephalus who fail traditional shunts or are eligible for endoscopy. Current progress, including platform optimization, Bionaut™ control, real-time imaging, and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and brain of ovine models, will be discussed.

Keywords: Bionaut™, cerebrospinal fluid, CSF, fenestration, hydrocephalus, micro-robot, microsurgery

Procedia PDF Downloads 134
51 Storage Assignment Strategies to Reduce Manual Picking Errors with an Emphasis on an Ageing Workforce

Authors: Heiko Diefenbach, Christoph H. Glock

Abstract:

Order picking, i.e., the order-based retrieval of items in a warehouse, is an important time- and cost-intensive process in many logistic systems. Despite the ongoing trend towards automation, most order picking systems are still manual picker-to-parts systems, in which human pickers walk through the warehouse to collect ordered items. Human work in warehouses is not free from errors, and order pickers may at times pick the wrong item or the incorrect number of items. Errors can cause additional costs and require significant correction efforts. Moreover, age may increase a person's likelihood of making mistakes, so the negative impact of picking errors might grow with the aging workforce currently witnessed in many regions globally. A significant amount of research has focused on making order picking systems more efficient. Among other factors, storage assignment, i.e., the assignment of items to storage locations (e.g., shelves) within the warehouse, has been subject to optimization, usually with the objective of assigning items to storage locations such that order picking times are minimized. Surprisingly, there is a lack of research concerned with picking errors and respective prevention approaches. This paper hypothesizes that the storage assignment of items can affect the probability of picking errors. For example, storing similar-looking items apart from one another might reduce confusion, and storing items that are hard to count, or that require a lot of counting, at easy-to-access and easy-to-comprehend shelf heights might reduce the probability of picking the wrong number of items. Based on this hypothesis, the paper discusses how to incorporate error-prevention measures into mathematical models for storage assignment optimization (a stylized example is sketched below). Various approaches, with their respective benefits and shortcomings, are presented and mathematically modeled. To investigate the newly developed models further, they are compared to conventional storage assignment strategies in a computational study. The study specifically investigates how the importance of error prevention increases as pickers become more prone to errors, for example, due to age. The results suggest that considering error-prevention measures in storage assignment can reduce error probabilities with only minor decreases in picking efficiency. The results might be especially relevant for an aging workforce.
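The sketch below shows, under assumed data and weights, how an error-prevention term can be folded into a storage assignment objective alongside travel effort. It is not the authors' model; the brute-force enumeration merely stands in for a proper optimization model on this tiny instance.

```python
from itertools import permutations

# Items: pick frequency and an error-proneness score (e.g., hard to count).
items = {"A": {"freq": 30, "error_prone": 0.8},
         "B": {"freq": 50, "error_prone": 0.2},
         "C": {"freq": 20, "error_prone": 0.6}}

# Slots: travel time from the depot and an ergonomic error-risk factor
# (awkward shelf heights make miscounts more likely).
slots = {"S1": {"travel": 1.0, "risk": 0.1},
         "S2": {"travel": 2.0, "risk": 0.5},
         "S3": {"travel": 3.0, "risk": 0.3}}

ALPHA = 1.0   # weight on picking effort
BETA = 5.0    # weight on expected errors; raise it for more error-prone pickers

def cost(assignment: dict) -> float:
    """Weighted sum of travel effort and expected picking errors."""
    total = 0.0
    for item, slot in assignment.items():
        i, s = items[item], slots[slot]
        total += ALPHA * i["freq"] * s["travel"]                   # effort term
        total += BETA * i["freq"] * i["error_prone"] * s["risk"]   # error term
    return total

# Tiny instance: enumerate all one-to-one item-to-slot assignments.
best = min((dict(zip(items, perm)) for perm in permutations(slots)), key=cost)
print(best, cost(best))
```

Increasing BETA models an older, more error-prone workforce: as it grows, the optimal assignment shifts error-prone items toward low-risk slots even at the expense of longer travel, which mirrors the trade-off studied in the paper.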

Keywords: aging workforce, error prevention, order picking, storage assignment

Procedia PDF Downloads 171