Search results for: adhoc retrieval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 338

68 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 s (anechoic), 1 s, 2 s, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will dramatically decrease under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios and with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC performed as well as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the degradation of the music signal and improving the accuracy of the chord recognition system.
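
To make the proposed front end concrete, below is a minimal numpy sketch of a quarter-tone (24 bins per octave) triangular filterbank with each band locally normalized by its neighborhood average, in the spirit of LNCC-style features. The minimum frequency, filter span, and smoothing window are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def quarter_tone_filterbank(sr, n_fft, f_min=65.4, n_octaves=6):
    """Triangular filters centered on a 24-bins-per-octave frequency grid."""
    centers = f_min * 2.0 ** (np.arange(24 * n_octaves) / 24.0)
    freqs = np.linspace(0.0, sr / 2.0, n_fft // 2 + 1)
    fb = np.zeros((len(centers), len(freqs)))
    for i, fc in enumerate(centers):
        lo, hi = fc * 2 ** (-1 / 24), fc * 2 ** (1 / 24)  # adjacent quarter tones
        rise = (freqs - lo) / (fc - lo)
        fall = (hi - freqs) / (hi - fc)
        fb[i] = np.clip(np.minimum(rise, fall), 0.0, None)
    return fb

def lnqt_spectrogram(power_spec, fb, win=5, eps=1e-8):
    """Quarter-tone spectrogram with each band divided by a local average
    over neighboring bands (the local-normalization step)."""
    qt = fb @ power_spec  # shape: (bands, frames)
    kernel = np.ones(win) / win
    local_mean = np.apply_along_axis(
        lambda band_col: np.convolve(band_col, kernel, mode="same"), 0, qt)
    return qt / (local_mean + eps)
```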

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 201
67 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguities, such as lexical, syntactic, semantic, anaphoric, and referential ambiguities. This study is focused mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words for training and testing. Lemma adds generality, and POS adds word-class properties to the token. We have designed a novel method to create an affinity matrix to calculate the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of the target word using the affinity/similarity value are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense gets the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM is notably simple and easy to interpret, in contrast to contemporary deep learning models, which are intricate, time-intensive, and hard to explain. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
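
A minimal sketch of the core idea follows: count co-occurrence affinities between lemma_POS tokens, then let context tokens vote for the candidate sense whose cluster they are most affine to. The data structures and the plain co-occurrence weighting are illustrative assumptions, not the paper's precise algorithm.

```python
from collections import defaultdict

def build_affinity(sentences):
    """sentences: lists of lemma_POS tokens, e.g. ['bank_NOUN', 'sit_VERB'].
    Affinity here is a plain within-sentence co-occurrence count."""
    affinity = defaultdict(float)
    for sent in sentences:
        for i, tok in enumerate(sent):
            for other in sent[:i] + sent[i + 1:]:
                affinity[(tok, other)] += 1.0
    return affinity

def predict_sense(context, sense_clusters, affinity):
    """sense_clusters: {sense_id: set of lemma_POS tokens in that cluster}.
    Each context token contributes its affinity to every cluster; the
    sense with the highest total value wins."""
    scores = {
        sense: sum(affinity.get((c, t), 0.0) for c in context for t in cluster)
        for sense, cluster in sense_clusters.items()
    }
    return max(scores, key=scores.get)
```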

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 72
66 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassifications that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
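
The two-direction idea can be sketched compactly: train one DHMM per class on the observation sequences and a second on their reversals, then classify by the combined log-likelihood. The sketch below implements forward-algorithm scoring in numpy; summing the two directions' scores is one plausible combination rule and an assumption, as the abstract does not spell out the exact rule.

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space for stability."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = log_B[:, o] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """models: {label: (fwd_params, bwd_params)} where each params tuple is
    (log_pi, log_A, log_B); the backward model was trained on reversed
    sequences. The class with the highest combined score wins."""
    scores = {
        label: log_forward(obs, *fwd) + log_forward(obs[::-1], *bwd)
        for label, (fwd, bwd) in models.items()
    }
    return max(scores, key=scores.get)
```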

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 178
65 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases

Authors: Hsin Lee, Hsuan Lee

Abstract:

Legal artificial intelligence (LegalAI) has seen increasing application within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures. Most existing language models have difficulty understanding the long-distance dependencies between different structures. Another unique challenge is that while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, the lack of labeled datasets remains a significant obstacle. This deficiency makes it difficult to train models with strong generalization capabilities, as well as to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments in Taiwanese legal documents. This model is trained on a self-constructed dataset, which this research has independently labeled to measure judgment similarities, thereby addressing the void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both the dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which utilizes the notion of similar case scenarios within the same type of legal cases. Our experimental results illustrate the model's significant advancements in handling similarity comparisons within extensive legal judgments. By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.
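
The training loop described (pairwise similarity supervision with early stopping and gradient clipping) can be sketched as follows with the Hugging Face transformers library. The pooling choice, loss, learning rate, and the train_pairs/evaluate helpers are illustrative assumptions; the paper's actual architecture and labeling scheme may differ.

```python
import torch
import torch.nn.functional as F
from transformers import LongformerModel, LongformerTokenizerFast

tok = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
encoder = LongformerModel.from_pretrained("allenai/longformer-base-4096")
opt = torch.optim.AdamW(encoder.parameters(), lr=2e-5)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True,
                max_length=4096, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state[:, 0]  # first-token pooling
    return F.normalize(hidden, dim=-1)

best_val, patience = float("inf"), 3
for epoch in range(20):
    for doc_a, doc_b, sim_label in train_pairs:        # assumed (text, text, float)
        sim = (embed([doc_a]) * embed([doc_b])).sum(-1)
        loss = F.mse_loss(sim, torch.tensor([sim_label]))
        opt.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(encoder.parameters(), 1.0)  # gradient clipping
        opt.step()
    val_loss = evaluate(encoder)                       # assumed validation routine
    if val_loss < best_val:
        best_val, patience = val_loss, 3
    else:
        patience -= 1
        if patience == 0:                              # early stopping
            break
```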

Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases

Procedia PDF Downloads 47
64 Hybrid Method for Smart Suggestions in Conversations for Online Marketplaces

Authors: Yasamin Rahimi, Ali Kamandi, Abbas Hoseini, Hesam Haddad

Abstract:

Online/offline chat is a convenient approach in the electronic markets for second-hand products, where potential customers would like more information about a product to fill the information gap between buyers and sellers. Online peer-to-peer markets are trying to create artificial-intelligence-based systems that help customers ask more informative questions more easily. In this article, we introduce a method for the question/answer system that we have developed for the top-ranked electronic market in Iran, called Divar. When it comes to second-hand products, incomplete product information in a purchase will result in a loss to the buyer. One way to balance buyer and seller information about a product is to help the buyer ask more informative questions when purchasing. Shortening the time needed to start a conversation and reach its desired result was also one of our main goals, and A/B test results show it was achieved. In this paper, we propose and evaluate a method for suggesting questions and answers in the messaging platform of the e-commerce website Divar. Such systems help users gather knowledge about a product more easily and quickly, all from the Divar database. We collected a dataset of around 2 million messages in colloquial Persian, and for each product category, we gathered 500K messages, of which only 2K were tagged, so semi-supervised methods were used. To deploy the proposed model to production, it must be fast enough to process 10 million messages daily on CPU processors. To reach that speed, in many subtasks, faster and simpler models are preferred over deep neural models. The proposed method, which requires only a small amount of labeled data, is currently used in Divar production on CPU processors; 15% of buyers' and sellers' messages in conversations are chosen directly from our model's output, and more than 27% of buyers have used the model's suggestions in at least one daily conversation.
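
One CPU-friendly building block consistent with this design is retrieval over a bank of curated question templates. The sketch below matches an incoming message against templates with character n-gram TF-IDF, which is robust to colloquial spelling; the English templates and parameters are illustrative assumptions, and the intent-detection and spell-checking stages of the real pipeline are not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

templates = [
    "Is the price negotiable?",
    "Is the item still available?",
    "What is the condition of the product?",
]
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
template_matrix = vec.fit_transform(templates)

def suggest(message, k=2):
    """Return the k templates most similar to the incoming message."""
    sims = cosine_similarity(vec.transform([message]), template_matrix)[0]
    return [templates[i] for i in sims.argsort()[::-1][:k]]

print(suggest("hi, is this still for sale?"))
```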

Keywords: smart reply, spell checker, information retrieval, intent detection, question answering

Procedia PDF Downloads 158
63 3-D Strain Imaging of Nanostructures Synthesized via CVD

Authors: Sohini Manna, Jong Woo Kim, Oleg Shpyrko, Eric E. Fullerton

Abstract:

Chemical vapor deposition (CVD) techniques have emerged as a promising approach for forming a broad range of nanostructured materials. The realization of many practical applications will require efficient and economical synthesis techniques that preferably avoid the need for templates or costly single-crystal substrates and also afford process adaptability. Towards this end, we have developed a single-step route for the reduction-type synthesis of nanostructured Ni materials using a thermal CVD method. By tuning the CVD growth parameters, we can synthesize morphologically dissimilar nanostructures, including single-crystal cubes and Au nanostructures, which form atop untreated amorphous SiO2/Si substrates. An understanding of the new properties that emerge in these nanostructured materials and their relationship to function will enable a broad range of magnetostrictive devices as well as catalysis, fuel cell, sensor, and battery applications based on high-surface-area transition-metal nanostructures. We use coherent x-ray diffractive imaging (CXDI) to obtain 3-D images and strain maps of individual nanocrystals; CXDI provides the overall shape of a nanostructure and the lattice distortion based on the combination of highly brilliant coherent x-ray sources and a phase retrieval algorithm. We observe a fine interplay between the reduction of surface energy and internal stress, which plays an important role in the morphology of nanocrystals. The strain distribution is influenced by the metal-substrate interface and the metal-air interface, which differ due to differences in thermal expansion. We find that the lattice strain at the surface of the octahedral gold nanocrystal agrees quantitatively with the predictions of the Young-Laplace equation but exhibits a discrepancy near the nanocrystal-substrate interface. The strain on the bottom side of the Ni nanocube, which is in contact with the substrate surface, is compressive. This is caused by the dissimilar thermal expansion coefficients of the Ni nanocube and the Si substrate. Research at UCSD supported by NSF DMR Award #1411335.
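
For reference, the Young-Laplace comparison invoked above takes a standard form for nanocrystals: the surface stress f of a facet with effective radius of curvature R induces an internal (Laplace) pressure, which for an isotropic solid of bulk modulus B produces a volumetric lattice strain. A common statement of this relation, given here under the assumption of isotropic elasticity, is

\[ \Delta p = \frac{2f}{R}, \qquad \varepsilon = -\frac{\Delta p}{3B} = -\frac{2f}{3BR}, \]

so smaller crystals (smaller R) show larger near-surface lattice contraction, which is the behavior compared against the measured CXDI strain maps.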

Keywords: CVD, nanostructures, strain, CXDI

Procedia PDF Downloads 369
62 IoT Based Soil Moisture Monitoring System for Indoor Plants

Authors: Gul Rahim Rahimi

Abstract:

The IoT-based soil moisture monitoring system for indoor plants is designed to address the challenges of maintaining optimal soil moisture levels for plant growth and health. The system utilizes sensor technology to collect real-time data on soil moisture levels, which is then processed and analyzed using machine learning algorithms. This allows for accurate and timely monitoring of soil moisture levels, ensuring plants receive the appropriate amount of water to thrive. The main objectives of the system are twofold: to keep plants fresh and healthy by preventing water deficiency and to provide users with comprehensive insights into the water content of the soil on a daily and hourly basis. By monitoring soil moisture levels, users can identify patterns and trends in water consumption, allowing for more informed decision-making regarding watering schedules and plant care. The scope of the system extends to the agriculture industry, where it can be utilized to minimize the effort required by farmers to monitor soil moisture levels manually. By automating soil moisture monitoring, farmers can optimize water usage, improve crop yields, and reduce the risk of plant diseases associated with over- or under-watering. Key technologies employed in the system include the Capacitive Soil Moisture Sensor V1.2 for accurate soil moisture measurement, the NodeMCU ESP8266-12E board for data transmission and communication, and the Arduino framework for programming and development. Additionally, machine learning algorithms are utilized to analyze the collected data and provide actionable insights, and cloud storage is utilized to store and manage the data collected from multiple sensors, allowing for easy access and retrieval of information. Overall, the IoT-based soil moisture monitoring system offers a scalable and efficient solution for indoor plant care, with potential applications in agriculture and beyond. By harnessing the power of IoT and machine learning, the system empowers users to make informed decisions about plant watering, leading to healthier and more vibrant indoor environments.
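
The abstract names the Arduino framework; to keep one code language throughout this listing, the sketch below shows the MicroPython equivalent of the sensing loop on the same hardware, reading the capacitive sensor on the ESP8266's single analog pin and posting hourly readings. The wiring, calibration constants, and server URL are assumptions to be adapted; the machine-learning analysis happens server-side and is not shown.

```python
import time
import urequests
from machine import ADC

adc = ADC(0)                    # ESP8266 exposes one ADC, on pin A0
RAW_DRY, RAW_WET = 850, 400     # raw readings in dry air / in water (calibrate!)

def moisture_percent(raw):
    pct = (RAW_DRY - raw) * 100 / (RAW_DRY - RAW_WET)
    return max(0, min(100, pct))

while True:
    raw = adc.read()
    urequests.post("http://example.local/readings",
                   json={"raw": raw, "moisture": moisture_percent(raw)})
    time.sleep(3600)            # hourly, matching the hourly-insight goal
```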

Keywords: IoT-based, soil moisture monitoring, indoor plants, water management

Procedia PDF Downloads 23
61 Effects of Intracerebroventricular Injection of Ghrelin and Aerobic Exercise on Passive Avoidance Memory and Anxiety in Adult Male Wistar Rats

Authors: Mohaya Farzin, Parvin Babaei, Mohammad Rostampour

Abstract:

Ghrelin plays a considerable role in important neurological effects related to food intake and energy homeostasis. Regular physical activity has been found to produce significant improvements in cognitive function in various behavioral situations. Anxiety is one of the main concerns of the modern world, affecting millions of individuals' health. There are contradictory results regarding ghrelin's effects on anxiety-like behavior, and the plasma level of this peptide increases during physical activity. Here we aimed to evaluate the combined effects of exogenous ghrelin and aerobic exercise on anxiety-like behavior and passive avoidance memory in Wistar rats. Forty-five male Wistar rats (250 ± 20 g) were divided into 9 groups (n=5) and received intra-hippocampal injections of 3.0 nmol ghrelin and performed aerobic exercise training for 8 weeks. Control groups received the same volume of saline and diazepam as negative and positive controls, respectively. Learning and memory were estimated using a shuttle box apparatus, and anxiety-like behavior was recorded by the elevated plus-maze test (EPM). Data were analyzed by ANOVA, and p<0.05 was considered significant. Our findings showed that the combined effect of ghrelin and aerobic exercise improves the acquisition, consolidation, and retrieval of passive avoidance memory in Wistar rats. Furthermore, the ghrelin-receiving group spent less time in the open arms and made fewer open-arm entries compared with the control group (p<0.05), whereas exercising Wistar rats spent more time in the open-arm zone in comparison with the control group (p<0.05). The exercise + ghrelin group showed reduced anxiety (p<0.05). The results of this study demonstrate that aerobic exercise contributes to an increase in the endogenous production of ghrelin, and physical activity alleviates anxiety-related behaviors induced by intra-hippocampal injection of ghrelin. In general, exercise and ghrelin can reduce anxiety and improve memory.

Keywords: anxiety, ghrelin, aerobic exercise, learning, passive avoidance memory

Procedia PDF Downloads 98
60 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate using computational fluid dynamics (CFD) simulations that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on time-changing skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating, and that a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology, and telemetry data. This method uses as inputs only the temperature and pressure readings of sensors externally attached to the tank; these sensors can operate during the nominal thermal duty cycle. The Improved PVT method shows little sensitivity to pressure sensor drifts, which are critical towards the end of life of a mission, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in the gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. This gauging method improves the accuracy of the standard PVT retrievals by a factor of 8, using look-up tables with tabulated data from the National Institute of Standards and Technology.
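
The core of any PVT retrieval is the real-gas equation of state solved for mass; below is a minimal sketch. The compressibility factor Z(p, T) is assumed to come from tabulated data (e.g., NIST), and the paper's specific corrections for thermal anisotropies are not reproduced here.

```python
R = 8.314462618          # universal gas constant, J/(mol K)
M_XENON = 0.131293       # molar mass of xenon, kg/mol

def pvt_mass(p_pa, t_k, volume_m3, z):
    """m = p V M / (Z R T): the real-gas law solved for propellant mass."""
    return p_pa * volume_m3 * M_XENON / (z * R * t_k)

# Illustrative numbers only: a 100 L tank at 80 bar and 313 K with an
# assumed Z of 0.65 holds roughly 62 kg of xenon.
print(pvt_mass(80e5, 313.0, 0.1, 0.65))
```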

Keywords: electric propulsion, mass gauging, propellant, PVT, xenon

Procedia PDF Downloads 322
59 Enhancing Cultural Heritage Data Retrieval by Mapping COURAGE to CIDOC Conceptual Reference Model

Authors: Ghazal Faraj, Andras Micsik

Abstract:

The CIDOC Conceptual Reference Model (CRM) is an extensible ontology that provides integrated access to heterogeneous digital datasets. The CIDOC-CRM offers a "semantic glue" intended to promote accessibility to several diverse and dispersed sources of cultural heritage data. That is achieved by providing a formal structure for the implicit and explicit concepts and their relationships in the cultural heritage field. The COURAGE ("Cultural Opposition – Understanding the CultuRal HeritAGE of Dissent in the Former Socialist Countries") project aimed to explore methods of socialist-era cultural resistance during 1950-1990 and was planned to serve as a basis for further narratives and digital humanities (DH) research. The project highlights the diversity of the alternative cultural scenes that flourished in Eastern Europe before 1989. The COURAGE dataset is an online RDF-based registry that consists of historical people, organizations, collections, and featured items. To increase the inter-links between different datasets and retrieve more relevant data from various data silos, a shared federated ontology for reconciled data is needed. As a first step towards these goals, a full understanding of the CIDOC CRM ontology (the target ontology), as well as the COURAGE dataset, was required. Subsequently, the queries toward the ontology were determined, and a table of equivalent properties from COURAGE and CIDOC CRM was created. Structural diagrams that clarify the mapping process and construct queries are in progress, mapping person, organization, and collection entities to the ontology. Through mapping the COURAGE dataset to the CIDOC-CRM ontology, the dataset will share a common ontological foundation with several other datasets. The expected results are therefore: 1) retrieving more detailed data about existing entities, 2) retrieving new entities' data, 3) aligning the COURAGE dataset to a standard vocabulary, and 4) running distributed SPARQL queries over several CIDOC-CRM datasets and testing the potential of distributed query answering using SPARQL. The next plan is to map CIDOC-CRM to other upper-level ontologies or large datasets (e.g., DBpedia, Wikidata) and address similar questions on a wide variety of knowledge bases.
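
Once COURAGE persons are expressed as CIDOC CRM E21_Person instances, queries of the kind listed above become straightforward. A minimal rdflib sketch follows; the file name and the use of rdfs:label for names are illustrative assumptions rather than the project's actual property mapping.

```python
from rdflib import Graph

g = Graph()
g.parse("courage_mapped.ttl", format="turtle")   # assumed mapped export

query = """
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?person ?name WHERE {
    ?person a crm:E21_Person ;
            rdfs:label ?name .
}
"""
for row in g.query(query):
    print(row.person, row.name)
```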

Keywords: CIDOC CRM, cultural heritage data, COURAGE dataset, ontology alignment

Procedia PDF Downloads 120
58 Method for Improving ICESat-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESat-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS), where satellite observations intersect with water. The ICESat-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders are often adjacent to each other. One challenge is to split photon segments along a beam so as to accurately partition them and extract only the true representative water height for individual river elements. As far as we can establish, there is no automated procedure for making this distinction; earlier studies have relied on human intervention or river masks, and both approaches are unsatisfactory where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called "auto-segmentation". The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESat-2 ATL13 altimetry to rivers than previously reported studies. The findings of our study will therefore make a significant contribution towards the retrieval of hydraulic parameters, such as water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
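
The essence of auto-segmentation can be sketched in a few lines: sort the ATL13 heights along track, split wherever consecutive points are separated by a gap larger than a threshold, and take an outlier-trimmed mean per resulting segment. The gap threshold and the sigma-clipping rule below are illustrative assumptions, not the exact criteria of the paper.

```python
import numpy as np

def auto_segments(along_track_m, heights_m, gap_m=200.0):
    """Split photon-segment heights into per-water-body runs wherever the
    along-track spacing exceeds gap_m; each run becomes one virtual station."""
    order = np.argsort(along_track_m)
    x = np.asarray(along_track_m)[order]
    h = np.asarray(heights_m)[order]
    breaks = np.where(np.diff(x) > gap_m)[0] + 1
    return np.split(h, breaks)

def mean_wse(segment, n_sigma=3.0):
    """Mean water surface elevation after one simple outlier filter."""
    mu, sd = segment.mean(), segment.std()
    keep = np.abs(segment - mu) < n_sigma * sd
    return segment[keep].mean()
```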

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 35
57 Assessing the Effect of Urban Growth on Land Surface Temperature: A Case Study of Conakry, Guinea

Authors: Arafan Traore, Teiji Watanabe

Abstract:

Conakry, the capital city of the Republic of Guinea, has experienced rapid urban expansion and population increase in the last two decades, which has resulted in remarkable local weather and climate change, raised energy demand and pollution, and threatened social, economic, and environmental development. In this study, the spatiotemporal variation of the land surface temperature (LST) is retrieved to characterize the effect of urban growth on the thermal environment and to quantify its relationship with two biophysical indices, the normalized difference vegetation index (NDVI) and the normalized difference built-up index (NDBI). Landsat TM and OLI/TIRS data acquired in 1986, 2000, and 2016, respectively, were used for LST retrieval and land use/cover change analysis. A quantitative analysis based on the integration of remote sensing and a geographic information system (GIS) revealed an increase in mean LST from 25.21 °C in 1986 to 27.06 °C in 2000 and 29.34 °C in 2016, an average gain in surface temperature of 4.13 °C over the 30-year study period. Additionally, an analysis using the Pearson correlation (r) between LST and the biophysical indices revealed a negative relationship between LST and NDVI and a strong positive relationship between LST and NDBI. This implies that an increase in the NDVI value can reduce LST intensity, whereas an increase in the NDBI value may strengthen LST intensity in the study area. Although Landsat data were found effective for assessing the thermal environment in Conakry, the method needs to be refined with in situ measurements of LST in future studies. The results of this study may assist urban planners, scientists, and policy makers concerned about climate variability in making decisions that will enhance sustainable environmental practices in Conakry.
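
The two indices and the correlation analysis are simple to state in code. The sketch below assumes co-registered, atmospherically corrected Landsat 8 reflectance bands (red = B4, NIR = B5, SWIR1 = B6) and an LST raster already retrieved from the thermal band; band numbering differs for Landsat TM.

```python
import numpy as np
from scipy.stats import pearsonr

def ndvi(nir, red, eps=1e-9):
    return (nir - red) / (nir + red + eps)

def ndbi(swir1, nir, eps=1e-9):
    return (swir1 - nir) / (swir1 + nir + eps)

def correlate(lst, index):
    """Pearson r (and p-value) between LST and a biophysical index
    over valid pixels only."""
    mask = np.isfinite(lst) & np.isfinite(index)
    return pearsonr(lst[mask], index[mask])
```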

Keywords: Conakry, land surface temperature, urban heat island, geographic information system, remote sensing, land use/cover change

Procedia PDF Downloads 217
56 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study

Authors: Mohamed H. Khalil

Abstract:

Water is one of the most crucial elements influencing human lives and development. Notably, over the last few years, GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced 'GIS-Web Based System'. This system is designed to tangibly assist and optimize the complementarity and integration of data between the Call Center, Operation and Maintenance, and laboratory departments. The core of this system is a unified 'data model' for all the spatial and tabular data of the corresponding departments. The system provides advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing capabilities, enhanced data retrieval, integrated workflow, different access levels, and correlative information record/tracking. This cost-effective system contributes significantly not only to the completeness of the base map (93%) and the water network (87%) in a highly detailed GIS format and to enhanced customer service performance, but also to reduced operating costs for day-to-day operations (~5-10%). In addition, the proposed system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and laboratory), which allows a better understanding and analysis of complex situations. Furthermore, the system is tangibly reflected in: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.), (ii) improved effectiveness of the different water departments, (iii) efficient deep advanced analysis, (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annually), (v) tangible planning synthesizing spatial and tabular data, and finally (vi) a scalable decision support system. It is worth highlighting that the proposed future plan (second phase) will extend the system to integrate with the Billing and SCADA departments; this scalability will add advanced functionalities alongside the existing ones to allow further sustainable contributions.

Keywords: GIS Web-Based, base-map, water network, decision support system

Procedia PDF Downloads 62
55 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System

Authors: Abisha Isaac Mohanlal

Abstract:

Amidst all the suspicion and cynicism raised by the legal fraternity, artificial intelligence has found its way into the legal system and has revolutionized the conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all areas of modern-day legal services. Its impact has been felt largely by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon fade. But as far as the judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring natural intelligence over artificial intelligence. Since judicial decision-making involves many unstructured and rather unprecedented situations which have no single correct answer, and looming questions of legal interpretation arise in most cases, discretion and emotional intelligence play an unavoidable role. Added to that, there are several ethical, moral, and policy issues to be confronted before permitting the intrusion of artificial intelligence into the judicial system. As of today, the human judge is the unrivalled master of most of the judicial systems around the globe. Yet, scientists of artificial intelligence claim that robot judges can replace human judges irrespective of how daunting the complexity of issues is and how sophisticated the cognitive competence required is. They go on to contend that even if the system is too rigid to allow robot judges to substitute for human judges in the near future, artificial intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that artificial intelligence has to overcome in order to successfully enter the human-dominated judicial sphere, and by critically evaluating the potential differences it would make in the system of justice delivery, the author argues that the penetration of artificial intelligence into the judiciary could surely be enhancive and reparative, if not fully transformative.

Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery

Procedia PDF Downloads 199
54 Traditional and Commercially Prepared Medicine: Factors That Affect Preferences among Elderly Adults in Indigenous Community

Authors: Rhaetian Bern D. Azaula

Abstract:

The Philippines' indigenous population, estimated at 10%-20% of the total, is protected by the Indigenous Peoples Rights Act (IPRA), passed in 1997. However, due to their isolation and limited access to basic services such as health education and health assistance, the law's implementation remains a challenge. As traditional medicine continues to play a significant role in the prevention and treatment of some illnesses, the use of plants in both traditional and modern ways remains customary and widespread; meanwhile, commercially prepared drugs grow progressively more advanced as time goes by. The purpose of this quantitative study is therefore to investigate the indigenous community at Barangay Magsikap, General Nakar, Quezon, to analyze the factors that affect respondents' preferences for, and reasons for patronizing, traditional and commercially prepared medicines, and to propose updated health education strategies and instructional materials. Slovin's formula was utilized to determine a representative sample size from the total population (see the worked form below), followed by stratified sampling for the proportional allocation of respondents. The study selected respondents (1) from the indigenous community in Barangay Magsikap, General Nakar, Quezon, (2) aged 60 and above, and (3) willing to participate. The researcher utilized a checklist-based questionnaire with a Tagalog version, and a Likert scale was used to assess the respondents' choices on selected items. The researcher obtained informed consent from the indigenous community's regional and local office, the chieftain of the tribe, and the respondents, ensuring confidentiality in the collection and retrieval of data. The study revealed that respondents were mostly aged 60-69, male, without formal education, unemployed, and without an income source. They prefer traditional medicines due to their affordability, availability, and cultural practices, but lack knowledge of the safe preparation, dosages, and contraindications of the medicines used. Commercially prepared medications are acknowledged, but respondents are not fully aware of proper administration instructions and dosage labels. Recommendations include disseminating approved herbal medicines and ensuring proper preparation, indications, and contraindications.
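
For reference, Slovin's formula determines the sample size n from a population of size N at a tolerated margin of error e:

\[ n = \frac{N}{1 + N e^{2}} \]

For example, a population of N = 200 at e = 0.05 gives n = 200 / (1 + 200 x 0.0025) ≈ 133.3, rounded up to 134. The community's actual population figure is not given in the abstract, so these numbers are purely illustrative.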

Keywords: traditional medicine, commercially prepared medicine, indigenous community, elderly adult

Procedia PDF Downloads 48
53 Quick Response Codes in Physio: A Simple Click to Long-Term Oxygen Therapy Education

Authors: K. W. Lee, C. M. Choi, H. C. Tsang, W. K. Fong, Y. K. Cheng, L. Y. Chan, C. K. Yuen, P. W. Lau, Y. L. To, K. C. Chow

Abstract:

A QR (Quick Response) code is a matrix barcode. It enables users to open websites, photos, and other information on mobile devices by simply scanning the code. In the usual long-term oxygen therapy (LTOT) arrangement, piles of LTOT-related information, such as leaflets from different oxygen service providers, are given to patients to choose an appropriate plan according to their needs. If these printed materials were transformed into electronic format (QR codes), it would be more environmentally friendly. More importantly, electronic materials covering LTOT equipment operation and dyspnoea-relieving techniques also empower patients in long-term disease management. The objective of this study is to investigate the effect of QR codes in patient education for new LTOT users. This study was carried out in medical wards of North District Hospital. Adult patients and relatives who followed commands, were able to use smartphones with internet services, and required LTOT arrangement on hospital discharge were recruited. In the LTOT arrangement, apart from the usual LTOT education booklets, which included patients' personal information (e.g., oxygen titration and six-minute walk test results), extra leaflets were given consisting of 1. QR codes for LTOT plans from different oxygen service providers, 2. education materials on dyspnoea management, and 3. instructions on LTOT equipment operation. Upon completion of the LTOT arrangement, a questionnaire about the use of QR codes in patient education was filled in by patients or relatives. A total of 10 new LTOT users were recruited from November 2017 to January 2018. Initially, 70% of them did not know anything about QR codes, but all of them understood their operation after a simple demonstration. 70% agreed that the codes were convenient to use (20% strongly agree, 40% agree, 10% somewhat agree). 80% agreed that QR codes could facilitate the retrieval of more LTOT-related information (10% strongly agree, 70% agree), while 90% agreed that we should continue delivering QR code leaflets to new LTOT users in the future (30% strongly agree, 40% agree, 20% somewhat agree). QR codes proved a convenient and environmentally friendly tool for delivering information, were relatively easy to introduce to new users, and received welcoming feedback from current users.
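
Producing the leaflet codes themselves is a one-liner per plan; below is a minimal Python sketch using the qrcode package. The provider names and URLs are placeholders, not the actual service providers' pages.

```python
import qrcode

plans = {
    "provider_a_plan": "https://example.org/ltot/provider-a",
    "provider_b_plan": "https://example.org/ltot/provider-b",
    "dyspnoea_management": "https://example.org/ltot/dyspnoea-tips",
}
for name, url in plans.items():
    img = qrcode.make(url)     # returns a PIL image of the QR code
    img.save(f"{name}.png")    # ready to place on the printed leaflet
```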

Keywords: long-term oxygen therapy, physiotherapy, patient education, QR code

Procedia PDF Downloads 124
52 A Postcolonial Feminist Exploration of Zulu Girl Child’s Position and Gender Dynamics in Religio-Cultural Context

Authors: G. T. Ntuli

Abstract:

This paper critically examines the gender dynamics of a Zulu girl child in her religio-cultural context from a postcolonial feminist perspective. As one of the formerly colonized ethnic groups in the South African context, the Zulu tribe used to have particular and contextual religio-cultural ways of bringing up a girl. These included traditional and cultural norms that no member of the community could infringe without serious repercussions from the community. However, the postcolonial social position of a girl child within this community became ambiguous and unpredictable due to colonial changes that enhanced gender dynamics and propelled tribal communities into deeper patriarchal structures. In an empirical study conducted within the Zulu context, which investigated the retrieval of ubuntombi (virginity) as a Zulu cultural heritage, identity, and sex education as a path to adulthood, it was found that a Zulu girl child's social position is geared towards double oppression due to the gender dynamics that she experiences in her lifetime. It is these gender dynamics that are examined in this paper from the postcolonial feminist perspective. These gender dynamics are at play from the birth of a girl child, through the developmental stage, to puberty and marriage rituals. These rituals and religio-cultural practices are meant to shape and mold a 'good woman' in the Zulu cultural context, but social gender inequality that elevates males over females propels women's social status into life-denying peripheral positions. Consequently, in the place of a 'good woman' in the communal view, an oppressed and dehumanized woman becomes the outcome of such gender dynamics, more often treated with contempt, despised, and violated in many demeaning ways. These not only leave women economically and socio-politically impoverished but also expose them to violence of all kinds, such as domestic, emotional, sexual, and gender-based violence, which is increasingly becoming a scourge in some sub-Saharan African countries, including South Africa. It is for this reason that this paper is significant, not only within the Zulu context where the research was conducted but also in all countries that practice and promote patriarchal tendencies in the name of religio-cultural practices. There is a need for a different outlook on what it means to be a 'good woman' in the cultural context, because if the goodness of a woman is determined by life-denying cultural practices, such practices need to be deconstructed and discarded.

Keywords: feminist, gender dynamics, postcolonial, religio-cultural, Zulu girl child

Procedia PDF Downloads 131
51 From Shallow Semantic Representation to Deeper One: Verb Decomposition Approach

Authors: Aliaksandr Huminski

Abstract:

Semantic Role Labeling (SRL), as a shallow semantic parsing approach, includes recognizing and labeling the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). Thus, SRL can answer key questions such as 'Who', 'When', 'What', 'Where' in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. SRL for (1) and (2) will be significantly different. For the verb put in (1) it is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as in the case of the verb drink, where the participant's feature of being liquid is shared with the verb. This capture amounts to a total fusion of meaning and cannot be decomposed in a direct way (in comparison with compound verbs like babysit or breastfeed). From this perspective, SRL looks too shallow to represent semantic structure. If the key point of semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it assumes by default that two different but semantically identical sentences must have the same semantic structure; otherwise, we will draw different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested. Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; and Vpr is a primitive verb, i.e., a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as: Vcp = Vpr + P; Vcf = Vpr + F. If all Vcp and Vcf are represented this way, then primitive verbs Vpr can be considered a canonical form for SRL. As a result, there will be no hidden participants caught by a verb, since all participants will be explicitly unfolded. An obvious example of Vpr is the verb go, which represents pure movement; the verb drink can then be represented as a man-made movement of liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation that is unique for semantically identical sentences. This leads to the unification of semantic representation, and the critical flaw of SRL described above is resolved.
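
A toy sketch of the proposed canonicalization: a small lexicon maps capturing verbs to a primitive verb plus the unfolded participant or feature, so that 'John buttered the bread' and 'John put butter on the bread' yield the same frame. The lexicon entries and role names are illustrative assumptions, not the paper's inventory.

```python
# verb -> (primitive verb, roles/features that the verb captures)
DECOMPOSITION = {
    "butter": ("put",  {"Patient": "butter"}),          # Vcp = Vpr + P
    "drink":  ("move", {"PatientFeature": "liquid"}),   # Vcf = Vpr + F
}

def canonical_frame(verb, roles):
    """Rewrite an SRL frame over the primitive verb, merging the unfolded
    participants/features into the explicit role structure."""
    if verb in DECOMPOSITION:
        primitive, captured = DECOMPOSITION[verb]
        return primitive, {**captured, **roles}
    return verb, roles

# 'John buttered the bread' -> same frame as 'John put butter on the bread'
print(canonical_frame("butter", {"Agent": "John", "Goal": "bread"}))
# ('put', {'Patient': 'butter', 'Agent': 'John', 'Goal': 'bread'})
```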

Keywords: decomposition, labeling, primitive verbs, semantic roles

Procedia PDF Downloads 343
50 Bionaut™: A Breakthrough Robotic Microdevice to Treat Non-Communicating Hydrocephalus in Both Adult and Pediatric Patients

Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher

Abstract:

Bionaut Labs, LLC is developing a minimally invasive robotic microdevice designed to treat non-communicating hydrocephalus in both adult and pediatric patients. The device utilizes biocompatible microsurgical particles (Bionaut™) that are specifically designed to safely and reliably perform accurate fenestration(s) in the 3rd ventricle, aqueduct of Sylvius, and/or trapped intraventricular cysts of the brain in order to re-establish normal cerebrospinal fluid flow dynamics and thereby balance and/or normalize intra/intercompartmental pressure. The Bionaut™ is navigated to the target via CSF or brain tissue in a minimally invasive fashion with precise control using real-time imaging. Upon reaching the pre-defined anatomical target, the external driver directs the specific microsurgical action defined to achieve the surgical goal. Notable features of the proposed protocol are: i) Bionaut™ access to the intraventricular target follows a clinically validated endoscopy trajectory that may not be feasible via 'traditional' rigid endoscopy; ii) the treatment is microsurgical, and no foreign materials are left behind post-procedure; iii) the Bionaut™ is an untethered device that is navigated through the subarachnoid and intraventricular compartments of the brain, following pre-designated non-linear trajectories as determined by the safest anatomical and physiological path; iv) the overall protocol involves minimally invasive delivery and post-operational retrieval of the surgical Bionaut™. The approach is expected to be suitable for treating pediatric patients 0-12 months old as well as adult patients with obstructive hydrocephalus who fail traditional shunts or are eligible for endoscopy. Current progress, including platform optimization, Bionaut™ control, and real-time imaging, as well as in vivo safety studies of the Bionauts™ in large animals, specifically the spine and brain of ovine models, will be discussed.

Keywords: Bionaut™, cerebrospinal fluid, CSF, fenestration, hydrocephalus, micro-robot, microsurgery

Procedia PDF Downloads 144
49 Storage Assignment Strategies to Reduce Manual Picking Errors with an Emphasis on an Ageing Workforce

Authors: Heiko Diefenbach, Christoph H. Glock

Abstract:

Order picking, i.e., the order-based retrieval of items in a warehouse, is an important time- and cost-intensive process for many logistic systems. Despite the ongoing trend of automation, most order picking systems are still manual picker-to-parts systems, where human pickers walk through the warehouse to collect ordered items. Human work in warehouses is not free from errors, and order pickers may at times pick the wrong item or the incorrect number of items. Errors can cause additional costs and significant correction efforts. Moreover, age might increase a person's likelihood of making mistakes. Hence, the negative impact of picking errors might increase for the aging workforce currently witnessed in many regions globally. A significant amount of research has focused on making order picking systems more efficient. Among other factors, storage assignment, i.e., the assignment of items to storage locations (e.g., shelves) within the warehouse, has been subject to optimization. Usually, the objective is to assign items to storage locations such that order picking times are minimized. Surprisingly, there is a lack of research concerned with picking errors and respective prevention approaches. This paper hypothesizes that the storage assignment of items can affect the probability of pick errors. For example, storing similar-looking items apart from one another might reduce confusion, and storing items that are hard to count or require a lot of counting at easy-to-access and easy-to-comprehend shelf heights might reduce the probability of picking the wrong number of items. Based on this hypothesis, the paper discusses how to incorporate error-prevention measures into mathematical models for storage assignment optimization. Various approaches with their respective benefits and shortcomings are presented and mathematically modeled. To investigate the newly developed models further, they are compared to conventional storage assignment strategies in a computational study. The study specifically investigates how the importance of error prevention increases as pickers become more prone to errors, for example, due to age. The results suggest that considering error-prevention measures in storage assignment can reduce error probabilities with only minor decreases in picking efficiency. The results might be especially relevant for an aging workforce.
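
One simple way to formalize the trade-off is to price each (item, location) pairing by expected picking time plus a weighted expected error cost, then solve the resulting linear assignment problem. The sketch below does this with SciPy; the cost model and the weight w_err are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign(items, locations, travel_time, error_prob, w_err=10.0):
    """travel_time[i, j]: expected picking time for item i at location j;
    error_prob[i, j]: pick-error probability for that pairing (e.g. higher
    for hard-to-count items on hard-to-reach shelves). Minimizes the
    combined cost over all one-to-one assignments."""
    cost = travel_time + w_err * error_prob
    _, cols = linear_sum_assignment(cost)
    return {item: locations[c] for item, c in zip(items, cols)}

rng = np.random.default_rng(0)
items = [f"item{i}" for i in range(5)]
shelves = [f"shelf{j}" for j in range(5)]
print(assign(items, shelves, rng.random((5, 5)), rng.random((5, 5)) * 0.05))
```

Raising w_err models a picker who is more error-prone (e.g., due to age): the optimizer then accepts somewhat longer travel times in exchange for lower-risk placements, mirroring the trade-off reported in the computational study.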

Keywords: an aging workforce, error prevention, order picking, storage assignment

Procedia PDF Downloads 179
48 Aerosol Direct Radiative Forcing Over the Indian Subcontinent: A Comparative Analysis from the Satellite Observation and Radiative Transfer Model

Authors: Shreya Srivastava, Sagnik Dey

Abstract:

Aerosol direct radiative forcing (ADRF) refers to the alteration of the Earth's energy balance by the scattering and absorption of solar radiation by aerosol particles. India experiences substantial ADRF due to high aerosol loading from various sources. These aerosols' radiative impact depends on their physical characteristics (such as size, shape, and composition) and atmospheric distribution. Quantifying ADRF is crucial for understanding aerosols' impact on the regional climate and the Earth's radiative budget. In this study, we have taken radiation data from the Clouds and the Earth's Radiant Energy System (CERES, spatial resolution 1° x 1°) for 22 years (2000-2021) over the Indian subcontinent. Except for a few locations, the short-wave ADRF exhibits aerosol cooling at the TOA (values ranging from +2.5 W/m2 to -22.5 W/m2). Cooling due to aerosols is more pronounced in the absence of clouds. Being an aerosol hotspot, the Indo-Gangetic Plain (IGP) shows higher negative ADRF. Aerosol Forcing Efficiency (AFE) shows a decreasing seasonal trend in winter (DJF) over the entire study region and an increasing trend over the IGP and western south India during the post-monsoon season (SON) in clear-sky conditions. Analysing atmospheric heating and AOD trends, we found that aerosol loading alone does not govern the change in atmospheric heating; aerosol composition and/or the vertical profile also contribute. We used Multi-angle Imaging SpectroRadiometer (MISR) Level-2 Version 23 aerosol products to look into aerosol composition. MISR incorporates 74 aerosol mixtures in its retrieval algorithm, based on size, shape, and absorbing properties. This aerosol mixture information was used to analyse long-term changes in aerosol composition and the dominant aerosol species corresponding to each aerosol forcing value. Further, the ADRF derived from this method is compared with around 35 studies across India in which a plane-parallel radiative transfer model was used, with model inputs taken from OPAC (Optical Properties of Aerosols and Clouds) utilizing only limited aerosol parameter measurements. The result shows a large overestimation of TOA warming by the latter (i.e., the model-based method).
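
For reference, the TOA direct forcing compared here is conventionally defined as the difference in the net (down minus up) shortwave flux with and without aerosols, with negative values denoting cooling; a standard statement is

\[ \mathrm{ADRF_{TOA}} = \left( F^{\downarrow} - F^{\uparrow} \right)_{\text{with aerosol}} - \left( F^{\downarrow} - F^{\uparrow} \right)_{\text{without aerosol}}, \qquad \mathrm{AFE} = \frac{\mathrm{ADRF}}{\mathrm{AOD}}, \]

where AOD is the aerosol optical depth. This is the textbook convention; the exact clear-sky compositing applied to the CERES fluxes in the study may differ in detail.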

Keywords: aerosol radiative forcing (ARF), aerosol composition, MISR, CERES, SBDART

Procedia PDF Downloads 26
47 A Review of Type 2 Diabetes and Diabetes-Related Cardiovascular Disease in Zambia

Authors: Mwenya Mubanga, Sula Mazimba

Abstract:

Background: In Zambia, much of the focus on nutrition and health has been on reducing micronutrient deficiencies, wasting, and underweight malnutrition, and not on the rising global projections of trends in obesity and type 2 diabetes. The aim of this review was to identify and collate studies on the prevalence of obesity, diabetes, and diabetes-related cardiovascular disease conducted in Zambia, to summarize their findings, and to identify areas that need further research. Methods: The Medical Literature Analysis and Retrieval System (MEDLINE) database was searched for peer-reviewed articles on the prevalence of, and factors associated with, obesity, type 2 diabetes, and diabetes-related cardiovascular disease amongst Zambian residents using a combination of search terms. The period of the search was from 1 January 2000 to 31 December 2016. We expanded the search terms to include all possible synonyms and spellings, and we additionally performed a manual search for other articles and the references of peer-reviewed articles. Results: In Zambia, the current prevalence of obesity and type 2 diabetes is estimated at 13%-16% and 2.0%-3.0%, respectively. Risk factors such as the adoption of Western dietary habits, the social stigmatization associated with rapid weight loss due to tuberculosis and/or the human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS), and rapid urbanization have all been blamed for fueling the increased risk of obesity and type 2 diabetes. However, unlike traditional Western populations, those with no formal education were less likely to be obese than those who attained secondary or tertiary level education. Approximately 30% of those surveyed were unaware of their diabetes diagnosis, and more than 60% were not on treatment despite a known diabetic status. Socio-demographic factors such as older age, female sex, urban dwelling, lack of tobacco use, and marital status were associated with an increased risk of obesity, impaired glucose tolerance, and type 2 diabetes. We were unable to identify studies that specifically looked at diabetes-related cardiovascular disease. Conclusion: Although the prevalence of obesity and type 2 diabetes in Zambia appears low, more representative studies focusing on parts of the country outside the main industrial zone need to be conducted, and research on diabetes-related cardiovascular disease is needed. National surveillance, monitoring, and evaluation of all non-communicable diseases need to be prioritized, and policies that address underweight, obesity, and type 2 diabetes developed.

Keywords: type 2 diabetes, Zambia, obesity, cardiovascular disease

Procedia PDF Downloads 214
46 Ubuntombi (Virginity) Among the Zulus: An Exploration of a Cultural Identity and Difference from a Postcolonial Feminist Perspective

Authors: Goodness Thandi Ntuli

Abstract:

The cultural practice of ubuntombi (virginity) among the Zulus is not easily understood from outside its cultural context. An empirical study conducted through interviews and focus group discussions about the retrieval of ubuntombi as a cultural practice within the Zulu cultural community indicated that a particular cultural identity and difference can be unearthed from this practice. Explored from the postcolonial feminist perspective, this cultural identity and difference is discerned in the way in which a Zulu young woman known as intombi (virgin) exercises power and authority over her own sexuality. Taking full control of her own sexuality from the cultural viewpoint enables her not only to exercise her uniqueness in the midst of multiculturalism and pluralism but also to assert her cultural identity of being intombi. The assertion of the Zulu young woman's cultural identity not only empowers her to stand on her life principles but also lifts her from the margins of the patriarchal society that would otherwise have kept her on the periphery. She views this as an opportunity for self-development and enhancement through educational opportunities that will enable her to secure a future with financial independence. The underlying belief is that once she has been educationally successful, she will secure a better job opportunity that enables her to be self-sufficient and not rely on any male provision for her sustenance. In this, she stands a better chance of not being victimized by the social patriarchal influences that generally keep women at the bottom of the socio-economic and political ladder. Consequently, ubuntombi as a Zulu heritage and cultural identity becomes instrumental in the empowerment of the young women who choose this cultural practice as their adopted lifestyle. In addition, it is a kind of self-empowerment with intrinsic motivation that works with the innate ability to resist any distraction from an individual's set goals; such motivation is a rare characteristic of achievers in life. Once these young women adhere to their specified life principles, nothing can stop them from achieving the dreams of their hearts. This includes socio-economic autonomy that will ensure their liberation and emancipation as women in the midst of the social and patriarchal challenges that militate against them in the hostile communities where they reside. Another hidden achievement would be to turn around the perception of being viewed as the 'other'; instead, they will have to be viewed differently, their difference lying in turning an archaic cultural practice into a modern tool of self-development and enhancement in contemporary society.

Keywords: cultural, difference, identity, postcolonial, ubuntombi, zulus

Procedia PDF Downloads 160
45 Barriers and Facilitators for Telehealth Use during Cervical Cancer Screening and Care: A Literature Review

Authors: Reuben Mugisha, Stella Bakibinga

Abstract:

The cervical cancer burden is a global threat, but even more so in low-income settings, where more than 85% of mortality cases occur due to the lack of sufficient screening programs and, consequently, of early detection of cancer and precancerous cells among women. Studies show that 3% to 35% of deaths could have been avoided through early screening, depending on prognosis, disease progression, and environmental and lifestyle factors. In this study, a systematic literature review is undertaken to understand the potential barriers and facilitators documented in previous studies on the application of telehealth in cervical cancer screening programs for early detection of cancer and precancerous cells. The study informs future work, especially in low-income settings, about lessons learned from previous studies and how best to prepare when planning to implement telehealth for cervical cancer screening. It further identifies the knowledge gaps in this research area and makes recommendations. Using a specified selection criterion, 15 different articles are analyzed based on each study's aim, theory or conceptual framework, method, findings, and conclusion. Results are then tabulated and presented thematically to better inform readers about emerging facts on barriers and facilitators to telehealth implementation as documented in the reviewed articles, and how these lead to evidence-informed conclusions relevant to telehealth implementation for cervical cancer screening. Preliminary findings of this study underscore that the use of a low-cost mobile colposcope is an appealing option in cervical cancer screening, particularly when coupled with onsite treatment of suspicious lesions. These tools relay cervical images to online databases for storage and retrieval, and they permit the integration of connected devices at the point of care to rapidly collect clinical data for further analysis of the prevalence of cervical dysplasia and cervical cancer. The results, however, reveal that population sensitization prior to the use of mobile colposcopy among patients, standardization of mobile colposcopy programs across screening partners, sufficient logistics and good connectivity, and experienced experts to review image cases at the point of care are important facilitators for implementing the mobile colposcope as a telehealth cervical cancer screening mechanism.

Keywords: cervical cancer screening, digital technology, hand-held colposcopy, knowledge-sharing

Procedia PDF Downloads 197
44 A Study Investigating Word Association Behaviour in People with Acquired Language and Communication Disorders

Authors: Angela Maria Fenu

Abstract:

The aim of this study was to better characterize the nature of word association responses in people with aphasia. The participants selected for the experimental group were 4 individuals with mild Broca's aphasia. The control group consisted of 51 cognitively intact age- and gender-matched individuals. The participants were asked to perform a word association task in which they had to say the first word they thought of when hearing each cue. The cue words (n=16) were Italian translations of the set of English cue words from a published study. The participants in the experimental group were administered the word association test every two weeks over a period of two months while they received speech-language therapy. A combination of analytical approaches was used to measure the data. To analyse the different patterns of word association responses in the two groups, the nature of the relationship between the cue and the response was examined: responses were divided into five categories of association. To investigate the similarity between aphasic and non-aphasic subjects, the stereotypy of responses was examined. While certain stimulus words (nouns, adjectives) elicited responses from Broca's aphasics that tended to resemble those made by non-aphasic subjects, others (adverbs, verbs) tended to elicit responses different from those given by normal subjects. This suggests that some mechanisms underlying certain types of associations are degraded in aphasic individuals, while others show little evidence of disruption. The high number of paradigmatic associations given in response to a noun or an adjective might imply that the mechanisms, largely semantic, underlying paradigmatic associations are relatively preserved in Broca's aphasia, but it might also mean that some words are more easily processed depending on their grammatical class (nouns, adjectives). The most significant variation was noticed when the grammatical class of the cue word was an adverb. Unlike the normal individuals, the experimental subjects gave mostly idiosyncratic associations, which are often produced when the attempt to give a paradigmatic response fails. In turn, the failure to retrieve paradigmatic responses when the cue is an adverb might suggest that Broca's aphasics are more sensitive to this grammatical class. The findings from this study suggest that research on word associations in people with aphasia can yield important data concerning the specific lexical retrieval impairments that characterize the different types of aphasia and the various treatments that might positively influence the kinds of word association responses affected by language disruption.
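To illustrate the stereotypy measure used to compare the two groups, here is a minimal sketch: for each cue, it computes the share of respondents producing the modal (most common) response, with higher values indicating more stereotyped responding. The cue-response data below are hypothetical placeholders, not the study's data.

```python
from collections import Counter

# Hypothetical Italian cue words mapped to lists of elicited responses
responses = {
    "cane":  ["gatto", "gatto", "osso", "gatto", "guinzaglio"],
    "rosso": ["colore", "sangue", "colore", "colore", "fuoco"],
}

for cue, words in responses.items():
    modal, count = Counter(words).most_common(1)[0]  # most frequent response
    stereotypy = count / len(words)                  # share giving the modal response
    print(f"{cue}: modal response '{modal}', stereotypy {stereotypy:.2f}")
```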

Keywords: aphasia therapy, clinical linguistics, word-association behaviour, mental lexicon

Procedia PDF Downloads 56
43 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation

Authors: Nawras Kurzom, Avi Mendelsohn

Abstract:

The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly due to individual differences in music perception and preference. A prominent musical feature known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonies, can elicit feelings of tension, which can, in turn, affect memory formation for concomitant information. The aim of the presented studies was to explore how declarative memory formation is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory for declarative information were studied in two ways: 1) by evoking tension within continuous music pieces through delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of the skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) under different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and the physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance compared to silence. We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory in those who were less sensitive to the perception of musical tension. Musical tension thus interacts in complex ways with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and of the diverse effects that music has on ongoing processing of declarative information.
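As an illustration of how the roughness of isolated chords might be quantified, below is a minimal sketch of a Plomp-Levelt-style pairwise roughness estimate in the parameterization popularized by Sethares. The partial frequencies, amplitudes, and constants are standard modeling assumptions, not the MIR pipeline actually used in these experiments.

```python
import numpy as np

# Plomp-Levelt / Sethares-style roughness for one pair of partials;
# constants (0.24, 0.021, 19, 3.5, 5.75) follow Sethares (1993).
def pair_roughness(f1, a1, f2, a2):
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19.0)   # critical-bandwidth scaling
    d = f_hi - f_lo                    # frequency difference in Hz
    return a1 * a2 * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

# Total chord roughness: sum over all unordered pairs of partials
def chord_roughness(freqs, amps):
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total += pair_roughness(freqs[i], amps[i], freqs[j], amps[j])
    return total

# Hypothetical example (fundamentals only): a consonant fifth vs. a minor second
print(chord_roughness([261.6, 392.0], [1.0, 1.0]))  # C4 + G4: low roughness (~0.01)
print(chord_roughness([261.6, 277.2], [1.0, 1.0]))  # C4 + C#4: high roughness (~0.17)
```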

Keywords: musical tension, declarative memory, learning and memory, musical perception

Procedia PDF Downloads 73
42 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest must be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We then study secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected privately from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
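To make the coding idea concrete, here is a minimal sketch of straggler-resilient polynomial-coded matrix multiplication in the style of Yu, Maddah-Ali, and Avestimehr. The block counts, worker count, and evaluation points are illustrative assumptions, and the privacy/secrecy layer discussed in the abstract (random padding shares, PIR queries) is deliberately omitted.

```python
import numpy as np

m, n = 2, 2                  # row blocks of X, column blocks of Y
N = 6                        # workers; recovery threshold is m*n = 4
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((3, 4))

Xb = np.split(X, m, axis=0)                 # row blocks X_0, X_1
Yb = np.split(Y, n, axis=1)                 # column blocks Y_0, Y_1
alphas = np.arange(1, N + 1, dtype=float)   # distinct evaluation points

# Encoding: worker with point a receives Xt(a), Yt(a) and multiplies them;
# the product is a degree-(m*n - 1) polynomial in a, evaluated at a.
def worker_task(a):
    Xt = sum(Xb[i] * a**i for i in range(m))
    Yt = sum(Yb[j] * a**(j * m) for j in range(n))
    return Xt @ Yt

# Pretend only the first m*n workers respond; stragglers are ignored.
done = alphas[:m * n]
results = np.stack([worker_task(a) for a in done])  # shape (m*n, rows, cols)

# Decoding: interpolate each entry of the product polynomial.
V = np.vander(done, m * n, increasing=True)         # V[k, t] = done[k]**t
coeffs = np.linalg.solve(V, results.reshape(m * n, -1))
blocks = coeffs.reshape(m * n, *results.shape[1:])

# Coefficient t = i + j*m is the block X_i @ Y_j of W = X @ Y.
W = np.block([[blocks[i + j * m] for j in range(n)] for i in range(m)])
assert np.allclose(W, X @ Y)
```

Any m*n of the N workers suffice to decode, so up to N - m*n stragglers can be tolerated without repetition.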

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 92
41 Evaluation of Diagnostic Values of Culture, Rapid Urease Test, and Histopathology in the Diagnosis of Helicobacter pylori Infection and in vitro Effects of Various Antimicrobials against Helicobacter pylori

Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor

Abstract:

Aim: The aim of this study was to investigate the presence of Helicobacter pylori (H. pylori) infection by culture, histology, and the rapid urease test (RUT) in gastric antrum biopsy samples taken from patients presenting with dyspeptic complaints, and to determine the resistance rates of amoxicillin, clarithromycin, levofloxacin, and metronidazole against the isolated H. pylori strains by E-test. Material and Methods: A total of 278 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and July 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in biopsy samples was investigated by culture (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (Giemsa, Hematoxylin and Eosin staining), and RUT (CLOtest, Cimberly-Clark, USA). Antimicrobial resistance of the isolates against amoxicillin, clarithromycin, levofloxacin, and metronidazole was determined by the E-test method (bioMerieux, France). As the gold standard for the diagnosis of H. pylori, either a positive culture alone or positivity on both histology and RUT together was accepted. Sensitivity and specificity for histology and RUT were calculated with culture as the gold standard; sensitivity and specificity for culture were calculated with the co-positivity of histology and RUT as the gold standard. Results: H. pylori was detected in 140 of 278 patients by culture and in 174 of 278 patients by histology; H. pylori positivity was also found in 191 patients by RUT. According to the gold standard criteria, false negative results were found in 39 cases by culture, 17 cases by histology, and 8 cases by RUT. The sensitivity and specificity of culture, histology, and RUT were 76.5% and 88.3%, 87.8% and 63%, and 94.2% and 57.2%, respectively. Antibiotic resistance was investigated by E-test in the 140 H. pylori strains isolated in culture. The resistance rates of the strains to amoxicillin, clarithromycin, levofloxacin, and metronidazole were 9 (6.4%), 22 (15.7%), 17 (12.1%), and 57 (40.7%), respectively. Conclusion: In our study, among culture, histology, and RUT, RUT was the most sensitive and culture the most specific test. Although the specificity of culture was high, its sensitivity was quite low compared with the other methods. The low sensitivity of H. pylori culture may be caused by factors that reduce the chances of direct isolation, such as non-viable bacteria, the fastidious nature of the microorganism, and clinical sample retrieval and transport conditions.
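For concreteness, the reported sensitivity and specificity figures follow from standard 2x2 counts. The sketch below reconstructs the histology-versus-culture comparison from the figures above (140 culture-positive and 138 culture-negative patients, 17 histology false negatives); the true-negative count of 87 is inferred from the reported 63% specificity and is therefore approximate.

```python
def sens_spec(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
    return tp / (tp + fn), tn / (tn + fp)

# Histology against culture as gold standard: TP = 140 - 17 = 123,
# FN = 17; TN = 87 inferred from 63% of 138 culture-negative patients.
sens, spec = sens_spec(tp=140 - 17, fn=17, tn=87, fp=138 - 87)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # ~87.9% and ~63.0%
```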

Keywords: antimicrobial resistance, culture, histology, H. pylori, RUT

Procedia PDF Downloads 142
40 Electronic Waste Analysis and Characterization Study: Management Input for Highly Urbanized Cities

Authors: Jilbert Novelero, Oliver Mariano

Abstract:

In a world where technological evolution and competition to create innovative products are at their peak, problems with electronic waste (E-waste) are becoming a global concern. E-waste is any electrical or electronic device that has reached the end of its useful life. The major issues are the volume of E-waste and the raw materials used in crafting it, which are non-biodegradable and contain hazardous substances toxic to human health and the environment. The objective of this study was to gather baseline data on the composition of E-waste in the solid waste stream and to determine the top 5 E-waste categories in a highly urbanized city. Recommendations for managing and reducing these wastes were provided, which may serve as a guide for acceptance and implementation in the locality. Pasig City was the chosen beneficiary of the research output, and through the collaboration of the City Government of Pasig and its Solid Waste Management Office (SWMO), the researcher successfully conducted the Electronic Waste Analysis and Characterization Study (E-WACS) to achieve these objectives. E-WACS, conducted in April 2019, showed that E-waste ranked 4th, comprising 10.39% of the overall solid waste volume. Out of 345,127.24 kg of daily domestic waste generated in the city, E-waste accounts for 35,858.72 kg, or an average of about 40 grams of E-waste per person per day. The top 5 E-waste categories were then classified after the analysis, as computed in the sketch that follows. The category that ranked first was office and telecommunications equipment, which made up 63.18% of the total generated E-waste. Second was the household appliances category with 21.13%. Third was the lighting devices category with 8.17%. Fourth was the consumer electronics and batteries category with 5.97%, and fifth was the wires and cables category, which comprised 1.41% of the average generated E-waste samples. One of the recommendations provided in this research is the implementation of the Pasig City Waste Advantage Card. The card can be used as a privilege card, with earned points convertible to services such as haircuts, massages, dental services, and medical check-ups. Another recommendation is for the LGU to encourage dialogue with technology and electronics manufacturers and distributors, both international and local, to plan the retrieval and disposal of E-waste in accordance with the Extended Producer Responsibility (EPR) policy, under which producers are given significant responsibility for the treatment and disposal of post-consumer products.
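The headline quantities follow from simple arithmetic on the reported figures. A minimal sketch (category labels and percentages are taken from the study; variable names and the per-category breakdown are illustrative):

```python
total_daily_waste_kg = 345_127.24        # reported city-wide daily generation
ewaste_share = 0.1039                    # E-waste fraction of the waste stream
ewaste_kg = total_daily_waste_kg * ewaste_share
print(f"daily E-waste: {ewaste_kg:,.2f} kg")   # ~35,858.72 kg, as reported

# Category shares from the study (percent of generated E-waste)
categories = {
    "office and telecommunications equipment": 63.18,
    "household appliances": 21.13,
    "lighting devices": 8.17,
    "consumer electronics and batteries": 5.97,
    "wires and cables": 1.41,
}
for name, pct in categories.items():
    print(f"{name}: {ewaste_kg * pct / 100:,.1f} kg/day")
```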

Keywords: E-waste, E-WACS, E-waste characterization, electronic waste, electronic waste analysis

Procedia PDF Downloads 97
39 Virtual Team Performance: A Transactive Memory System Perspective

Authors: Belbaly Nassim

Abstract:

Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the exposure companies experience when their employees face globalization and decentralization pressures, which makes monitoring VT performance difficult. Organizations are regularly limited by misalignment between the behavioral capabilities of a team's dispersed competences and its knowledge capabilities, and by the way trust issues interplay with and influence these VT dimensions and the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs efficiently manage their dispersed expertise, skills, and knowledge to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. TMS can be understood as the composition of a structural component, residing in individual knowledge, and a set of communication processes among individuals. Individual knowledge is shared as it is retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. TMS uses the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. In fact, VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity, and this study gives researchers insight into using TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.

Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination

Procedia PDF Downloads 142