Search results for: automated document processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4970

3080 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems that determines the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, long common in computer vision (CV), has lately begun to shape NLP approaches and language models (LMs). This gave a sudden rise to the usage of pretrained language models (PTMs), which contain language representations obtained by training on large datasets with self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream-task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches were evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). MUSE is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed their contribution to model performance was insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and the SVM around 83%. The multilingual MUSE model shows better results than SVM but still performs worse than the monolingual BERT model.
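
The paper publishes no code; as a rough, hypothetical illustration of the traditional baseline it evaluates, a bag-of-n-grams SVM classifier in scikit-learn might look like the sketch below. The toy Turkish comments and labels are invented placeholders, not the company corpus:

```python
# Minimal sketch of an SVM over bag-of-n-grams, assuming a labeled comment corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = ["harika bir urun", "cok kotu hizmet", "fena degil"]  # placeholder comments
labels = ["positive", "negative", "neutral"]                     # positive-neutral-negative axis

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word uni- and bigrams as the n-gram bag
    LinearSVC(),                          # linear SVM classifier
)
model.fit(comments, labels)
print(model.predict(["servis cok hizliydi"]))
```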

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 124
3079 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models

Authors: Ozan Kahraman, Hao Feng

Abstract:

Several models, such as the Weibull, Modified Gompertz, Biphasic linear, and Log-logistic models, have been proposed to describe non-linear inactivation kinetics and have been used to fit non-linear inactivation data of several microorganisms treated by heat, high-pressure processing, or pulsed electric fields. By contrast, most ultrasonic inactivation studies have employed first-order kinetic parameters (D-values and z-values) to describe the reduction in microbial survival counts. This study was conducted to analyze E. coli O157:H7 inactivation data using five microbial survival models: First-order, Weibull, Modified Gompertz, Biphasic linear, and Log-logistic. The residual sum of squares and the total sum of squares criteria were used to evaluate the models. The statistical indices of the kinetic models were used to fit inactivation data for E. coli O157:H7 treated by MTS at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and Biphasic models best fit the data for MTS treatment, as shown by high R² values. The other non-linear kinetic models, namely the Modified Gompertz and Log-logistic models, as well as the First-order model, did not provide a better fit to the MTS data than the Weibull and Biphasic models. The data found in this study did not follow first-order kinetics, possibly because cells sensitive to ultrasound treatment were inactivated first, resulting in a fast initial inactivation period, while those resistant to ultrasound were killed more slowly. The Weibull and Biphasic models were found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
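
The abstract names the models without their equations. For reference, the two best-fitting models are commonly parameterized in the predictive microbiology literature as follows (standard forms, not quoted from the paper):

```latex
% Weibull model: \delta is the time for the first log10 reduction, p the shape parameter
\log_{10}\frac{N(t)}{N_0} = -\left(\frac{t}{\delta}\right)^{p}

% Biphasic linear (two-population) model: f is the fraction of the sensitive
% subpopulation, k_1 and k_2 the inactivation rates of the two subpopulations
\log_{10}\frac{N(t)}{N_0} = \log_{10}\left[f\,e^{-k_1 t} + (1-f)\,e^{-k_2 t}\right]
```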

Keywords: Weibull, Biphasic, MTS, kinetic models, E. coli O157:H7

Procedia PDF Downloads 351
3078 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance

Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu

Abstract:

Vision is considered the most important sense in humans, without which leading a normal life can often be difficult. Many existing smart canes for the visually impaired use an ultrasonic transducer for obstacle detection to help them navigate. Though the basic smart cane increases the safety of its users, it does not help fill the void of visual loss. This paper introduces the concept of Digi-Buddy, an evolved smart cane for the visually impaired. The cane consists of several modules: apart from the basic obstacle-detection features, Digi-Buddy assists the user by capturing video/images with a wide-angled camera and streaming them to a server, which then detects the objects using a deep convolutional neural network. In addition to determining what a particular image/object is, the distance of the object is assessed by the ultrasonic transducer. A sound-generation application, modelled with the help of natural language processing, is used to convert the processed images/objects into audio. The detected object is signified by its name, which is transmitted to the user through Bluetooth earphones. Object detection is extended to facial recognition, which matches the faces of people the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic intimation alarm, which is triggered when the user is in an emergency. If the user recovers within a set time, a button provisioned in the cane stops the alarm; otherwise, an automatic intimation about the whereabouts of the user, obtained via GPS, is sent to friends and family. In addition to the safety and security offered by existing smart canes, the proposed concept, to be implemented as a prototype, helps the visually impaired visualize their surroundings through audio in a more amicable way.

Keywords: artificial intelligence, facial recognition, natural language processing, internet of things

Procedia PDF Downloads 334
3077 Automated Distribution System Management: Substation Remote Diagnostic and Operation Solution for Obafemi Awolowo University

Authors: Aderonke Oluseun Akinwumi, Olusola A. Komolaf

Abstract:

This paper gives information about the wide array of challenges facing both electric utilities and consumers in the distribution systems of developing countries, using Obafemi Awolowo University, Ile-Ife, Nigeria as a case study. It also proffers a cost-effective solution through remote monitoring, diagnostics, and operation of distribution networks without compromising system reliability. As utilities move from manned and unintelligent networks to completely unmanned smart grids, switching activities at substations and feeders will be managed and controlled remotely by dedicated systems; hence this design. The Substation Remote Diagnostic and Operation Solution (sRDOs) remotely monitors the load on medium voltage (MV) and low voltage (LV) feeders as well as distribution transformers, and allows the utility to disconnect non-paying customers with no extra resource deployment and without interrupting supply to paying customers. Implementation of this design improved the lifetime of key distribution infrastructure by automatically isolating feeders during overload conditions and, more importantly, by isolating erring consumers. This increased the ratio of revenue generated from electricity bills to total network load.

Keywords: electric utility, consumers, remote monitoring, diagnostic, system reliability, manned and unintelligent networks, unmanned smart grids, switching activities, medium voltage, low voltage, distribution transformer

Procedia PDF Downloads 114
3076 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most acceptable means of communication, through which we can quickly exchange our feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is quicker and easier to give speech commands than to type them, and likewise easier to listen to audio played from a device than to read its output. With robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline”: an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of these modules, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that carries information about the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model; based on user preference, one can introduce a new speech command format by including some examples of that format in the prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. All object detection projects can be upgraded with this pipeline so that one can give speech commands and have the output played from the device.
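
The control flow of such a pipeline might be sketched as follows; run_asr, summarize, detect_objects, and synthesize_speech are hypothetical stand-ins for the QuartzNet, GPT-3, YOLO, and text-to-speech modules, stubbed here so the sketch runs:

```python
# Illustrative control-flow sketch only; the four stage functions are stubs,
# not the paper's OpenVINO models.
import cv2  # OpenCV, used here for frame extraction

def run_asr(audio_path):           # stub: speech -> text (QuartzNet in the paper)
    return "find the dog between 5 and 20 seconds"

def summarize(text):               # stub: targets and interval (GPT-3 in the paper)
    return ["dog"], 5.0, 20.0

def detect_objects(frame):         # stub: YOLO labels for one frame
    return ["dog"]

def synthesize_speech(text):       # stub: text -> audio playback
    print("[TTS]", text)

def process_command(audio_path, video_path):
    targets, start_s, end_s = summarize(run_asr(audio_path))
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS metadata is missing
    hits, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if start_s <= frame_idx / fps <= end_s:
            if any(lbl in targets for lbl in detect_objects(frame)):
                hits.append(frame_idx)        # frame numbers containing targets
        frame_idx += 1
    cap.release()
    report = "Target found in frames: " + ", ".join(map(str, hits))
    synthesize_speech(report)
    return report

process_command("command.wav", "input.mp4")   # hypothetical file names
```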

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 64
3075 Development of a Matlab® Program for the Bi-Dimensional Truss Analysis Using the Stiffness Matrix Method

Authors: Angel G. De Leon Hernandez

Abstract:

A structure is defined as a physical system or, in certain cases, an arrangement of connected elements capable of bearing certain loads. Structures are present in every part of daily life, e.g., in the design of buildings, vehicles, and mechanisms. The main goal of a structural designer is to develop a secure, aesthetic, and maintainable system, considering the constraints imposed in each case. With the advances in technology during the last decades, the capability to solve engineering problems has increased enormously. Nowadays computers play a critical role in structural analysis; unfortunately, for university students the vast majority of this software is inaccessible due to its complexity and cost, even when the software manufacturers offer student versions. This is exactly what motivated the idea of developing a more accessible and easy-to-use computing tool. The program is designed as a tool for university students enrolled in courses related to structural analysis and design, as a complementary instrument for achieving a better understanding of this area and avoiding tedious calculations. The program can also be useful for graduate engineers in the field of structural design and analysis. A graphical user interface is included to make the program simpler to operate and the requested information and obtained results easier to understand. The present document includes the theoretical basis on which the program relies to solve the structural analysis, the logical path followed in developing the program, the theoretical results, a discussion of the results, and the validation of those results.
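
As a minimal sketch of the stiffness matrix method the program implements (written here in Python rather than the authors' Matlab, with assumed material properties and geometry), a 2D truss element matrix and a small two-bar example might look like this:

```python
# Stiffness matrix method for a 2D truss: element matrix, assembly, and solve.
import numpy as np

def bar_stiffness(E, A, x1, y1, x2, y2):
    """Global 4x4 stiffness matrix of a 2D truss bar (pin-jointed, axial only)."""
    L = np.hypot(x2 - x1, y2 - y1)
    c, s = (x2 - x1) / L, (y2 - y1) / L          # direction cosines
    return (E * A / L) * np.array([
        [ c*c,  c*s, -c*c, -c*s],
        [ c*s,  s*s, -c*s, -s*s],
        [-c*c, -c*s,  c*c,  c*s],
        [-c*s, -s*s,  c*s,  s*s],
    ])

E, A = 200e9, 1e-4                               # assumed steel bar, 1 cm^2 section
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
bars = [(0, 2), (1, 2)]                          # two bars meeting at node 2

K = np.zeros((6, 6))                             # 2 DOFs per node
for i, j in bars:
    dofs = [2*i, 2*i + 1, 2*j, 2*j + 1]
    K[np.ix_(dofs, dofs)] += bar_stiffness(E, A, *nodes[i], *nodes[j])

F = np.zeros(6)
F[5] = -1000.0                                   # 1 kN downward load at node 2
free = [4, 5]                                    # nodes 0 and 1 are pinned supports
u = np.linalg.solve(K[np.ix_(free, free)], F[free])
print("displacements at node 2 (m):", u)
```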

Keywords: stiffness matrix method, structural analysis, Matlab® applications, programming

Procedia PDF Downloads 105
3074 Friction Stir Processing of the AA7075T7352 Aluminum Alloy: Microstructures, Mechanical Properties and Texture Characteristics

Authors: Roopchand Tandon, Zaheer Khan Yusufzai, R. Manna, R. K. Mandal

Abstract:

The present work describes the microstructures, mechanical properties, and texture characteristics of the friction stir processed AA7075T7352 aluminum alloy. Phases were analyzed with the help of an X-ray diffractometer (XRD) and a transmission electron microscope (TEM), along with a differential scanning calorimeter (DSC). Depth-wise microstructures and dislocation characteristics in the nugget zone of the friction stir processed specimens were studied using bright-field (BF) and weak-beam dark-field (WBDF) TEM micrographs; variations in the microstructures as well as the dislocation characteristics were the noteworthy features found. XRD analysis displays changes in the chemistry as well as the size of the phases in the nugget and heat-affected zones (Nugget and HAZ), whereas the base metal (BM) microstructures remain unaffected. High-density dislocations were noticed in the nugget regions of the processed specimen, along with the formation of dislocation contours and tangles. The η′ and η phases, along with the GP zones, were completely dissolved and trapped by the dislocations. These observations corroborate the improved mechanical as well as stress corrosion cracking (SCC) performance. Bulk texture and residual stress measurements were made with a Panalytical Empyrean MRD system with Co-Kα radiation. The nugget zone (NZ) displays compressive residual stress compared to the thermo-mechanically affected (TM) and heat-affected zones (HAZ). Typical f.c.c. deformation texture components (e.g., Copper, Brass, and Goss) were seen. Such a phenomenon is attributed to the enhanced hardening as well as other mechanical performance of the alloy. Mechanical characterization was done using tensile tests and an Anton Paar instrumented micro-hardness tester. An enhancement in yield strength from 89 MPa to 170 MPa is reported; the highest hardness value was found in the nugget zone of the processed specimens.

Keywords: aluminum alloy, mechanical characterization, texture characteristics, friction stir processing

Procedia PDF Downloads 84
3073 Detecting Hate Speech and Cyberbullying Using Natural Language Processing

Authors: Nádia Pereira, Paula Ferreira, Sofia Francisco, Sofia Oliveira, Sidclay Souza, Paula Paulino, Ana Margarida Veiga Simão

Abstract:

Social media has progressed into a platform for hate speech among its users, and thus there is an increasing need to develop automatic detection classifiers of offense and conflict to help decrease the prevalence of such incidents. Online communication can be used to intentionally harm someone, which is why such classifiers could be essential in social networks. A possible application of these classifiers is the automatic detection of cyberbullying. Even though identifying the aggressive language used in online interactions is important for building cyberbullying datasets, other criteria must also be considered: being able to capture language indicative of the intent to harm others in a specific context of online interaction is fundamental. Offense and hate speech may be the foundation of online conflicts, which have become common in social media and are an emergent research focus in machine learning and natural language processing. This study presents two Portuguese-language offense-related datasets, which serve as examples for future research and extend the study of the topic. The first is similar to other offense-detection datasets and is entitled the Aggressiveness dataset. The second is novel in its use of the history of interaction between users and is entitled the Conflicts/Attacks dataset. Both datasets were developed in phases. Firstly, we performed a content analysis of verbal aggression witnessed by adolescents in situations of cyberbullying. Secondly, we computed frequency analyses from the previous phase to gather lexical and linguistic cues used to identify potentially aggressive conflicts and attacks posted on Twitter. Thirdly, thorough annotation of real tweets was performed by independent postgraduate educational psychologists with experience in cyberbullying research. Lastly, we benchmarked these datasets with machine learning classifiers.
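
A hedged sketch of the benchmarking step follows; the Portuguese examples, labels, and choice of classifiers are invented placeholders, not the authors' datasets or models:

```python
# Cross-validated comparison of standard text classifiers on a toy labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["exemplo de tweet agressivo", "mensagem neutra sobre futebol",
          "outro ataque verbal", "comentario amigavel",
          "mais um insulto direto", "postagem comum do dia"]   # placeholders
labels = [1, 0, 1, 0, 1, 0]                                    # 1 = aggressive

for clf in [LogisticRegression(max_iter=1000), MultinomialNB(), LinearSVC()]:
    pipe = make_pipeline(TfidfVectorizer(), clf)               # n-gram features + classifier
    scores = cross_val_score(pipe, tweets, labels, cv=3)
    print(type(clf).__name__, scores.mean())
```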

Keywords: aggression, classifiers, cyberbullying, datasets, hate speech, machine learning

Procedia PDF Downloads 209
3072 Exploring the Cultural Significance of Mural Paintings in the Tombs of Gilan, Iran: Evaluation of Drawn Figures

Authors: Zeinab Mirabulqasemi, Gholamali Hatam

Abstract:

This article discusses the significance of mural paintings in Iranian culture, particularly within the context of religious tombs known as Imamzadehs. These tombs, dedicated to Shiite imams and other revered religious figures, serve as important religious and communal spaces. The tradition of tomb construction evolved from early Islamic practices, gradually transforming burial sites into places of worship. In the Gilan region of Iran, these tombs hold a revered status, serving as focal points for religious observances and social gatherings. The murals adorning these tombs often depict religious motifs, with a particular emphasis on events like the Day of Judgment and the martyrdom of the Imams, notably the saga of Ashura. These paintings also reflect the community's social perspectives and historical allegiances. Various architectural styles are employed in constructing these tombs, including Islamic, traditional, local, and aesthetic architecture. However, the region's climate poses challenges to the preservation of these structures and their murals. Despite these challenges, efforts are made to document and preserve these artworks to ensure their accessibility for future generations. This research also studies tomb paintings by adopting a multifaceted approach, including library research, image analysis, and field research. Finally, it examines the portrayal of significant figures such as the Shiite imams, prophets, and Imamzadehs within these murals, highlighting their thematic significance and cultural importance.

Keywords: cultural ritual, Shiite imams, mural, belief foundations, religious paintings

Procedia PDF Downloads 34
3071 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with RMSE = 3.343.
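
One hypothetical form such a pixel-value approach could take (the legend colours and values below are invented, not the paper's): map each sampled map pixel to the nearest colour in a rainfall legend, then score the estimates with RMSE:

```python
# Nearest-legend-colour lookup plus RMSE scoring; all numbers are illustrative.
import numpy as np

legend_colors = np.array([[200, 200, 255], [100, 100, 255], [0, 0, 255]])  # assumed legend
legend_mm = np.array([5.0, 20.0, 50.0])          # rainfall (mm) per legend colour

def rainfall_from_pixels(pixels):
    # pixels: (n, 3) RGB values sampled from a daily rainfall map
    d = np.linalg.norm(pixels[:, None, :] - legend_colors[None, :, :], axis=2)
    return legend_mm[d.argmin(axis=1)]           # value of the nearest legend colour

observed = np.array([6.0, 18.0, 47.0])           # hypothetical gauge readings
estimated = rainfall_from_pixels(np.array([[198, 201, 250], [99, 102, 250], [2, 1, 250]]))
rmse = np.sqrt(np.mean((estimated - observed) ** 2))
print("RMSE:", rmse)
```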

Keywords: daily rainfall, image processing, approximation, pixel value data

Procedia PDF Downloads 374
3070 The Impact of Legislation on Waste and Losses in the Food Processing Sector in the UK/EU

Authors: David Lloyd, David Owen, Martin Jardine

Abstract:

Introduction: European weight regulations for food products require a full understanding of the regulatory guidelines to assure compliance. It is suggested that the complexity of the regulation leads to practices that result in the overfilling of food packages by food processors. Purpose: To establish current practices among food processors and the financial, sustainability, and societal impacts of ineffective food production practices on the food supply chain. Methods: An analysis of food-packing controls at 10 companies across varying food categories, and quantitative research with a further 15 food processors on their confidence in the weight-control analysis of finished food packs within their organisation. Results: A process-floor analysis of manufacturing operations focussing on 10 products found overfill of packages ranging from 4.8% to 20.2%. Standard deviation figures for all products showed a potential for reducing the average pack weight whilst still retaining the legal status of the product. In 20% of cases an automatic weight-analysis machine was in situ, yet packs were still significantly overweight. Collateral impacts noted included the effect of overfill on raw material purchases and added food miles, often on a global basis, with one raw material alone creating 10,000 extra food miles due to the poor weight control of the processing unit. A case study of a meat and a bakery product will be discussed, showing the impact of poor controls resulting from complex legislation. The case studies will highlight extra energy costs in production and the impact of the extra weight on fuel usage. If successful, a risk assessment model used primarily for food safety but adapted to identify waste/sustainability risks will be discussed within the presentation.

Keywords: legislation, overfill, profile, waste

Procedia PDF Downloads 388
3069 Legal Contestation of Non-Legal Norms: The Case of Humanitarian Intervention Norm between 1999 and 2018

Authors: Nazli Ustunes Demirhan

Abstract:

Norms of any nature are subject to pressures of change throughout their lifespans, as they are interpreted and re-interpreted every time they are used rhetorically or practically by international actors. The inevitable contestation of different interpretations may lead to an erosion of a norm, as well as to its strengthening. This paper aims to question the role of formal legality in the change of norm strength, using a norm contestation framework and a multidimensional conceptualization of norm strength. It argues that the role of legality is not necessarily linked to the formal legal characteristics of a norm but concerns the legality of the contestation processes. To demonstrate this argument, the paper examines the evolutionary path of the humanitarian intervention norm as a case study. Humanitarian intervention, a norm of very low formal legality, has been subject to numerous cycles of contestation, demonstrating a fluctuating pattern of norm strength. With the purpose of examining the existence and role of legality in the selected contestation periods from 1999 to 2017, this paper uses the process tracing method with a detailed document analysis of Security Council documents, including decisions, resolutions, meeting minutes, and press releases, as well as individual country statements. Through the empirical analysis, it is demonstrated that the legality of the contestation processes has a positive effect at least on the authoritativeness dimension of norm strength. This study tries to contribute to the developing dialogue between the international relations (IR) and international law (IL) disciplines with a better-tuned understanding of legality. It connects to further questions in the IR/IL nexus relating to the value added of norm legality and the politics of legalization, as well as better international policies for norm reinforcement.

Keywords: humanitarian intervention, legality, norm contestation, norm dynamics, responsibility to protect

Procedia PDF Downloads 134
3068 Localization of Frontal and Temporal Speech Areas in Brain Tumor Patients by Their Structural Connections with Probabilistic Tractography

Authors: B.Shukir, H.Woo, P.Barzo, D.Kis

Abstract:

Preoperative brain mapping in tumors involving the speech areas plays an important role in reducing surgical risks. Functional magnetic resonance imaging (fMRI) is the gold-standard method for localizing cortical speech areas preoperatively, but its availability in clinical routine is limited. Diffusion MRI based probabilistic tractography is available from head MRI and is used to segment cortical subregions by their structural connectivity. In our study, we used probabilistic tractography to localize the frontal and temporal cortical speech areas. 15 patients with left frontal tumors were enrolled in our study. Speech fMRI and diffusion MRI were acquired preoperatively. The standard automated anatomical labelling atlas 3 (AAL3) was used to define 76 left frontal and 118 left temporal potential speech areas. Four types of tractography were run according to the structural connection of these regions to the left arcuate fascicle (FA), to localize the cortical areas that have speech functions: (1) frontal through FA, (2) frontal with FA, (3) temporal to FA, and (4) temporal with FA connections were determined. Thresholds of 1%, 5%, 10%, and 15% were applied. At each level, the number of frontal and temporal regions identified by fMRI and by tractography was defined, and the sensitivity and specificity were calculated. The 1% threshold showed the best results: sensitivity was 61.6±31.4% and 67.15±23.12%, and specificity was 87.2±10.4% and 75.6±11.37%, for the frontal and temporal regions, respectively. From our study, we conclude that probabilistic tractography is a reliable preoperative technique for localizing cortical speech areas. However, its results are not yet dependable enough for the neurosurgeon to rely on during the operation.
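
For reference, the sensitivity and specificity figures above follow the standard definitions, with true/false positives and negatives counted over the candidate cortical regions and fMRI serving as the reference standard:

```latex
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}
```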

Keywords: brain mapping, brain tumor, fMRI, probabilistic tractography

Procedia PDF Downloads 144
3067 Greening the Blue: Enzymatic Degradation of Commercially Important Biopolymer Dextran Using Dextranase from Bacillus Licheniformis KIBGE-IB25

Authors: Rashida Rahmat Zohra, Afsheen Aman, Shah Ali Ul Qader

Abstract:

The commercially important biopolymer dextran is enzymatically degraded into lower-molecular-weight fractions of vast industrial potential. Various organisms are associated with dextranase production, among which fungal, yeast, and bacterial sources are used for commercial production. Dextranases are used to remove contaminating dextran in the sugar processing industry and are also used in oral care products for efficient removal of dental plaque. Among the hydrolytic products of dextran, isomaltooligosaccharides have a prebiotic effect in humans and reduce the cariogenic effect of sucrose in the oral cavity. Dextran derivatives produced by hydrolysis of the high-molecular-weight polymer are also conjugated with other chemical and metallic compounds for use in the pharmaceutical, fine chemical, cosmetics, and food industries. Owing to the vast applications of dextran and dextranases, the current study focused on the purification and analysis of the kinetic parameters of dextranase from a newly isolated strain of Bacillus licheniformis KIBGE-IB25. Dextranase was purified up to 35.75-fold, with a specific activity of 1405 U/mg and a molecular weight of 158 kDa. Analysis of the kinetic parameters revealed that the dextranase performs optimum cleavage of low-molecular-weight dextran (5000 Da, 0.5%) at 35 °C in 15 min at pH 4.5, with a Km of 0.3738 mg/ml and a Vmax of 182.0 µmol/min. Thermal stability profiling showed that the dextranase retained 80% activity for up to 6 hours at 30-35 °C and remains 90% active at pH 4.5. In short, the dextranase reported here performs rapid cleavage of its substrate under mild operational conditions, which makes it an ideal candidate for dextran removal in the sugar processing industry and for the commercial production of low-molecular-weight oligosaccharides.
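
Assuming standard Michaelis-Menten behaviour (the abstract reports Km and Vmax but not the rate law itself), the kinetics above correspond to:

```latex
v = \frac{V_{\max}\,[S]}{K_m + [S]},
\qquad K_m = 0.3738\ \text{mg/ml},\quad V_{\max} = 182.0\ \mu\text{mol/min}
```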

Keywords: Bacillus licheniformis, dextranase, gel permeation chromatography, enzyme purification, enzyme kinetics

Procedia PDF Downloads 427
3066 Executive Deficits in Non-Clinical Hoarders

Authors: Thomas Heffernan, Nick Neave, Colin Hamilton, Gill Case

Abstract:

Hoarding is the acquisition of and failure to discard possessions, leading to excessive clutter and significant psychological/emotional distress. From a cognitive-behavioural approach, excessive hoarding arises from information-processing deficits, as well as from problems with emotional attachment to possessions and beliefs about the nature of possessions. In terms of information processing, hoarders have shown deficits in executive functions, including working memory, planning, inhibitory control, and cognitive flexibility. However, previous research is often confounded by co-morbid factors such as anxiety, depression, or obsessive-compulsive disorder. The current study adopted a cognitive-behavioural approach, specifically assessing executive deficits and working memory in a non-clinical sample of hoarders compared with non-hoarders. In this study, a non-clinical sample of 40 hoarders and 73 non-hoarders (defined by the Savings Inventory-Revised) completed the Adult Executive Functioning Inventory, which measures working memory and inhibition; the Dysexecutive Questionnaire-Revised, which measures general executive function; and the Hospital Anxiety and Depression Scale, which measures mood. The participant sample was made up of unpaid young adult volunteers, undergraduate students who completed the questionnaires on a university campus. The results revealed that, with no differences observed between hoarders and non-hoarders in age, sex, or mood, hoarders reported significantly more deficits in inhibitory control and general executive function than non-hoarders. There was no between-group difference in general working memory. This suggests that non-clinical hoarders have a specific difficulty with inhibitory control, which enables one to resist repeated, unwanted urges. This might explain the hoarder’s inability to resist urges to buy and keep items that are no longer of any practical use. These deficits may be underpinned by general executive function deficiencies.

Keywords: hoarding, memory, executive, deficits

Procedia PDF Downloads 178
3065 Determination of the Stability of Haloperidol Tablets and Phenytoin Capsules Stored in the Inpatient Dispensary System (Swisslog) by the Respective HPLC and Raman Spectroscopy Assay

Authors: Carol Yue-En Ong, Angelina Hui-Min Tan, Quan Liu, Paul Chi-Lui Ho

Abstract:

A public general hospital in Singapore has recently implemented an automated unit-dose machine, Swisslog, in its inpatient dispensary, with the objective of reducing human error and improving patient safety. However, a stability concern arises, as tablets are removed from their original packaging (bottled loose tablets/capsules) and repackaged into individual clear plastic wrappers as unit doses in the system. Drugs that are light-sensitive or hygroscopic would be more susceptible to degradation, as the wrapper does not offer full protection. Hence, this study was carried out to examine the stability of haloperidol tablets and phenytoin capsules, which are light-sensitive and hygroscopic, respectively. Validated HPLC-UV assays were first established for quantification of these two compounds. The medications involved were put in the Swisslog and sampled every week for one month. The collected data were analysed and showed no degradation over time. This study also explored an alternative approach for drug stability determination: Raman spectroscopy. The advantages of Raman spectroscopy are its high time efficiency and non-destructive nature. The results suggest that drug degradation can indeed be detected using Raman microscopy, but further research is needed to establish this approach for quantification or qualification of compounds. NanoRam®, a portable Raman spectroscope, was also used alongside Raman microscopy but was unsuccessful in detecting degradation in this study.

Keywords: drug stability, haloperidol, HPLC, phenytoin, raman spectroscopy, Swisslog

Procedia PDF Downloads 325
3064 Higher Education for Knowledge and Technology Transfer in Egypt

Authors: M. A. Zaki Ewiss, S. Afifi

Abstract:

Nahda University (NUB) believes that the internationalisation of higher education can provide global society with an education that meets current needs and responds efficiently to contemporary demands and challenges, which are characterized by globalisation, interdependence, and multiculturalism. In this paper, we discuss the challenges of the Egyptian higher education system and the future vision for improving it. The following issues are considered: increasing knowledge through the development of specialized programs of study at the university; developing international cooperation programs focused on developing student and staff skills and providing academic culture and learning opportunities; increasing opportunities for student mobility and research projects for faculty members; increasing opportunities for staff, faculty, and students to continue learning at foreign universities and to benefit from scholarships in various disciplines; taking advantage of educational experience and modern teaching methods; providing opportunities to study abroad without increasing the time required for graduation, through greater integration of curricula and programs; fostering more cultural interaction through student exchanges; and improving and providing job opportunities for graduates through participation in the global labor market. This document sets out the NUB strategy for moving towards that vision. We are confident that greater explicit differentiation, greater freedom, and greater collaboration are the keys to delivering the further improvement in quality we shall need to retain and strengthen our position as one of the world’s leading higher education systems.

Keywords: technology transfer higher education, knowledge transfer, internationalisation, mobility

Procedia PDF Downloads 425
3063 Processing and Economic Analysis of Rain Tree (Samanea saman) Pods for Village Level Hydrous Bioethanol Production

Authors: Dharell B. Siano, Wendy C. Mateo, Victorino T. Taylan, Francisco D. Cuaresma

Abstract:

Biofuel is one of the renewable energy sources adopted by the Philippine government to lessen dependency on foreign fuel and to reduce carbon dioxide emissions. Rain tree pods were seen to be a promising source of bioethanol since they contain a significant amount of fermentable sugars. The study was conducted to establish the complete procedure for processing rain tree pods for village-level hydrous bioethanol production. The production process comprised collection, drying, storage, shredding, dilution, extraction, fermentation, and distillation. The feedstock was sun-dried, and the moisture content was determined to be in the range of 20% to 26% prior to storage. The dilution ratio was 1:1.25 (1 kg of pods to 1.25 L of water), and the extraction process yielded a sugar concentration of 22 °Bx to 24 °Bx. The dilution period was three hours. After three hours of diluting the samples, the juice was extracted using an extractor with a capacity of 64.10 L/hour. 150 L of rain tree pod juice was extracted and subjected to fermentation using a village-level anaerobic bioreactor. Fermentation with yeast (Saccharomyces cerevisiae) can speed up the process, producing more ethanol in a shorter period of time; fermentation without yeast also produces ethanol, but in a lower volume and through a slower process. Distillation of 150 L of fermented broth was done for six hours at a feedstock temperature of 85 °C to 95 °C and a column-head temperature (ethanol in the vapor state) of 74 °C to 95 °C. The highest volume of ethanol recovered, 14.89 L, was obtained with yeast fermentation over a five-day duration; the lowest, 11.63 L, was found without yeast fermentation over a three-day duration. In general, the results suggest that rain tree pods have very good potential as a feedstock for bioethanol production. Fermentation of rain tree pod juice can be done with or without yeast.

Keywords: fermentation, hydrous bioethanol, rain tree pods, village level

Procedia PDF Downloads 273
3062 Interoperability Standard for Data Exchange in Educational Documents in Professional and Technological Education: A Comparative Study and Feasibility Analysis for the Brazilian Context

Authors: Giovana Nunes Inocêncio

Abstract:

Professional and technological education (EPT) plays a pivotal role in equipping students for specialized careers, and it is imperative to establish a framework for efficient data exchange among educational institutions. The primary focus of this article is to address the pressing need for document interoperability within the context of EPT. The challenges, motivations, and benefits of implementing interoperability standards for digital educational documents are thoroughly explored. These documents include EPT completion certificates, academic records, and curricula. It is evident that the intersection of IT governance and interoperability standards holds the key to transforming the landscape of technical education in Brazil. IT governance provides the strategic framework for effective data management, aligning with educational objectives, ensuring compliance, and managing risks. By adopting interoperability standards, the technical education sector in Brazil can facilitate data exchange, enhance data security, and promote international recognition of qualifications. The use of the XML (Extensible Markup Language) standard further strengthens the foundation for structured data exchange, fostering efficient communication, standardization of curricula, and enhancement of educational materials. IT governance, interoperability standards, and data management play a critical role in driving the quality, efficiency, and security of technical education. The adoption of these standards fosters transparency, stakeholder coordination, and regulatory compliance, ultimately empowering the technical education sector to meet the dynamic demands of the 21st century.
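
As a purely illustrative sketch of XML-based exchange (the element names below are hypothetical, not a published EPT schema), a completion certificate could be serialized by one institution and parsed by another like this:

```python
# Build, serialize, and re-parse a hypothetical completion-certificate record.
import xml.etree.ElementTree as ET

cert = ET.Element("CompletionCertificate")            # hypothetical root element
ET.SubElement(cert, "Student", attrib={"nationalId": "0000000000"})
ET.SubElement(cert, "Institution").text = "Example Technical Institute"
ET.SubElement(cert, "Course").text = "Industrial Automation Technician"
ET.SubElement(cert, "IssueDate").text = "2023-12-15"

xml_bytes = ET.tostring(cert, encoding="utf-8")       # document to send
print(xml_bytes.decode("utf-8"))

# A receiving institution that agrees on the same schema reads the record back:
parsed = ET.fromstring(xml_bytes)
print(parsed.find("Course").text)
```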

Keywords: interoperability, education, standards, governance

Procedia PDF Downloads 57
3061 Bilingual Experience Influences Different Components of Cognitive Control: Evidence from fMRI Study

Authors: Xun Sun, Le Li, Ce Mo, Lei Mo, Ruiming Wang, Guosheng Ding

Abstract:

Cognitive control plays a central role in information processing and comprises various components, including response suppression and inhibitory control. Response suppression is considered to inhibit an irrelevant response during the cognitive process, while inhibitory control inhibits an irrelevant stimulus. Each undertakes a distinct function within cognitive control, so as to enhance behavioral performance. Among the numerous factors affecting cognitive control, bilingual experience is a substantial and indispensable one. It has been reported that bilingual experience can influence the neural activity of cognitive control as a whole; however, it remains unknown how bilingualism specifically influences the neural bases of the individual components of cognitive control. To explore this issue, the study applied fMRI, used an anti-saccade paradigm, and compared cerebral activations between high- and low-proficiency Chinese-English bilinguals. The study thereby provides experimental evidence on the brain plasticity of language and offers a necessary basis for understanding the interplay between language and cognitive control. The results showed that response suppression recruited the middle frontal gyrus (MFG) in low-proficiency Chinese-English bilinguals, but the inferior parietal lobe in high-proficiency Chinese-English bilinguals. Inhibitory control engaged the superior temporal gyrus (STG) and middle temporal gyrus (MTG) in low-proficiency Chinese-English bilinguals, yet the right insular cortex was more active in high-proficiency Chinese-English bilinguals during the process. These findings offer insights into the neural influences of bilingual experience on different components of cognitive control. Compared with low-proficiency bilinguals, high-proficiency bilinguals activate more advanced neural areas for the processing of cognitive control. In addition, with the acquisition and accumulation of language, language experience shapes brain plasticity and changes the neural basis of cognitive control.

Keywords: bilingual experience, cognitive control, inhibition control, response suppression

Procedia PDF Downloads 472
3060 Fold and Thrust Belts Seismic Imaging and Interpretation

Authors: Sunjay

Abstract:

Plate tectonics is of very great significance, as it represents the spatial relationships of volcanic rock suites at plate margins, the distribution in space and time of the conditions of different metamorphic facies, the scheme of deformation in mountain belts, or orogens, and the association of different types of economic deposit. Orogenic belts are characterized by extensive thrust faulting, movements along large strike-slip fault zones, and extensional deformation that occurs deep within continental interiors. Within oceanic areas there are also regions of crustal extension and accretion in the back-arc basins located on the landward sides of many destructive plate margins. Collisional orogens develop where a continent or island arc collides with a continental margin as a result of subduction. Collisional and non-collisional orogens can be explained by differences in the strength and rheology of the continental lithosphere and by processes that influence these properties during orogenesis. Seismic imaging difficulties: in triangle zones, several factors reduce the effectiveness of seismic methods. The topography in the central part of the triangle zone is usually rugged and is associated with near-surface velocity inversions which degrade the quality of the seismic image. These characteristics lead to low signal-to-noise ratio, inadequate penetration of energy through the overburden, poor geophone coupling with the surface, and wave scattering. Depth seismic imaging techniques: seismic processing alters the seismic data to suppress noise, enhance the desired signal (higher signal-to-noise ratio), and migrate seismic events to their appropriate locations in space and depth. Processing steps generally include analysis of velocities, static corrections, moveout corrections, stacking, and migration. Shadow zones: in exploration seismology, areas with no reflections (dead areas) are called shadow zones and are common in the vicinity of faults and other discontinuous areas in the subsurface. Shadow zones result when energy from a reflector is focused on receivers that produce other traces; as a result, reflectors are not shown in their true positions. Subsurface discontinuities: diffractions occur at discontinuities in the subsurface, such as faults and velocity discontinuities (as at “bright spot” terminations), and a bow-tie effect is caused by deep-seated synclines. Related topics include seismic imaging of thrust faults and structural damage in deepwater thrust belts; imaging deformation in submarine thrust belts using seismic attributes; imaging thrust and fault zones using 3D seismic image processing techniques; and checking balanced structural cross-sections against seismic interpretation pitfalls, which can originate from any or all of the limitations of data acquisition, processing, and interpretation of the subsurface geology. Seismic attributes are routinely used to accelerate and quantify the interpretation of tectonic features in 3D seismic data: coherence (or variance) cubes delineate the edges of megablocks and faulted strata, curvature delineates folds and flexures, while spectral components delineate lateral changes in thickness and lithology. Seismic imaging also supports leakage surveillance for carbon capture and geological storage, because a fault can behave as a seal or as a conduit transporting hydrocarbons to a trap.

Keywords: tectonics, seismic imaging, fold and thrust belts, seismic interpretation

Procedia PDF Downloads 53
3059 iCount: An Automated Swine Detection and Production Monitoring System Based on Sobel Filter and Ellipse Fitting Model

Authors: Jocelyn B. Barbosa, Angeli L. Magbaril, Mariel T. Sabanal, John Paul T. Galario, Mikka P. Baldovino

Abstract:

The use of technology has become ubiquitous in different areas of business today. With the advent of digital imaging and database technology, business owners have been motivated to integrate technology into their operations, from small and medium to large enterprises. Technology has been found to bring many benefits that can make a business grow. Hog or swine raising, for example, is a very popular enterprise in the Philippines, and its challenges in production monitoring can be addressed through technology integration. Swine production monitoring becomes a tedious task as the enterprise grows larger. Specifically, problems like delayed and inconsistent reports are likely to happen if the counting of swine per pen in each building is done manually. In this study, we present iCount, which aims to ensure efficient swine detection and counting and thereby hasten the swine production monitoring task. We developed a system that automatically detects and counts swine based on a Sobel filter and an ellipse fitting model, given still photos of a group of swine captured in a pen. We improve the Sobel filter detection result through an 8-neighborhood rule implementation. An ellipse fitting technique is then employed for proper swine detection. Furthermore, the system can generate periodic production reports and can identify the specific consumables to be served to the swine according to schedules. Experiments reveal that our algorithm provides an efficient way of detecting swine, thereby providing a significant amount of accuracy in production monitoring.
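
The Sobel-plus-ellipse-fitting idea can be sketched with OpenCV as follows; the file name, thresholds, and aspect-ratio gate are illustrative assumptions rather than the authors' parameters (OpenCV 4.x API):

```python
# Sketch: Sobel edge magnitude -> binarise -> contours -> ellipse fit -> count.
import cv2
import numpy as np

img = cv2.imread("pen_photo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical pen photo

# Sobel gradients in x and y, combined into an edge-magnitude image
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
mag = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

# Binarise and find contours of candidate blobs (OpenCV 4 returns two values)
_, binary = cv2.threshold(mag, 60, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

count = 0
for c in contours:
    if len(c) < 5 or cv2.contourArea(c) < 500:   # fitEllipse needs >= 5 points
        continue
    axes = cv2.fitEllipse(c)[1]                  # (width, height) of fitted ellipse
    major, minor = max(axes), min(axes)
    if minor > 0 and 1.2 < major / minor < 3.5:  # swine seen from above are elongated
        count += 1
print("swine detected:", count)
```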

Keywords: automatic swine counting, swine detection, swine production monitoring, ellipse fitting model, sobel filter

Procedia PDF Downloads 299
3058 Multi-objective Rationality Optimisation for Robotic-fabrication-oriented Free-form Timber Structure Morphology Design

Authors: Yiping Meng, Yiming Sun

Abstract:

The traditional construction industry is unable to meet the requirements of novel fabrication and construction. Automated construction and digital design have emerged as industry development trends that compensate for this shortcoming against the backdrop of Industry 4.0. Benefitting from a more flexible working space and a wider variety of end-effector tools than CNC methods, robotic fabrication and construction techniques have been used in irregular architectural design. However, there is a lack of a systematic and comprehensive design and optimisation workflow considering geometric form, material, and fabrication method. This paper proposes a design optimisation workflow for improving the rationality of a free-form timber structure fabricated by a robotic arm. Firstly, the free-form surface is described by NURBS, and its structure is calculated using the finite element analysis method. Then, considering the characteristics and limiting factors of robotic timber fabrication, strain energy and robustness are set as optimisation objectives, and the structural morphology is optimised by the gradient descent method. As a result, an optimised structure with axial force as the main internal force and a uniform stress distribution is generated by the morphology optimisation process. With decreased strain energy and improved robustness, the generated structure's bearing capacity and mechanical properties are enhanced. The results demonstrate the feasibility and effectiveness of the proposed optimisation workflow for free-form timber structure morphology design.
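
A toy sketch of the optimisation idea follows; the two objective functions are invented placeholders standing in for the FEA-derived strain energy and robustness measures, and the weights and learning rate are assumptions:

```python
# Weighted-sum gradient descent over shape parameters, with placeholder objectives.
import numpy as np

def strain_energy(x):
    return np.sum((x - 1.0) ** 2)           # placeholder for FEA strain energy

def robustness_penalty(x):
    return np.sum(np.sin(3 * x) ** 2)       # placeholder robustness measure

def grad(f, x, h=1e-6):
    # Central-difference numerical gradient
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

w1, w2, lr = 1.0, 0.5, 0.05                 # objective weights and step size (assumed)
x = np.random.default_rng(0).uniform(0, 2, size=8)   # e.g., NURBS control parameters
for _ in range(200):
    x -= lr * (w1 * grad(strain_energy, x) + w2 * grad(robustness_penalty, x))
print(strain_energy(x), robustness_penalty(x))
```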

Keywords: robotic fabrication, free-form timber structure, multi-objective optimisation, structural morphology, rational design

Procedia PDF Downloads 181
3057 An Improved Two-dimensional Ordered Statistical Constant False Alarm Detection

Authors: Weihao Wang, Zhulin Zong

Abstract:

Two-dimensional ordered-statistic constant false alarm detection is a widely used method for detecting weak target signals in radar signal processing applications. The method is based on analyzing the statistical characteristics of the noise and clutter present in the radar signal and then using this information to set an appropriate detection threshold. In this approach, the reference cells around the cell under test are divided into several reference subunits, which are used to estimate the noise level and adjust the detection threshold with the aim of minimizing the false alarm rate. By using an ordered-statistic approach, the method effectively suppresses the influence of clutter and noise, resulting in a low false alarm rate. The detection process involves a number of steps: filtering the input radar signal to remove noise and clutter, estimating the noise level from the statistical characteristics of the reference subunits, and finally setting the detection threshold based on the estimated noise level. The main advantage of two-dimensional ordered-statistic constant false alarm detection is its ability to detect weak target signals in the presence of strong clutter and noise, achieved by carefully analyzing the statistical properties of the signal, using the ordered statistics to estimate the noise level, and adjusting the threshold accordingly. In conclusion, two-dimensional ordered-statistic constant false alarm detection is a powerful technique for detecting weak target signals in radar signal processing: by dividing the reference cells into several subunits and using an ordered-statistic approach to estimate the noise level and adjust the detection threshold, it effectively suppresses the influence of clutter and noise while maintaining a low false alarm rate.
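
A minimal one-dimensional sketch of the ordered-statistic CFAR idea is shown below; the two-dimensional version slides an analogous window over both range and azimuth. The window sizes, order index k, and scaling factor alpha are illustrative choices, not the paper's parameters:

```python
# 1D OS-CFAR: the k-th order statistic of the reference cells estimates the noise.
import numpy as np

def os_cfar(signal, guard=2, train=8, k=6, alpha=5.0):
    n = len(signal)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # Reference cells: training cells on both sides, excluding guard cells
        window = np.concatenate([signal[lo:max(0, i - guard)],
                                 signal[min(n, i + guard + 1):hi]])
        if len(window) <= k:
            continue                      # not enough reference cells at the edges
        noise = np.sort(window)[k]        # k-th order statistic as noise estimate
        detections[i] = signal[i] > alpha * noise
    return detections

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 200)             # noise-like background
x[50] += 40.0                             # injected weak-ish target
print("detections at:", np.flatnonzero(os_cfar(x)))
```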

Keywords: two-dimensional, ordered statistical, constant false alarm, detection, weak target signals

Procedia PDF Downloads 60
3056 A Comparative Study of European Terrazzo and Tibetan Arga Floor Making Techniques

Authors: Hubert Feiglstorfer

Abstract:

The technique of making terrazzo has been known since ancient times. During the Roman Empire, known as opus signinum, and at the time of the Renaissance, known as composto terrazzo marmorino, as well as at the turn of the 19th and 20th centuries, terrazzo saw common use in Europe. In Asia, especially in the Himalayas and the Tibetan highlands, a particular floor- and roof-making technique known as arga has been in common use for about 1,500 years. The research question in this contribution asks about the technical and cultural-historical synergies between these floor-making techniques. The making of an arga floor shows constructive parallels to European terrazzo. Surface processing by grinding, burnishing, and sealing, in particular, reveals technological similarities. The floor structure itself, on the other hand, shows differences, for example in the use of hydraulic aggregate in terrazzo, whereas the arga floor is made without hydraulic material; the result of both techniques, however, is a tight, water-repellent, and shiny surface. As part of this comparative study, the materials, processing techniques, and quality features of the two techniques are compared, and parallels and differences are analysed. In addition to text and archive research, the methods used include material analyses and ethnographic research such as participant observation. Major findings of the study are the investigation of the mineralogical composition of arga floors and its comparison with terrazzo floors. The study of the cultural-historical context in which both techniques are embedded gives insight into technical developments in Europe and Asia, their parallels and differences. Synergies from this comparison cast possible technological developments in the production, conservation, and renovation of European terrazzo floors in a new light. Because arga floors are made without cement-based aggregates, the renovation of historical floors using purely natural products, and without the energy of a firing process, can be considered.

Keywords: European and Asian crafts, material culture, floor making technology, terrazzo, arga, Tibetan building traditions

Procedia PDF Downloads 216
3055 Assessment of Image Databases Used for Human Skin Detection Methods

Authors: Saleh Alshehri

Abstract:

Human skin detection is a vital step in many applications, some of them critical, especially those related to security. This raises the importance of a high-performance detection algorithm. To validate the accuracy of such an algorithm, image databases are usually used. However, the suitability of these image databases is still questionable. It is suggested that suitability can be measured mainly by the span of the color space the database covers. This research investigates the validity of three well-known image databases.

Keywords: image databases, image processing, pattern recognition, neural networks

Procedia PDF Downloads 248
3054 Frequency and Factors Associated with Thyroid Dysfunction: A Descriptive Cross-Sectional Study from a Tertiary Care Center in Kabul, Afghanistan

Authors: Mohammad Naeem Lakanwall, Jamshid Abdul-Ghafar

Abstract:

Background: Endocrinopathies, particularly those of the thyroid gland, are a commonly occurring entity; however, there is a lack of scientific literature from Afghanistan, a country with very limited health care facilities and resources. To the best of our knowledge, this is the first study aiming to describe the frequency of occurrence of, and the factors associated with, thyroid dysfunction in the Afghan population. The aim of this study is to estimate the frequency of, and to identify factors associated with, thyroid dysfunction among individuals coming to a tertiary care facility in Kabul, Afghanistan. Methods: A cross-sectional study was conducted from July to September 2018 at the Department of Clinical Pathology, French Medical Institute for Mothers and Children (FMIC), Kabul, Afghanistan. Blood samples were obtained, serum TSH levels were analyzed, and the patients were divided into three diagnostic categories according to their serum TSH concentrations: 1) hypothyroidism, 2) hyperthyroidism, 3) normal. Results: A total of 127 individuals were included in the final analysis. The majority of study participants (77%) were female. A large proportion of the participants (92%) did not have a family history of thyroid dysfunction. 74% of the participants had normal TSH levels, classified as normal thyroid function; 14% had lower TSH levels and 12% higher TSH levels, classified as hyperthyroid and hypothyroid, respectively. Conclusions: The findings of the current study showed a high frequency of thyroid dysfunction at a single center. Further large-scale studies are needed to determine the prevalence of, and document, this entity for better health outcomes in the country.

Keywords: Afghanistan, factors, frequency, hypothyroid, hyperthyroid, thyroid, thyroid stimulating hormone

Procedia PDF Downloads 157
3053 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, it is a good tool for assessing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system that automatically distinguishes a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, a person's heart condition can be identified in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 so that researchers could develop the best method for distinguishing normal signals from abnormal ones. The data come from both genders, recording times vary from several seconds to several minutes, and every record is labeled normal or abnormal. Because of the low positional accuracy and limited duration of the ECG signal, and because in some diseases the signal resembles the normal one, the heart rate variability (HRV) signal was used instead. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of great interest to experts. In the preprocessing stage, noise was removed with an adaptive Kalman filter, the R waves were extracted with the Pan-Tompkins algorithm, the R-R intervals were then extracted, and the HRV signal was generated. A new idea presented in this paper is that, in addition to the statistical characteristics of the signal, a return map is constructed and nonlinear characteristics of the HRV signal are extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, which are widely used in ECG signal processing, together with the distinctive features were used to classify normal signals from abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. Simulation results in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM classifier was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicate that greater use of nonlinear characteristics when classifying patients' signals from normal ones yields better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system over the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some of this information is hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
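
A minimal Python sketch of the feature pipeline described above is given below: R-R intervals are derived from R-peak times, time-domain (linear) statistics are combined with Poincaré return-map (nonlinear) descriptors SD1/SD2, and an SVM is scored by the area under the ROC curve. The R-peak times are assumed to come from a Pan-Tompkins-style detector, and the variable names and the SD1/SD2 choice of nonlinear features are illustrative, not the authors' exact feature set.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    def hrv_features(r_peak_times_s):
        """Linear and nonlinear HRV features from R-peak times (seconds)."""
        rr = np.diff(r_peak_times_s)                # R-R intervals
        sdnn = rr.std()                             # linear: overall variability
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # linear: beat-to-beat variability
        x, y = rr[:-1], rr[1:]                      # return map: RR(n+1) vs RR(n)
        sd1 = np.std((y - x) / np.sqrt(2))          # nonlinear: short-term spread
        sd2 = np.std((y + x) / np.sqrt(2))          # nonlinear: long-term spread
        return [rr.mean(), sdnn, rmssd, sd1, sd2]

    # X_train, y_train, X_test, y_test assumed prepared from labeled records
    # clf = SVC(probability=True).fit(X_train, y_train)
    # auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])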

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 245
3052 Sustainability Assessment of Food Delivery with Last-Mile Delivery Droids, A Case Study at the European Commission's JRC Ispra Site

Authors: Ada Garus

Abstract:

This paper presents the outcomes of a sustainability assessment of food delivery with a last-mile delivery droid service introduced in a real-world case study. The methodology used in the sustainability assessment integrates multi-criteria decision-making analysis, the sustainability pillars, and scenario analysis to best reflect the conflicting needs of the stakeholders involved in the last-mile delivery system. The case study applies the framework to the food delivery system of the Joint Research Centre of the European Commission, where three alternative solutions were analyzed: I) the existing state, in which individuals visit the local canteen or pick up their food using their preferred mode of transport; II) a hypothetical scenario in which individuals can only order their food through the delivery droid system; and III) a scenario in which the droid-based food delivery system is introduced as a supplement to the current system. The environmental indices are calculated in a simulation study in which each food delivery decision is predicted with a multinomial logit model. A vehicle dynamics model is used to predict the fuel consumption of the conventional combustion-engine vehicles used by the canteen goers and the electricity consumption of the droid. The sustainability assessment allows the economic, environmental, and social aspects of food delivery to be evaluated, making it an apt input for policymakers. Moreover, the assessment is one of the first studies to investigate automated delivery droids, which could become a frequent addition to the urban landscape in the near future.
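
As an illustration of the choice-modeling step, the Python sketch below computes multinomial logit choice probabilities as a softmax over alternative utilities; the three alternatives and their utility values are hypothetical, not the model estimated in the study.

    import numpy as np

    def mnl_probabilities(utilities):
        """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
        v = np.asarray(utilities, dtype=float)
        e = np.exp(v - v.max())   # shift by the max utility for numerical stability
        return e / e.sum()

    # Hypothetical utilities for: walking to the canteen, driving, droid delivery
    V = {"walk": -1.2, "drive": -0.8, "droid": -0.5}
    for mode, p in zip(V, mnl_probabilities(list(V.values()))):
        print(f"P({mode}) = {p:.2f}")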

Keywords: innovations in transportation technologies, behavioural change and mobility, urban freight logistics, innovative transportation systems

Procedia PDF Downloads 181
3051 Reduction of Fermentation Duration of Cassava to Remove Hydrogen Cyanide

Authors: Jean Paul Hategekimana, Josiane Irakoze, Eugene Niyonzima, Annick Ndekezi

Abstract:

Cassava (Manihot esculenta Crantz) is a root crop containing an anti-nutritive factor known as cyanide. The compound can be removed by numerous processing methods such as boiling, fermentation, blanching, and sun drying to avoid the possibility of cyanide poisoning; inappropriate processing can lead to disease and death. Cassava-based dishes are consumed in different ways, with cassava cultivated and prepared according to local culture and preference; however, some preparations have been shown to be unsafe because of high cyanide levels. The current study aimed to address the problem of high cyanide content in cassava consumed in Rwanda. It was conducted to determine the effect of slicing, blanching, and soaking time on reducing the fermentation duration needed to remove hydrogen cyanide (HCN, in mg/g). Cassava was sliced into three different portion sizes (1 cm, 2 cm, and 5 cm). The first set of portions was naturally fermented for seven days; every 24 hours one portion of each size was removed from the soaking tanks, oven-dried at 60°C, and milled to obtain naturally fermented cassava flour. Other portions of 1 cm, 2 cm, and 5 cm were blanched for 2, 5, and 10 minutes, respectively, and likewise dried at 60°C and milled to produce blanched cassava flour. Further blanched portions were carried through the fermentation steps described above. The last set of portions, which formed the control, was simply chopped. Cyanide content and starch content (in mg/100 g) were investigated. The analysis of the different detoxification treatments found that ordinary fermentation can be used: for sliced portions, where size reduction lets the hydrogen cyanide diffuse out easily, complete fermentation takes four days and significantly (p<0.05) reduces total hydrogen cyanide by 94.44%, to a level safe for consumption. Blanching combined with fermentation is recommended as more effective, since it removes hydrogen cyanide to the safe level for consumption in three days, with a significant (p<0.05) reduction of 95.56%.
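
For clarity, the percent-reduction figures reported above follow the standard formula sketched below in Python; the initial and residual HCN concentrations in the example are hypothetical, since the abstract reports only the percentages.

    def hcn_reduction_percent(initial: float, residual: float) -> float:
        """Percent of hydrogen cyanide removed (both values in the same unit, e.g. mg/g)."""
        return 100.0 * (initial - residual) / initial

    # e.g. hcn_reduction_percent(90.0, 4.0) -> 95.56 (hypothetical concentrations)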

Keywords: cassava, cyanide, blanching, drying, fermentation

Procedia PDF Downloads 44