Search results for: visual information processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14810

13940 Optimized Road Lane Detection Through a Combined Canny Edge Detection, Hough Transform, and Scaleable Region Masking Toward Autonomous Driving

Authors: Samane Sharifi Monfared, Lavdie Rada

Abstract:

Nowadays, autonomous vehicles are developing rapidly toward facilitating human driving. One of the main issues is road lane detection, which provides suitable guidance of direction and helps prevent car accidents. This paper aims to improve and optimize road lane detection based on a combination of camera calibration, the Hough transform, and Canny edge detection. The video processing is implemented using the OpenCV library, with the novelty of a scalable region masking. The aim of the study is to provide automatic road lane detection techniques with minimal manual intervention from the user.
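As an illustration of the Hough-transform step the abstract describes, the following minimal sketch (plain NumPy rather than the authors' OpenCV pipeline; the edge map and line position are invented) votes edge pixels into (rho, theta) space and reads back the dominant line:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes in (rho, theta) space; peaks in the accumulator are lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in zip(*np.nonzero(edges)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# Synthetic edge map standing in for Canny output: a vertical lane mark at x = 5.
edges = np.zeros((20, 20), dtype=np.uint8)
edges[:, 5] = 1
acc, diag = hough_lines(edges)
votes_for_line = acc[5 + diag, 0]   # bin for rho = 5, theta = 0 (a vertical line)
```

In the real pipeline, `cv2.Canny` would supply the edge map, and the scalable region mask would zero out edges outside the road trapezoid before voting.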

Keywords: hough transform, canny edge detection, optimisation, scaleable masking, camera calibration, improving the quality of image, image processing, video processing

Procedia PDF Downloads 84
13939 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data

Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah

Abstract:

At the level of national statistical institutes, there is a large volume of data whose format conditions how the information it contains is published. Each household or business data collection project includes its own dissemination platform. The dissemination methods used so far therefore do not promote rapid access to information and, in particular, do not offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to publish all these data on a single platform and to offer the option of linking them with other external data sources. The approach will be applied to data from major national surveys, such as those on employment, poverty, and child labor, and the general population census of Senegal.
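To make the publication step concrete, here is a minimal sketch of serializing one survey observation as N-Triples, the simplest Linked Data syntax. The namespace, property names, and values are hypothetical; a production system would use an RDF library and a vocabulary such as the RDF Data Cube.

```python
def record_to_ntriples(base, record):
    """Serialize one survey observation as N-Triples lines."""
    subj = f"<{base}/observation/{record['id']}>"
    lines = []
    for prop, value in record.items():
        if prop == "id":
            continue
        lines.append(f'{subj} <{base}/def/{prop}> "{value}" .')
    return lines

BASE = "http://stats.example.sn"   # hypothetical namespace
obs = {"id": "emp-2019-001", "region": "Dakar", "employmentRate": "42.3"}
triples = record_to_ntriples(BASE, obs)
```

Once every survey is expressed as triples like these, observations from different collections can be linked through shared URIs instead of sitting in isolated dissemination platforms.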

Keywords: Semantic Web, linked open data, database, statistics

Procedia PDF Downloads 173
13938 News Reading Practices: Traditional Media versus New Media

Authors: Nuran Öze

Abstract:

People always want to be aware of what is happening around them; human nature constantly triggers the need to gather information out of curiosity, and the media emerged to meet this need. The media has changed and diversified with technological developments over time, and people's information needs are now met in different ways. Today, the Internet has become an integral part of everyday life, and its invasion of everyday practices affects every aspect of life, causing people to change those practices. Technological developments have always influenced the way people reach information. Looking at the history of the media, the breaking point in the dissemination of information is the invention of the printing press. The adventure that started with print media has become a multidimensional structure: print, audio, and visual media have changed shape with new technologies. The emergence of the Internet in everyday life, in particular, has had effects on the media field. A 'new media' has appeared that contains most of the features of traditional media. While on the one hand this transformation creates a harmony between traditional and new media, on the other hand new media and traditional media rival each other. The purpose of this study is to examine the problematic relationship between traditional media and new media through individuals' news reading practices; it can be considered a kind of media sociology. To reach this aim, two field studies will be conducted in addition to a literature review. The research will be conducted in Northern Cyprus, which is located in the Mediterranean Sea and is not recognized as a state by any country except Turkey; despite this, it takes its share of all the technological developments taking place in the world.
The first field study will consist of questionnaires on readers' news reading practices, conducted in a social media environment. The second will take the form of in-depth interviews with editors-in-chief or news directors in the traditional media. As a result of these investigations, the ways in which new media and traditional media support each other, and the ways in which they conflict, will be revealed. In addition, the study will try to understand readers' attitudes toward and perceptions of the traditional media and the new media.

Keywords: new media, news, North Cyprus, traditional media

Procedia PDF Downloads 222
13937 Translation Directionality: An Eye Tracking Study

Authors: Elahe Kamari

Abstract:

Research on translation processes has been conducted for more than 20 years, investigating various issues and using different research methodologies. Most recently, researchers have started to use eye tracking to study translation processes, on the premise that the observable, measurable data gained from eye tracking are indicators of the unobservable cognitive processes happening in the translator's mind during translation tasks. The aim of this study was to investigate directionality in translation processes using eye tracking. The following hypotheses were tested: 1) processing the target text requires more cognitive effort than processing the source text, in both directions of translation; 2) L2 translation tasks on the whole require more cognitive effort than L1 tasks; 3) the cognitive resources allocated to processing the source text are higher in L1 translation than in L2 translation; 4) the cognitive resources allocated to processing the target text are higher in L2 translation than in L1 translation; and 5) in both directions, non-professional translators invest more cognitive effort in translation tasks than professional translators do. The performance of a group of 30 male professional translators was compared with that of a group of 30 male non-professional translators. All participants translated two comparable texts, one into their L1 (Persian) and the other into their L2 (English). The eye tracker measured gaze time, average fixation duration, total task length, and pupil dilation; these variables are assumed to measure the cognitive effort allocated to the translation task. The eye tracking data confirmed only the first hypothesis, which was supported by all the relevant indicators: gaze time, average fixation duration, and pupil dilation. The second hypothesis, that L2 translation tasks require the allocation of more cognitive resources than L1 translation tasks, was not confirmed by all four indicators.
The third hypothesis, that source text processing requires more cognitive resources in L1 translation than in L2 translation, and the fourth hypothesis, that target text processing requires more cognitive effort in L2 translation than in L1 translation, were not confirmed; it seems that source text processing in L2 translation can be just as demanding as in L1 translation. The final hypothesis, that non-professional translators allocate more cognitive resources to the same translation tasks than professionals do, was only partially confirmed: one of the indicators, average fixation duration, showed higher cognitive-effort values for the professionals.
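The comparison of effort indicators between conditions can be illustrated with a paired t statistic, sketched below in plain Python. The gaze times are invented, and the abstract does not specify the study's actual statistical procedure.

```python
from math import sqrt

def paired_t(x, y):
    """Paired t statistic: mean within-subject difference over its standard error."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance of differences
    return mean / sqrt(var / n)

# invented gaze times (seconds) per participant: target-text vs source-text area
target = [412.0, 388.0, 455.0, 430.0]
source = [301.0, 295.0, 340.0, 322.0]
t_stat = paired_t(target, source)
```

A large positive t here would correspond to the first hypothesis: systematically longer gaze on the target text than on the source text.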

Keywords: translation processes, eye tracking, cognitive resources, directionality

Procedia PDF Downloads 456
13936 Towards a Distributed Computation Platform Tailored for Educational Process Discovery and Analysis

Authors: Awatef Hicheur Cairns, Billel Gueni, Hind Hafdi, Christian Joubert, Nasser Khelifa

Abstract:

Given the ever-changing needs of the job market, education and training centers are increasingly held accountable for student success. They therefore have to focus on ways to streamline their offerings and educational processes in order to achieve the highest quality in curriculum content and managerial decisions. Educational process mining is an emerging field within the educational data mining (EDM) discipline, concerned with developing methods to discover, analyze, and provide a visual representation of complete educational processes. In this paper, we present our distributed computation platform, which allows different education centers and institutions to load their data and access advanced data mining and process mining services. To this end, we also present a comparative study of the different clustering techniques developed in the context of process mining for efficiently partitioning educational traces. Our goal is to find the best strategy for distributing heavy analysis computations over the many processing nodes of our platform.
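A common way to prepare traces for the clustering techniques mentioned above is to encode each trace as an activity-frequency vector. The sketch below uses hypothetical activity names and is not drawn from the paper.

```python
from collections import Counter

ACTIVITIES = ["enroll", "lecture", "quiz", "exam", "resit"]   # hypothetical event types

def trace_profiles(traces, activities=ACTIVITIES):
    """Encode each student's event trace as an activity-frequency vector,
    a typical preprocessing step before clustering traces."""
    return [[Counter(t)[a] for a in activities] for t in traces]

traces = [
    ["enroll", "lecture", "quiz", "exam"],
    ["enroll", "lecture", "exam", "resit", "exam"],
]
profiles = trace_profiles(traces)
```

The resulting fixed-length vectors can be partitioned with any standard clustering algorithm, and each cluster's traces can then be mined into a separate, simpler process model.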

Keywords: educational process mining, distributed process mining, clustering, distributed platform, educational data mining, ProM

Procedia PDF Downloads 449
13935 Economic and Financial Crime, Forensic Accounting and Sustainable Development Goals (SDGs): A Bibliometric Analysis

Authors: Monica Violeta Achim, Sorin Nicolae Borlea

Abstract:

The aim of this work is to stress the need to enhance the role of forensic accounting in fighting economic and financial crime, in the context of the new international regulatory movements in this area led by the International Federation of Accountants (IFAC). Corruption, money laundering, tax evasion, and other frauds significantly hamper economic growth and human development and, ultimately, the UN Sustainable Development Goals. The paper also stresses the role of good governance in fighting fraud, in order to achieve the most suitable sustainable development of society. To this end, we conducted a systematic bibliometric review of forensic accounting and its contribution to fraud detection and prevention, and of their relationship with good governance and the Sustainable Development Goals (SDGs). Two powerful bibliometric visualization tools, VOSviewer and CiteSpace, were used to analyze published papers identified in the Scopus and Web of Science databases over time. Our findings reveal the main red flags identified in the literature as tools used in forensic accounting, the evolution of interest in the topic over time, its distribution among the world's countries, and its connection with patterns of good governance. Visual designs and scientific maps are used to present these findings. Our findings are useful for managers and policy makers, providing important avenues that may help in reaching the 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, in the area of using forensic accounting to prevent fraud.
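A keyword co-occurrence count is the raw material behind maps of the kind VOSviewer and CiteSpace draw. The following minimal sketch (with invented keyword lists) shows how the edge weights of such a map are tallied:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(keyword_lists):
    """Count how often two keywords appear on the same paper; these counts
    become the edge weights of a bibliometric co-occurrence map."""
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

papers = [
    ["forensic accounting", "fraud", "SDGs"],
    ["fraud", "governance", "SDGs"],
    ["forensic accounting", "fraud"],
]
edges = cooccurrence(papers)
```

Visualization tools then lay out keywords as nodes, with heavily co-occurring pairs drawn closer together and clustered into thematic groups.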

Keywords: forensic accounting, frauds, red flags, SDGs

Procedia PDF Downloads 128
13934 A Trends Analysis of Yacht Simulator

Authors: Jae-Neung Lee, Keun-Chang Kwak

Abstract:

This paper describes an analysis of international trends in yacht simulators and also provides background on yachts. Image processing techniques employed in yacht simulators include counting the total number of vehicles, edge/target detection, detection-and-evasion algorithms, SIFT (scale-invariant feature transform) matching, and the application of median filtering and thresholding.
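Of the techniques listed, median filtering is easy to show in a few lines. This NumPy sketch (on a synthetic frame, not simulator data) removes a single impulse-noise pixel:

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighborhood
    (edge-padded), suppressing impulse noise before thresholding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# one salt-noise pixel in an otherwise dark frame
frame = np.zeros((8, 8), dtype=np.uint8)
frame[4, 4] = 255
clean = median_filter(frame)
```

Unlike a mean filter, the median discards the outlier entirely, which is why it precedes thresholding in pipelines like those surveyed here.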

Keywords: yacht simulator, simulator, trends analysis, SIFT

Procedia PDF Downloads 425
13933 Investigation of Information Security Incident Management Based on International Standard ISO/IEC 27002 in Educational Hospitals in 2014

Authors: Nahid Tavakoli, Asghar Ehteshami, Akbar Hassanzadeh, Fatemeh Amini

Abstract:

Introduction: The information security incident management guidelines were developed to help hospitals meet their information security event and incident management requirements. The purpose of this study was to investigate information security incident management in Isfahan's educational hospitals in accordance with the ISO/IEC 27002 standard. Methods: This was a cross-sectional study of the information security incident management of educational hospitals in 2014. Based on the ISO/IEC 27002 standard, two checklists were applied to check compliance with the standard on reporting information security events and weaknesses and on management of information security incidents and improvements. One inspector was trained to carry out the assessments in the hospitals. The data were analyzed with SPSS. Findings: Overall, the compliance score for information security incident management requirements across the two steps, reporting information security events and weaknesses and management of information security incidents and improvements, was 60%. There was a significant difference in compliance levels among the hospitals.

Keywords: information security incident management, information security management, standards, hospitals

Procedia PDF Downloads 570
13932 Importance of an E-Learning Program in Stress Field for Postgraduate Courses of Doctors

Authors: Ramona-Niculina Jurcau, Ioana-Marieta Jurcau

Abstract:

Background: Preparation in the stress field (SF) is increasingly a concern for doctors of different specialties. Aims: The aim was to evaluate the importance of an e-learning program in SF for doctors' postgraduate courses. Methods: Doctors (n = 40 male, 40 female) of different specialties and ages (31-71 years) who attended postgraduate courses in SF voluntarily responded to a questionnaire covering the following themes: the importance of SF courses for the specialty practiced by each respondent doctor (using a visual analogue scale, VAS); which SF themes would be suitable as e-learning (EL); the preferred form of assimilating SF information: classical lectures (CL), EL, or a combination of these methods (CL+EL); which aspects of the SF course are facilitated by the EL model versus CL; and, in their view, the first four advantages and the first four disadvantages of EL compared to CL, for SF. Results: For most respondents, the SF courses are important for the specialty they practice (VAS average of 4). The SF themes suggested for EL were: stress mechanisms; stress factor models for different medical specialties; stress assessment methods; and primary stress management methods for different specialties. The preferred form of information assimilation was CL+EL. Aspects of the course facilitated by the EL model versus CL were: active reading of theoretical information, with fast access to keyword details; watching documentaries in one's preferred order; and practicing through tests with rapid checking of results. The first four EL advantages mentioned for SF were: autonomy in managing the time allocated to study; saving the time needed to travel to the venue; the ability to read information in various contexts of time and space; and communication with colleagues at times convenient for everyone.
The first three EL disadvantages mentioned for SF were: reduced opportunity for group discussion and mobilization for active participation; dependence of EL access on electricity and/or Internet availability; and a possible slowdown in learning through the temptation to postpone study. Answers were partially influenced by the respondent's age and gender. Conclusions: 1) Postgraduate courses in SF are of interest to doctors of different specialties. 2) The majority of participating doctors preferred EL, but combined with CL (CL+EL). 3) Preference for EL was expressed mainly by young and middle-aged male doctors. 4) It is important to find the right formula for EL so that it is as efficient, interesting, useful, and agreeable as possible.

Keywords: stress field, doctors’ postgraduate courses, classical lectures, e-learning lecture

Procedia PDF Downloads 234
13931 Searching Linguistic Synonyms through Parts of Speech Tagging

Authors: Faiza Hussain, Usman Qamar

Abstract:

Synonym-based searching is recognized to be a complicated problem, as text mining from the unstructured data of the web is challenging. Finding useful information that matches a user's need among the bulk of web pages is a cumbersome task. In this paper, a novel and practical synonym retrieval technique is proposed to address this problem. For the replacement of semantics, user intent is taken into consideration. Parts-of-speech tagging is applied for pattern generation from the query, and a thesaurus was built and used for the experiment. Compared with non-context-based searching, context-based searching proved to be the more efficient approach when dealing with linguistic semantics. This approach is very beneficial for intent-based searching. Finally, results and future dimensions are presented.
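The idea of POS-constrained synonym expansion can be sketched as follows. The toy thesaurus and tag set are invented; a real system would first tag the query with a POS tagger and use a full thesaurus.

```python
# toy thesaurus keyed by (word, part of speech)
THESAURUS = {
    ("fast", "ADJ"): ["quick", "rapid"],
    ("fast", "VERB"): ["abstain from food"],
    ("book", "NOUN"): ["volume"],
    ("book", "VERB"): ["reserve"],
}

def expand_query(tagged_query, thesaurus=THESAURUS):
    """Expand each (word, POS) token with POS-matched synonyms only,
    so 'book a fast car' never picks up the noun sense of 'book'."""
    return [[word] + thesaurus.get((word, pos), []) for word, pos in tagged_query]

expanded = expand_query([("book", "VERB"), ("fast", "ADJ"), ("car", "NOUN")])
```

Restricting candidates to the tagged part of speech is what makes the expansion context-based rather than blind string substitution.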

Keywords: natural language processing, text mining, information retrieval, parts-of-speech tagging, grammar, semantics

Procedia PDF Downloads 302
13930 The Implementation of Information Security Audits in Public Sector: Perspective from Indonesia

Authors: Nur Imroatun Sholihat, Gresika Bunga Sylvana

Abstract:

Currently, cyber attacks have become an incredibly serious problem due to their increasing trend all over the world. Therefore, information security has become prominent for every organization, including those in the public sector. In Indonesia, the Ministry of Finance (MoF) is so far the only public sector organization that has formally established a procedure to assess the adequacy of its information security by performing information security audits (as of November 2017). We assess the implementation of information security audits in the MoF using qualitative data obtained by interviewing IT auditors and by analyzing related documents. Information security audit practice in the MoF could thus become an acceptable benchmark for all other public sector organizations in Indonesia. This study is important because, to the best of the authors' knowledge, no prior research into information security audit practice in Indonesia's public sector has been published. Results showed that information security audits are performed mostly by conducting penetration tests (pentests) of the MoF's critical applications.

Keywords: information security audit, information technology, Ministry of Finance of Indonesia, public sector organization

Procedia PDF Downloads 227
13929 Intelligent Grading System of Apple Using Neural Network Arbitration

Authors: Ebenezer Obaloluwa Olaniyi

Abstract:

In this paper, an intelligent system has been designed to grade apples as defective or healthy for use in food processing. The work is divided into two phases. In the first phase, image processing techniques were employed to extract the necessary features of the apple. These techniques include grayscale conversion; segmentation, where a threshold value is chosen to separate the foreground of the images from the background; and edge detection to bring out the features in the images. The extracted features were then fed into the neural network in the second phase, a classification phase in which a neural network is employed to separate defective apples from healthy ones. In this phase, the network was trained with backpropagation and tested in feedforward mode. The recognition rate obtained from our system shows that it is more accurate and faster than previous work.
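The threshold-based defect segmentation described in the first phase can be sketched as a simple grader. The intensity and area cutoffs and the synthetic images are invented for illustration; the paper's system feeds the extracted features to a neural network instead of a fixed rule.

```python
import numpy as np

def grade_apple(gray, defect_intensity=80, defect_ratio=0.02):
    """Threshold the grayscale image for dark blemish pixels; an apple whose
    blemish area exceeds the ratio cutoff is graded defective."""
    blemish = (gray < defect_intensity).mean()   # fraction of dark pixels
    return "defective" if blemish > defect_ratio else "healthy"

healthy = np.full((50, 50), 200, dtype=np.uint8)   # uniform bright skin
bruised = healthy.copy()
bruised[10:20, 10:20] = 30                         # a 100-pixel dark bruise
grades = (grade_apple(healthy), grade_apple(bruised))
```

In the full system, such blemish statistics would be one of several features passed to the classifier rather than the final decision rule.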

Keywords: image processing, neural network, apple, intelligent system

Procedia PDF Downloads 392
13928 Characterization of Retinal Pigmented Cell Epithelium Cell Sheet Cultivated on Synthetic Scaffold

Authors: Tan Yong Sheng Edgar, Yeong Wai Yee

Abstract:

Age-related macular degeneration (AMD) is one of the leading causes of blindness. It can cause severe visual loss due to a damaged retinal pigment epithelium (RPE). The RPE is an important component of the retinal tissue: it functions as a transducing boundary for visual perception, making it essential for sight. The RPE also functions as a metabolically complex cell layer that is responsible for local homeostasis and maintenance of the extra-photoreceptor environment. Thus, one suggested method of treating such diseases is to regenerate these RPE cells. We intend to grow these cells on a synthetic scaffold that provides a stable environment and reduces the batch effects found in natural scaffolds. The stiffness of the scaffold will also be investigated to determine the optimal Young's modulus for cultivating these cells. The cells will be grown into a monolayer cell sheet, and their functions, such as the formation of tight junctions and gene expression patterns, will be assessed to evaluate the quality of the cell sheet compared to native RPE tissue.

Keywords: RPE, scaffold, characterization, biomaterials, colloids and nanomedicine

Procedia PDF Downloads 428
13927 Identification System for Grading Banana in Food Processing Industry

Authors: Ebenezer O. Olaniyi, Oyebade K. Oyedotun, Khashman Adnan

Abstract:

In the food industry, high-quality production is required within a limited time to meet demand. In this research work, we have developed a model which can be used to replace the human operator, whose output in production is low and whose decisions are slow because individuals differ in judging which bananas are defective and which are healthy. The model reproduces the visual judgment of human operators in deciding whether a banana is defective or healthy for food production. The work is divided into two phases. The first is an image processing phase, in which techniques such as color conversion, edge detection, thresholding, and morphological operations were employed to extract features for training and testing the network. The features extracted in the first phase were used in the second, classification phase, in which a multilayer perceptron trained with backpropagation was employed. After the network had learned and converged, it was tested in feedforward mode to determine its performance. From this experiment, a recognition rate of 97% was obtained, and the time taken was short, which makes the system suitable for use in the food industry.
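As a stand-in for the paper's multilayer perceptron with backpropagation, here is a minimal single-layer perceptron on an invented blemish-ratio feature. It shows the same train-then-test-in-feedforward-mode cycle in its simplest form.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron rule; a simplified stand-in for MLP + backpropagation."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi          # update only on mistakes
    return w

def predict(w, X):
    """Feedforward pass with the learned weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

# invented feature: fraction of dark (blemish) pixels per banana image
X = np.array([[0.00], [0.05], [0.40], [0.50]])
y = np.array([0, 0, 1, 1])                      # 0 = healthy, 1 = defective
w = train_perceptron(X, y)
preds = predict(w, X)
```

A real grader would use several features and hidden layers; this sketch only separates one linearly separable feature.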

Keywords: banana, food processing, identification system, neural network

Procedia PDF Downloads 465
13926 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit

Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira

Abstract:

Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, high variability in quality and performance is expected even when good homogenization work has been carried out before feeding the processing plants. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined and the types of raw material grouped by them, finally yielding a reference of operational settings for each group. Associating the physical and chemical parameters of a unit operation through benchmarking, or even an optimal reference of metallurgical recovery and product quality, reduces production costs, optimizes use of the mineral resource, and guarantees greater stability in the subsequent processes of the production chain that use the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable approach to the standardization and improvement of mineral processing units. Clustering methods based on decision trees and K-means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team, in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to expose the outputs of the algorithm. The results were measured through the average time to adjust and stabilize the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured.
The results were promising, with a reduction in the adjustment and stabilization time when starting to process a new ore pile, as well as attainment of the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and an optimized life of the mineral deposit.
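The K-means step can be sketched in a few lines of NumPy. The blend descriptors and initial centers below are invented; a production run would use k-means++ initialization and combine the clusters with the decision-tree criteria described above.

```python
import numpy as np

def kmeans(X, centers, iters=10):
    """Plain Lloyd iterations with fixed initial centers
    (kept deterministic for illustration)."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)              # assign each pile to nearest center
        centers = np.array([X[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels, centers

# invented blend descriptors: [grade %, impurity index] for six ore piles
X = np.array([[2.1, 0.30], [2.0, 0.28], [2.2, 0.33],
              [0.9, 0.55], [1.0, 0.60], [0.8, 0.52]])
labels, centers = kmeans(X, centers=[[2.0, 0.3], [1.0, 0.5]])
```

Each resulting cluster would then be associated with its benchmark operational settings, so that a newly characterized pile can be matched to a cluster and processed with that cluster's reference adjustments.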

Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing

Procedia PDF Downloads 140
13925 Nutritional Potential and Functionality of Whey Powder Influenced by Different Processing Temperature and Storage

Authors: Zarmina Gillani, Nuzhat Huma, Aysha Sameen, Mulazim Hussain Bukhari

Abstract:

Whey is an excellent food ingredient owing to its high nutritive value and its functional properties. However, the composition of whey varies depending on the composition of the milk, the processing conditions, the processing method, and its whey protein content. The aim of this study was to prepare a whey powder from raw whey and to determine the influence of different processing temperatures (160 and 180 °C) on its physicochemical and functional properties during 180 days of storage, and on whey protein denaturation. Results showed that temperature significantly (P < 0.05) affects the pH, acidity, non-protein nitrogen (NPN), protein, total soluble solids, fat, and lactose contents. Significantly (P < 0.05) higher foaming capacity (FC), foam stability (FS), and whey protein nitrogen index (WPNI), and lower turbidity and solubility index (SI), were observed in whey powder processed at 160 °C compared to whey powder processed at 180 °C. During the 180 days of storage, slow but progressive changes were noticed in the physicochemical and functional properties of the whey powder. Reverse-phase HPLC analysis revealed a significant (P < 0.05) effect of temperature on the whey protein contents. Denaturation of β-lactoglobulin is followed by that of α-lactalbumin, casein glycomacropeptide (CMP/GMP), and bovine serum albumin (BSA).

Keywords: whey powder, temperature, denaturation, reverse phase, HPLC

Procedia PDF Downloads 292
13924 Tamper Resistance Evaluation Tests with Noise Resources

Authors: Masaya Yoshikawa, Toshiya Asai, Ryoma Matsuhisa, Yusuke Nozaki, Kensaku Asahi

Abstract:

Recently, side-channel attacks, which estimate secret keys using side-channel information such as the power consumption and compromising emanations of cryptographic circuits embedded in hardware, have become a serious problem. In particular, electromagnetic analysis attacks, which exploit the relationship between the information being processed and the electromagnetic fields emitted by a cryptographic circuit, and thereby recover the secret keys, are among the most threatening side-channel attacks. It is therefore important to evaluate the tamper resistance of cryptographic circuits against electromagnetic analysis attacks. The present study performs a basic examination of the tamper resistance of cryptographic circuits using electromagnetic analysis attacks with noise resources.
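The principle behind such evaluations can be illustrated by correlation-based key recovery on simulated, noise-free Hamming-weight leakage. This is a generic sketch of the attack class, not the paper's measurement setup: real attacks correlate guesses against thousands of noisy measured traces.

```python
def hamming_weight(x):
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

SECRET = 0x5A
plaintexts = list(range(256))
# idealized noise-free traces: leakage = Hamming weight of (plaintext XOR key)
traces = [hamming_weight(p ^ SECRET) for p in plaintexts]

def recover_key(plaintexts, traces):
    """Correlate the measured leakage against the leakage model for every
    key guess; the true key yields the highest correlation."""
    return max(range(256),
               key=lambda k: pearson(traces,
                                     [hamming_weight(p ^ k) for p in plaintexts]))

recovered = recover_key(plaintexts, traces)
```

Tamper-resistance evaluation asks how many traces (and how much injected noise) an attacker of this kind needs before the correct key stops standing out.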

Keywords: tamper resistance, cryptographic circuit, hardware security evaluation, noise resources

Procedia PDF Downloads 495
13923 Predictive Modelling Approaches in Food Processing and Safety

Authors: Amandeep Sharma, Digvaijay Verma, Ruplal Choudhary

Abstract:

Food processing is a global activity that helps in the better handling of agricultural produce, including dairy, meat, and fish. The operations carried out in the food industry include raw-material quality authentication; sorting and grading; processing into various products using thermal treatments such as heating, freezing, and chilling; packaging; and storage at the appropriate temperature to maximize the shelf life of the products. All this is done to safeguard the food products and to ensure distribution up to the consumer. Approaches to developing predictive models based on mathematical or statistical tools, or on empirical models, have been reported for various milk processing activities, including plant maintenance and wastage. Recently, AI has become a key factor in the fourth industrial revolution. AI plays a vital role in the food industry, not only in quality and food security but also in areas such as manufacturing, packaging, and cleaning. A new conceptual model was developed which shows that a smaller sample size would suffice, as only spectra would be required to predict the other quality values, leading to savings on the raw materials and chemicals otherwise used for experimentation during research and new-product development. A futuristic extension would be to couple these tools with mobile phones, through software development, for real-time field application in quality checking and product traceability.
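A minimal example of the "predict from spectra only" idea is an ordinary least-squares calibration. The spectra and reference values below are synthetic and noise-free; real calibrations need validation sets, noise handling, and usually regularized methods such as PLS.

```python
import numpy as np

rng = np.random.default_rng(0)
spectra = rng.random((30, 5))                 # 30 samples x 5 synthetic wavelengths
true_coef = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
protein = spectra @ true_coef + 3.0           # noise-free reference lab values

# least-squares calibration: predict the lab value directly from the spectrum
X = np.hstack([spectra, np.ones((30, 1))])    # add an intercept column
coef, *_ = np.linalg.lstsq(X, protein, rcond=None)
predicted = X @ coef
```

Once calibrated, only a spectrum is needed for each new sample, replacing the wet-chemistry assay and the raw materials and chemicals it consumes.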

Keywords: predictive modelling, ANN, AI, food

Procedia PDF Downloads 78
13922 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the global Internet were developed; information in Romanian is of special interest to us. Obtaining the mentioned tools requires several steps, divided into a preparatory stage and a processing stage. During the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, amounting to more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the Petri net formalism was used. We deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) analysis, simulation, state space analysis, and invariant analysis, were used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
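The token-game semantics of a Petri net, which underlies the reachability analysis mentioned above, can be sketched directly. The three-person room-corridor-outside net below is a toy, not the paper's model, and it omits the timing extension.

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from input places, produce tokens in output places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# toy evacuation net: 3 people move room -> corridor -> outside
to_corridor = ({"room": 1}, {"corridor": 1})
to_outside = ({"corridor": 1}, {"outside": 1})
marking = {"room": 3, "corridor": 0, "outside": 0}
steps = 0
while enabled(marking, to_corridor[0]) or enabled(marking, to_outside[0]):
    for pre, post in (to_corridor, to_outside):
        if enabled(marking, pre):
            marking = fire(marking, pre, post)
            steps += 1
```

Enumerating all markings reachable this way yields the reachability tree; tools such as PIPE build that tree (and its stochastic, timed variants) automatically.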

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 194
13921 Challenges and Opportunities: One Stop Processing for the Automation of Indonesian Large-Scale Topographic Base Map Using Airborne LiDAR Data

Authors: Elyta Widyaningrum

Abstract:

LiDAR data acquisition has been recognized as one of the fastest solutions for providing the base data for topographic base mapping in Indonesia. The challenge of accelerating the provision of large-scale topographic base maps, which serve as a basis for development planning, creates an opportunity to implement an automated scheme in the map production process. One-stop processing will also accelerate map provision, especially conformance with the Indonesian fundamental spatial data catalog derived from ISO 19110 and geospatial database integration. Thus, automated LiDAR classification, DTM generation, and feature extraction will be conducted in a single GIS software environment to form all layers of the topographic base maps. The quality of the automated topographic base map will be assessed and analyzed based on its completeness, correctness, contiguity, consistency, and possible customization.
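
Two of the quality measures named above have standard definitions in automated feature-extraction assessment, which a small sketch can make concrete; the TP/FP/FN counts below are hypothetical and would in practice come from comparing extracted map features against a reference data set.

```python
# Sketch of the completeness and correctness quality measures, using
# their standard definitions from feature-extraction assessment.
# TP = extracted features matching the reference, FP = extracted but
# wrong, FN = present in the reference but missed.

def completeness(tp, fn):
    """Share of reference features that were actually extracted."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Share of extracted features that are correct."""
    return tp / (tp + fp)

# Hypothetical counts for illustration only:
tp, fp, fn = 90, 10, 20
print(f"completeness={completeness(tp, fn):.2f}, "
      f"correctness={correctness(tp, fp):.2f}")
```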

Keywords: automation, GIS environment, LiDAR processing, map quality

Procedia PDF Downloads 359
13920 High Efficient Biohydrogen Production from Cassava Starch Processing Wastewater by Two Stage Thermophilic Fermentation and Electrohydrogenesis

Authors: Peerawat Khongkliang, Prawit Kongjan, Tsuyoshi Imai, Poonsuk Prasertsan, Sompong O-Thong

Abstract:

A two-stage thermophilic fermentation and electrohydrogenesis process was used to convert cassava starch processing wastewater into hydrogen gas. The maximum hydrogen yield from the fermentation stage, by Thermoanaerobacterium thermosaccharolyticum PSU-2, was 248 mL H2/g-COD at an optimal pH of 6.5. An optimum hydrogen production rate of 820 mL/L/d and a yield of 200 mL/g-COD were obtained at an HRT of 2 days in the fermentation stage. The cassava starch processing wastewater fermentation effluent consisted of acetic acid, butyric acid, and propionic acid. The effluent from the fermentation stage was used as feedstock for hydrogen production in a microbial electrolysis cell (MEC) at an applied voltage of 0.6 V in the second stage, where an additional 657 mL H2/g-COD was produced. Energy efficiency based on the electricity needed for the MEC was 330%, with COD removal of 95%. The overall hydrogen yield was 800-900 mL H2/g-COD. Microbial community analysis of the electrohydrogenesis stage by DGGE shows that exoelectrogens belonging to Acidiphilium sp., Geobacter sulfurreducens, and Thermincola sp. were dominant at the anode. These results show that the two-stage thermophilic fermentation and electrohydrogenesis process improved hydrogen production performance, with high hydrogen yields, high gas production rates, and high COD removal efficiency.
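
As a back-of-the-envelope consistency check on the figures quoted above (assuming the stage yields are simply additive, which the abstract does not state explicitly), the two stage yields together fall inside the reported overall range:

```python
# Consistency check of the two-stage yield figures: ~200 mL H2/g-COD from
# the dark-fermentation stage plus ~657 mL H2/g-COD from the MEC stage
# lands inside the reported overall range of 800-900 mL H2/g-COD.
# Additivity of the stage yields is an assumption for illustration.

fermentation_yield = 200   # mL H2 per g-COD (fermentation stage)
mec_yield = 657            # mL H2 per g-COD (electrohydrogenesis stage)

overall = fermentation_yield + mec_yield
print(overall)  # -> 857
assert 800 <= overall <= 900
```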

Keywords: cassava starch processing wastewater, biohydrogen, thermophilic fermentation, microbial electrolysis cell

Procedia PDF Downloads 335
13919 Role of Vision Centers in Eliminating Avoidable Blindness Caused Due to Uncorrected Refractive Error in Rural South India

Authors: Ranitha Guna Selvi D, Ramakrishnan R, Mohideen Abdul Kader

Abstract:

Purpose: To study the role of vision centers in managing preventable blindness through refractive error correction in rural South India. Methods: A retrospective analysis of patients attending 15 vision centers in rural South India from January 2021 to December 2021 was performed. The medical records of 1,08,581 patients from the 15 vision centers, comprising 79,562 newly registered patients and 29,019 review patients, were included for data analysis. All patients registered at a vision center underwent a basic eye examination, including visual acuity, IOP measurement, slit-lamp examination, retinoscopy, and fundus examination. Results: A total of 1,08,581 patients were included in the study; 79,562 were newly registered at a vision center and 29,019 were review patients. Among them, 52,201 (48.1%) were male and 56,308 (51.9%) were female. The mean age of the examined patients was 41.03 ± 20.9 years (standard deviation), with a range of 1-113 years. Mean presenting visual acuity was 0.31 ± 0.5 in the right eye and 0.31 ± 0.4 in the left eye. Of the 1,08,581 patients, 22,770 had uncorrected refractive error in the right eye and 22,721 in the left eye. A glass prescription was given to 17,178 (15.8%) patients, and 8,109 (7.5%) patients were referred to the base hospital for a specialty clinic expert opinion or for cataract surgery. Conclusion: A vision center utilizing teleconsultation as a comprehensive eye screening unit is a very effective tool for reducing avoidable visual impairment caused by uncorrected refractive error. The vision center model is believed to be efficient, as it facilitates early detection and management of uncorrected refractive errors.

Keywords: refractive error, uncorrected refractive error, vision center, vision technician, teleconsultation

Procedia PDF Downloads 134
13918 Multichannel Surface Electromyography Trajectories for Hand Movement Recognition Using Intrasubject and Intersubject Evaluations

Authors: Christina Adly, Meena Abdelmeseeh, Tamer Basha

Abstract:

This paper proposes a system for hand movement recognition using multichannel surface EMG (sEMG) signals obtained from 40 subjects performing 40 different exercises, available in the Ninapro (Non-Invasive Adaptive Prosthetics) database. First, we applied processing methods to the raw sEMG signals to convert them to their amplitudes. Second, we used deep learning methods, passing the preprocessed signals to fully connected neural networks (FCNNs) and recurrent neural networks (RNNs) with long short-term memory (LSTM). Using intrasubject evaluation, the accuracy of the FCNN is 72%, with a training time of around 76 minutes, while the RNN's accuracy is 79.9%, with a training time of 8 minutes and 22 seconds. Third, we applied postprocessing methods to improve the accuracy, namely majority voting (MV) and the movement error rate (MER). The accuracy after applying MV is 75% and 86% for the FCNN and RNN, respectively. The MER value has an inverse relationship with the prediction delay as the window length used for the MV is varied. The final part of the study uses the RNN with intersubject evaluation. The experimental results showed that, to obtain good test accuracy with reasonable processing time, around 20 subjects should be used.
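
The majority-voting step above replaces each per-window class prediction with the most frequent prediction in a sliding window of recent outputs, smoothing out isolated misclassifications; a minimal sketch (with an assumed window length, since the tuning is what drives the MER/delay tradeoff) might look like this:

```python
# Sketch of majority-voting (MV) post-processing over a stream of
# per-window class predictions. The window length is a tunable
# assumption; longer windows smooth more but add prediction delay.

from collections import Counter

def majority_vote(predictions, window=5):
    """Smooth a sequence of class predictions with a sliding-window vote."""
    smoothed = []
    for i in range(len(predictions)):
        start = max(0, i - window + 1)
        votes = Counter(predictions[start:i + 1])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed

raw = [3, 3, 7, 3, 3, 3, 1, 3]       # two isolated misclassifications
print(majority_vote(raw))            # -> [3, 3, 3, 3, 3, 3, 3, 3]
```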

Keywords: hand movement recognition, recurrent neural network, movement error rate, intrasubject evaluation, intersubject evaluation

Procedia PDF Downloads 132
13917 A Secure System for Handling Information from Heterogeneous Sources

Authors: Shoohira Aftab, Hammad Afzal

Abstract:

Information integration is a well-known procedure for providing a consolidated view of sets of heterogeneous information sources. It not only enables better statistical analysis of information but also lets users query without any knowledge of the underlying heterogeneous information sources. The problem of providing a consolidated view of information can be handled using semantic data (information stored in such a way that it is understandable by machines and integrable without manual human intervention). However, integrating information using Semantic Web technology without enforcing any access management would increase privacy and confidentiality concerns. In this research, we have designed and developed a framework that allows information from heterogeneous formats to be consolidated, thus resolving the issue of interoperability. We have also devised an access control system for defining explicit privacy constraints. We designed and applied our framework to both semantic and non-semantic data from heterogeneous sources. Our approach is validated using scenario-based testing.
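
The two ideas above, consolidating heterogeneous records into one machine-queryable store and enforcing explicit access constraints at query time, can be sketched in pure Python with subject-predicate-object triples; the field names and the access-control policy below are illustrative assumptions, not the paper's framework.

```python
# Sketch: consolidate records from heterogeneous formats into one set of
# (subject, predicate, object) triples, then enforce an explicit
# access-control policy at query time. All names here are illustrative.

def from_csv_row(row):           # e.g. a row from a CSV source
    return {("person:" + row[0], "hasAge", row[1])}

def from_json_obj(obj):          # e.g. a record from a JSON API
    return {("person:" + obj["name"], "hasEmail", obj["email"])}

# Consolidated view: one triple store built from both sources.
store = from_csv_row(["alice", "34"]) | from_json_obj(
    {"name": "alice", "email": "a@example.org"})

# Explicit privacy constraints: hide email addresses from guests.
DENIED = {"guest": {"hasEmail"}, "admin": set()}

def query(store, role, predicate=None):
    return {t for t in store
            if t[1] not in DENIED[role]
            and (predicate is None or t[1] == predicate)}

print(query(store, "guest"))   # only the hasAge triple is visible
print(query(store, "admin"))   # both triples are visible
```

A production framework would use an RDF store and a richer policy language, but the pattern of merging sources into triples and filtering queries through a policy is the same.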

Keywords: information integration, semantic data, interoperability, security, access control system

Procedia PDF Downloads 344
13916 Modernization of the Economic Price Adjustment Software

Authors: Roger L. Goodwin

Abstract:

The US Consumer Price Indices (CPIs) measure hundreds of items in the US economy, and many social programs and government benefits are indexed to the CPIs. From the mid to late 1990s, a Congressional Advisory Committee conducted extensive research into changes to the CPI. One conclusion from that research is that, beyond the existence of alternative estimators for the CPI, any fundamental change to the CPI will affect many government programs. The purpose of this project is to modernize an existing process. This paper shows the development of a small visual software product that documents the Economic Price Adjustment (EPA) for long-term contracts. The existing workbook does not provide the flexibility to calculate EPAs where the base month and the option month differ, nor does it provide automated error checking. The small visual software product provides this additional flexibility and error checking. This paper presents the feedback on the project.
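
The base-month/option-month calculation the tool generalizes can be sketched with the standard index-ratio adjustment; the index values below are hypothetical, and the formula is the usual escalation clause form, not necessarily the exact clause of any particular contract.

```python
# Sketch of an economic price adjustment: a contract price set in a base
# month is escalated by the ratio of the CPI in the option month to the
# CPI in the base month. Index values below are hypothetical.

def adjust_price(base_price, cpi_base_month, cpi_option_month):
    """Escalate a base-month price to an option month via the CPI ratio."""
    if cpi_base_month <= 0 or cpi_option_month <= 0:
        raise ValueError("index values must be positive")  # error checking
    return base_price * cpi_option_month / cpi_base_month

print(adjust_price(1000.0, 250.0, 260.0))  # -> 1040.0
```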

Keywords: Consumer Price Index, Economic Price Adjustment, contracts, visualization tools, database, reports, forms, event procedures

Procedia PDF Downloads 312
13915 Comparative Analysis of Two Approaches to Joint Signal Detection, ToA and AoA Estimation in Multi-Element Antenna Arrays

Authors: Olesya Bolkhovskaya, Alexey Davydov, Alexander Maltsev

Abstract:

In this paper, two approaches to joint signal detection, time of arrival (ToA), and angle of arrival (AoA) estimation in a multi-element antenna array are investigated. Two scenarios were considered: in the first, the waveform of the useful signal is known a priori; in the second, the waveform of the desired signal is unknown. For the first scenario, antenna array signal processing based on multi-element matched filtering (MF), followed by a non-coherent detection scheme and maximum likelihood (ML) parameter estimation blocks, is exploited. For the second scenario, signal processing based on estimation of the covariance matrix of the antenna array elements, followed by eigenvector analysis and ML parameter estimation blocks, is applied. The performance characteristics of both signal processing schemes are thoroughly investigated and compared for different useful-signal and noise parameters.
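
For the known-waveform scenario, the per-element processing chain (matched filter, threshold test, ToA at the correlation peak) can be sketched for a single element; the waveform, noise level, and threshold below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of matched-filter detection and ToA estimation for one antenna
# element: correlate the received samples with the known waveform,
# detect via a threshold, and take the ToA at the correlation peak.

import random

random.seed(0)
template = [1.0, -1.0, 1.0, 1.0, -1.0]   # known waveform (assumed)
true_toa = 40

received = [0.1 * random.gauss(0, 1) for _ in range(100)]  # noise floor
for i, t in enumerate(template):
    received[true_toa + i] += t           # embed the signal at sample 40

def correlate(x, h):
    """Sliding correlation of x against template h (valid region only)."""
    return [sum(x[n + k] * h[k] for k in range(len(h)))
            for n in range(len(x) - len(h) + 1)]

mf_out = correlate(received, template)
toa_hat = max(range(len(mf_out)), key=lambda n: mf_out[n])
energy = sum(t * t for t in template)
detected = mf_out[toa_hat] > 0.5 * energy  # assumed detection threshold

print(detected, toa_hat)  # expect detection, with ToA estimate near 40
```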

Keywords: antenna array, signal detection, ToA, AoA estimation

Procedia PDF Downloads 487
13914 Lab Bench for Synthetic Aperture Radar Imaging System

Authors: Karthiyayini Nagarajan, P. V. Ramakrishna

Abstract:

Radar imaging techniques provide extensive applications in the field of remote sensing, chief among them Synthetic Aperture Radar (SAR), which provides high-resolution target images. This paper puts forward effective and realizable signal generation and processing for SAR images. The major units in the system are the camera, the signal generation unit, the signal processing unit, and the display screen. The real radio channel is replaced by a mathematical model based on an optical image, from which a reflected-signal model is calculated in real time. The signal generation unit realizes the algorithm and forms the radar reflection model. The signal processing unit provides range and azimuth resolution through matched filtering and spectrum analysis to form the radar image on the display screen. The restored image has the same quality as the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on a Stratix III device as a lab bench that works in real time, for studying and investigating radar imaging rudiments and signal processing schemes for educational and research purposes.
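
The range resolution that the matched-filtering step delivers follows the standard pulse-compression relation ΔR = c / (2B); a quick numerical illustration (with a hypothetical chirp bandwidth, not a parameter of the lab bench itself) is:

```python
# Quick illustration of slant-range resolution after pulse compression
# of a linear-FM chirp: delta_R = c / (2 * B). The bandwidth below is a
# hypothetical example value.

C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Slant-range resolution achievable from a chirp of given bandwidth."""
    return C / (2.0 * bandwidth_hz)

print(range_resolution(150e6))  # 150 MHz chirp -> 1.0 m resolution
```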

Keywords: synthetic aperture radar, radio reflection model, lab bench, imaging engineering

Procedia PDF Downloads 487
13913 Motor Controller Implementation Using Model Based Design

Authors: Cau Tran, Tu Nguyen, Tien Pham

Abstract:

Model-based design (MBD) is a mathematical and visual technique for addressing design issues in the fields of communications, signal processing, and complex control systems. It is utilized in many automotive, aerospace, industrial, and motion control applications. In model-based design, virtual models are at the center of the software development process, and the method is widely used for creating embedded software. In this study, the LAT motor is modeled in a simulation environment, and the LAT motor control is designed with a cascade structure consisting of a speed loop and a current loop, each closed by a controller with a PID structure. The PID controller is created for the model using traditional design principles, based on techniques and motor parameters that match the design goals. The MBD approach is then used to build the embedded software for motor control. The paper is divided into three sections. The first introduces the design process and the benefits and drawbacks of the MBD technique. The design of the control software for LAT motors is the main topic of the second section. The experimental results are the subject of the last section.
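
The PID structure used in the cascade loops can be sketched in its standard discrete form; the gains, sample time, and toy plant below are illustrative assumptions, not the tuned values for the LAT motor.

```python
# Minimal discrete PID sketch in the standard parallel form. The gains,
# sample time, and the crude first-order plant are illustrative only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a trivial first-order plant toward a speed setpoint of 1.0.
pid, speed = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01), 0.0
for _ in range(1000):
    u = pid.update(1.0, speed)
    speed += (u - speed) * 0.01   # crude plant: dx/dt = u - x

print(round(speed, 3))  # settles near the setpoint of 1.0
```

In an MBD flow this controller would live inside the simulation model, with the embedded C code generated from it rather than hand-written.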

Keywords: model based design, limited angle torque, intellectual property core, hardware description language, controller area network, user datagram protocol

Procedia PDF Downloads 92
13912 Using Electrical Impedance Tomography to Control a Robot

Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi

Abstract:

Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an EIT device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes hardware and software sections to perform the imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container; this image is used to navigate a 3-DOF robotic arm to the exact location of the target object. The data set for the impedance imaging is obtained through repeated current injections and voltage measurements between all electrode pairs. After the calculations necessary to obtain the impedance, the information is transmitted to the computer. These data are fed into MATLAB, which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated with the MATLAB Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm, and the arm moves to the desired tissue on the user's command.
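
The final step, turning target coordinates into joint angles, depends on the arm's actual geometry; as a simplified illustration, the standard closed-form inverse kinematics for a 2-link planar arm (with assumed link lengths, not the device's 3-DOF arm) looks like this:

```python
# Simplified sketch of converting target coordinates to joint angles:
# closed-form inverse kinematics for a 2-link planar arm. Link lengths
# are assumed values; a full 3-DOF solution adds a base rotation.

import math

L1, L2 = 10.0, 8.0   # link lengths (assumed units)

def inverse_kinematics(x, y):
    """Return (shoulder, elbow) angles in radians for target (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(cos_elbow) > 1:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)               # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(
        L2 * math.sin(elbow), L1 + L2 * math.cos(elbow))
    return shoulder, elbow

# Verify by forward kinematics: the end effector lands on the target.
s, e = inverse_kinematics(12.0, 6.0)
fx = L1 * math.cos(s) + L2 * math.cos(s + e)
fy = L1 * math.sin(s) + L2 * math.sin(s + e)
print(round(fx, 3), round(fy, 3))  # -> 12.0 6.0
```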

Keywords: electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography

Procedia PDF Downloads 269
13911 Effect of Two Different Methods for Juice Processing on the Anthocyanins and Polyphenolics of Blueberry (Vaccinium corymbosum)

Authors: Onur Ercan, Buket Askin, Erdogan Kucukoner

Abstract:

Blueberry (Vaccinium corymbosum, cv. Bluegold) juice has become a popular beverage due to its nutritional value, including vitamins, minerals, and antioxidants. In this study, the effects of pressing, mashing, enzymatic treatment, and pasteurization on the anthocyanins, colour, and polyphenolics of blueberry juice (BJ) were studied. BJ was produced by two different methods: direct juice extraction (DJE) and a mash treatment process (MTP). After crude blueberry juice (CBJ) production, the samples were treated with a commercial enzyme preparation [Novoferm-61 (Novozymes A/S), 2-10 mL/L] to break down the hydrocolloid polysaccharides, mainly pectin and starch, with the enzyme added at various concentrations. The highest transmittance, 66.53%, was obtained with Novoferm-61 at a concentration of 2 mL/L. After enzymatic treatment, clarification trials were carried out on the enzymatically treated BJs with various amounts of bentonite (10%, w/v), gelatin (1%, w/v), and kieselsol (15%, v/v), and the turbidities of the clarified samples were determined. However, there were no significant differences in transmittance (%) between the samples, so only enzymatic treatment was applied in blueberry juice processing (DDBJ, depectinized direct blueberry juice). An initial pressing test made to evaluate press function showed that pressing fresh blueberries without any other processing did not render adequate juice, due to the lack of liquefaction. Therefore, the blueberries were mashed into small pieces (3 mm) before the enzymatic treatments and clarification trials were performed. Finally, both BJ samples were pasteurized, and their compositions, colour properties, polyphenols, and antioxidant properties were compared. Enzymatic treatment caused a significant reduction (30%) in anthocyanin (ACN) content in direct blueberry juice processing (DBJ), while it caused a significant increase in the mash treatment process (MTP). Overall anthocyanin levels were higher in treated samples after each processing step for the MTP samples, and polyphenolic levels were slightly higher for both processes (DBJ and MTP); ACNs and polyphenolics were reduced only after pasteurization. These results show that both methods are suitable for obtaining fresh juice. However, since anthocyanin content, phenolic content, antioxidant activity, and juice yield were all higher with the MTP method than with the DBJ method during the processing stages we examined, the MTP method should be preferred for processing blueberries into juice.

Keywords: anthocyanins, blueberry, depectinization, polyphenols

Procedia PDF Downloads 84