Search results for: Computer memory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1751

1361 An Embedded System Design for SRAM SEU Test

Authors: Kyoung Kun Lee, Soongyu Kwon, Jong Tae Kim

Abstract:

An embedded system for SEU (single event upset) testing must be designed to prevent system failure caused by high-energy particles while SEUs are being measured. An SEU is a phenomenon in which data stored in a semiconductor device are temporarily changed by high-energy particles. In this paper, we present an embedded system for SRAM (static random access memory) SEU testing. The SRAMs are placed on the DUT (device under test), which is separated from the control board that manages the DUT and measures the occurrence of SEUs. The design must prevent system failure while managing the DUT and making accurate measurements of SEUs. We measured the occurrence of SEUs in five different SRAMs at three cyclotron beam energies: 30, 35, and 40 MeV. The average number of SEUs per SRAM ranged from 3.75 to 261.00.
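
A minimal host-side sketch of the counting step, assuming the control board writes a known pattern to the SRAM under test and later reads the memory back so that upsets can be counted as flipped bits; the pattern, word width, and read-back interface are illustrative assumptions, not details from the paper:

```python
import numpy as np

def count_seus(written: np.ndarray, read_back: np.ndarray) -> int:
    """Count single event upsets as the number of flipped bits
    between the pattern written to the SRAM and the data read back."""
    # XOR exposes every flipped bit; a popcount over all words gives the SEU count.
    flipped = np.bitwise_xor(written, read_back)
    return int(np.unpackbits(flipped.view(np.uint8)).sum())

# Illustrative usage: a 32K x 8-bit SRAM filled with a checkerboard pattern.
pattern = np.full(32 * 1024, 0x55, dtype=np.uint8)
observed = pattern.copy()
observed[[10, 999, 20000]] ^= 0x01          # simulate three single-bit upsets
print(count_seus(pattern, observed))        # -> 3
```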

Keywords: embedded system, single event upset, SRAM

1360 The Influence of the Intellectual Capital on the Firms’ Market Value: A Study of Listed Firms in the Tehran Stock Exchange (TSE)

Authors: Bita Mashayekhi, Seyed Meisam Tabatabaie Nasab

Abstract:

Intellectual capital is one of the most valuable and important components of the intangible assets of enterprises, especially knowledge-based enterprises. Given the widening gap between the market value and the book value of companies, intellectual capital is one of the components that can explain this gap. This paper uses the value added efficiency of three components, capital employed, human capital, and structural capital, to measure the intellectual capital efficiency of Iranian industry groups listed on the Tehran Stock Exchange (TSE), using an eight-year data set from 2005 to 2012. In order to analyze the effect of intellectual capital on the market-to-book value ratio of the companies, the data set was divided into 10 industries (Banking, Pharmaceutical, Metals & Mineral Nonmetallic, Food, Computer, Building, Investments, Chemical, Cement, and Automotive) and a panel-data approach was applied to estimate pooled OLS regressions. The results show that the value added efficiency of capital employed has a significant positive relation with increasing market value in the Banking, Metals & Mineral Nonmetallic, Food, Computer, Chemical, and Cement industries, and that the value added efficiency of structural capital has a significant positive relation with increasing market value in the Banking, Pharmaceutical, and Computer industry groups. Value added itself showed a negative relation in the Banking and Pharmaceutical industry groups and a positive relation in the Computer and Automotive industry groups. Among the studied industries, the computer industry exhibits the widest gap between market value and book value with respect to its intellectual capital.
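
A minimal sketch of the pooled OLS step, assuming firm-year observations with the market-to-book ratio as the dependent variable and the three value added efficiency components as regressors; the variable names and the use of plain least squares are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def pooled_ols(mtb, vaca, vahu, stva):
    """Pooled OLS of the market-to-book ratio on the value added efficiency of
    capital employed (VACA), human capital (VAHU) and structural capital (STVA)."""
    X = np.column_stack([np.ones_like(vaca), vaca, vahu, stva])  # intercept + regressors
    beta, *_ = np.linalg.lstsq(X, mtb, rcond=None)
    residuals = mtb - X @ beta
    return beta, residuals

# Illustrative usage with made-up firm-year data.
rng = np.random.default_rng(0)
vaca, vahu, stva = rng.normal(size=(3, 200))
mtb = 1.0 + 0.5 * vaca + 0.8 * vahu + 0.2 * stva + rng.normal(scale=0.1, size=200)
beta, _ = pooled_ols(mtb, vaca, vahu, stva)
print(beta)   # roughly [1.0, 0.5, 0.8, 0.2]
```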

Keywords: Capital Employed, Human Capital, Intellectual Capital, Market-to-Book Value, Structural Capital, Value Added Efficiency.

1359 Web Log Mining by an Improved AprioriAll Algorithm

Authors: Wang Tong, He Pi-lian

Abstract:

This paper sets forth the possibility and importance of applying data mining to Web log mining and points out some problems of conventional search engines. It then offers an improved algorithm based on the original AprioriAll algorithm, which has been widely used in Web log mining. The new algorithm adds the user ID property at every step of producing the candidate set and at every step of scanning the database, using it to decide whether an item in the candidate set should be put into the large set from which the next candidate set is produced. At the same time, in order to reduce the number of database scans, the new algorithm uses the Apriori property to limit the size of the candidate set as soon as it is produced. Test results show that the improved algorithm has lower time and space complexity, restrains noise better, and fits the memory capacity better.
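
A minimal sketch of the counting idea, assuming Web log sessions have already been grouped into per-user item sequences; support is counted per user ID so that repeated visits by one user do not inflate a candidate's count, and candidates below the threshold are pruned before the next round. The data layout and helper names are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict
from itertools import combinations

def frequent_pairs(user_sessions, min_support):
    """One Apriori-style pass: count candidate item pairs, where each candidate's
    support is the number of distinct user IDs whose session contains it,
    then prune candidates below min_support."""
    support = defaultdict(set)
    for user_id, session in user_sessions.items():
        items = sorted(set(session))
        for pair in combinations(items, 2):
            support[pair].add(user_id)          # the user ID decides the support
    # Prune immediately so the next candidate set stays small.
    return {pair: len(users) for pair, users in support.items()
            if len(users) >= min_support}

# Illustrative usage: sessions keyed by user ID.
logs = {
    "u1": ["/home", "/docs", "/buy"],
    "u2": ["/home", "/docs"],
    "u3": ["/home", "/buy", "/docs", "/docs"],
}
print(frequent_pairs(logs, min_support=2))
```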

Keywords: Candidate Sets Pruning, Data Mining, Improved Algorithm, Noise Restrain, Web Log

1358 High Wire Act: the Perils, Pitfalls and Possibilities of Online Discussions

Authors: Karen Armstrong

Abstract:

Online discussions are an important component of both blended and online courses. This paper examines the varieties of online discussions and the perils, pitfalls and possibilities of this rather new technological tool for enhanced learning. The discussion begins with possible perils and pitfalls inherent in this educational tool and moves to a consideration of the advantages of the varieties of online discussions feasible for use in teacher education programs.

Keywords: online discussions, computer-mediated communication (CMC), computer-supported collaborative learning (CSCL), e-learning, teacher education

1357 Corporate Information System Educational Center

Authors: Alquliyev R.M., Kazimov T.H., Mahmudova Sh.C., Mahmudova R.Sh.

Abstract:

This work describes the Educational Center created and successfully maintained at the Institute of Information Technologies of the National Academy of Sciences (NAS) of Azerbaijan. By decision of the Supreme Attestation Commission under the President of the Republic of Azerbaijan and the Presidium of the National Academy of Sciences of Azerbaijan, the Institute of Information Technologies was entrusted with organizing training courses in computer science for all post-graduate students and dissertation candidates of the republic and with administering their candidate-minimum examinations. The Educational Center therefore teaches computer science to post-graduate students and dissertation candidates, provides a scientific and methodological manual on the effective application of new information technologies in their research work, and administers the candidate-minimum examinations. Information and communication technologies offer new opportunities and prospects for teaching and training. The new level of literacy demands the creation of an essentially new technology for obtaining scientific knowledge. Methods of training and development, social and professional requirements, and the globalization of communicative, economic, and political projects connected with building a new society all depend on the level of application of information and communication technologies in the educational process. Computer technologies develop the ideas of programmed training and open completely new, unexplored technological ways of training connected with the unique capabilities of modern computers and telecommunications. Computer-based training technologies are processes of preparing and transferring information to the trainee by means of a computer. Scientific and technical progress, as well as the global spread of technologies created in the most developed countries of the world, is the main proof of the leading role of education in the 21st century. The information society needs individuals with modern knowledge. In practice, all technologies that use special technical information means (computer, audio, video) are called information technologies of education.

Keywords: Educational Center, post-graduate, database.

1356 A Practical Distributed String Matching Algorithm Architecture and Implementation

Authors: Bi Kun, Gu Nai-jie, Tu Kun, Liu Xiao-hu, Liu Gang

Abstract:

Traditional parallel single-pattern string matching algorithms are usually based on the PRAM computation model, and they concentrate on cost-optimal design and theoretical speed. Based on the distributed string matching algorithm proposed by CHEN, a practical distributed string matching architecture is proposed in this paper, together with an improved single-pattern matching algorithm based on a variant of the Boyer-Moore algorithm. We implemented our algorithm on this architecture, and the experiments show that it is practical and efficient on a distributed-memory machine. Its computational complexity is O(n/p + m), where n is the length of the text, m is the length of the pattern, and p is the number of processors.
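
A minimal sketch of the text-partitioning idea behind the O(n/p + m) bound: the text is split into p chunks that overlap by m-1 characters so that no match spanning a chunk boundary is lost, and each chunk is searched independently. The sketch uses a plain substring search in place of the authors' Boyer-Moore variant, and the chunking scheme shown is a common assumption rather than the paper's exact design:

```python
def distributed_match(text, pattern, p):
    """Split the text into p chunks that overlap by len(pattern)-1 characters,
    search each chunk independently (on a real cluster, one chunk per processor),
    and merge the match positions."""
    n, m = len(text), len(pattern)
    chunk = (n + p - 1) // p                      # ceil(n / p) characters per chunk
    matches = set()
    for k in range(p):
        start = k * chunk
        end = min(start + chunk + m - 1, n)       # m-1 overlap with the next chunk
        piece = text[start:end]
        i = piece.find(pattern)
        while i != -1:
            matches.add(start + i)                # translate back to global positions
            i = piece.find(pattern, i + 1)
    return sorted(matches)

print(distributed_match("abracadabra abracadabra", "abra", p=3))   # -> [0, 7, 12, 19]
```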

Keywords: Boyer-Moore algorithm, distributed algorithm, parallel string matching, string matching.

1355 Momentum and Heat Transfer in the Flow of a Viscoelastic Fluid Past a Porous Flat Plate Subject to Suction or Blowing

Authors: Motahar Reza, Anadi Sankar Gupta

Abstract:

An analysis is made of the flow of an incompressible viscoelastic fluid (of small memory) over a porous plate subject to suction or blowing. It is found that the velocity at a point increases with increasing elasticity of the fluid. It is also shown that the wall shear stress depends only on the suction and is independent of the material properties of the fluid. No steady solution for the velocity distribution exists when there is blowing at the plate. The temperature distribution in the boundary layer is determined, and it is found that the temperature at a point decreases with increasing elasticity of the fluid.

Keywords: Viscoelastic fluid, Flow past a porous plate, Heat transfer

1354 Design of a Computer Vision Based Exercise Video Game for Senior Citizens

Authors: June Tay, Ivy Chia

Abstract:

There are numerous changes, both mental and physical, that take place as people age. We need to understand the different aspects required for healthy living, including meeting nutritional needs, regular physical activity to maintain agility, sufficient rest and sleep for physical and mental well-being, social engagement to avoid the risk of social isolation and depression, and access to healthcare to detect and manage chronic conditions. Promoting physical activity for an ageing population is necessary, as many may have led sedentary lifestyles for some time. In our study, we evaluate the considerations involved in designing a computer vision video game for the elderly. The game needs low-impact activities, such as stretching and gentle movements, because some elderly individuals may have joint pain or mobility issues. The exercise game should consist of simple movements that are easy to follow and remember, and it should be fun and enjoyable so that players are motivated to exercise. Social engagement can keep the elderly motivated, competitive, and more willing to engage in the game exercises; elderly players can compare their game scores and try to improve them. We propose a computer vision-based video game for the elderly that captures and tracks the movement of the player's hand as it pushes a ball on the screen into a circle. It can be easily set up using a PC laptop with a webcam. Our video game adhered to the design framework we employed, encompassing ease of use, a simple graphical interface, an easy-to-play game exercise, and fun gameplay.
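
A minimal sketch of the tracking-and-goal loop described above, using OpenCV (assumed to be version 4.x) background subtraction to locate the moving hand and nudge a virtual ball toward a goal circle; the screen coordinates, thresholds, and game rules here are illustrative assumptions rather than the authors' actual design:

```python
import cv2
import numpy as np

TARGET = (520, 240)                             # centre of the goal circle (illustrative)
RADIUS = 60

cap = cv2.VideoCapture(0)                       # the laptop webcam
subtractor = cv2.createBackgroundSubtractorMOG2()
ball = np.array([100.0, 240.0])                 # the ball the player pushes

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # moving pixels (the player's hand)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        m = cv2.moments(hand)
        if m["m00"] > 0:
            hx, hy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            # If the hand is near the ball, nudge the ball away from the hand.
            if np.hypot(hx - ball[0], hy - ball[1]) < 80:
                ball += 0.2 * (ball - np.array([hx, hy]))
    cv2.circle(frame, TARGET, RADIUS, (0, 255, 0), 2)
    cv2.circle(frame, (int(ball[0]), int(ball[1])), 20, (0, 0, 255), -1)
    if np.hypot(*(ball - TARGET)) < RADIUS:
        cv2.putText(frame, "Goal!", (30, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    cv2.imshow("exercise game", frame)
    if cv2.waitKey(30) & 0xFF == 27:            # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```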

Keywords: Computer vision, video games, gerontology technology, caregiving.

1353 AI Tutor: A Computer Science Domain Knowledge Graph-Based QA System on JADE platform

Authors: Yingqi Cui, Changran Huang, Raymond Lee

Abstract:

In this paper, we propose an AI Tutor that uses ontology and natural language processing techniques to generate a computer science domain knowledge graph and answer users' questions based on that graph. We define eight relation types for extracting relationships between entities from computer science domain text. The AI Tutor is separated into two agents, a learning agent and a question-answering (QA) agent, and is developed on the JADE multi-agent platform. The learning agent is responsible for reading text, extracting information, and generating the corresponding knowledge graph using the defined patterns. The QA agent understands users' questions and answers them based on the knowledge graph generated by the learning agent.
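
A minimal sketch of the two-agent split described above, reduced to plain Python functions: a learning step that extracts triples with one assumed "is-a" pattern (the paper defines eight relation types, which are not reproduced here) and a QA step that answers by graph lookup. The pattern, entity names, and phrasing are illustrative assumptions, and the JADE agent infrastructure is omitted:

```python
import re
from collections import defaultdict

# Illustrative pattern for one assumed relation type: "X is a Y".
IS_A = re.compile(r"(?P<head>[\w ]+?) is an? (?P<tail>[\w ]+)")

def learn(texts):
    """Learning-agent step: extract (head, relation, tail) triples into a graph."""
    graph = defaultdict(list)
    for sentence in texts:
        m = IS_A.search(sentence)
        if m:
            graph[m["head"].strip().lower()].append(("is_a", m["tail"].strip().lower()))
    return graph

def answer(graph, question):
    """QA-agent step: look the entity up in the graph and phrase a reply."""
    m = re.search(r"what is an? ([\w ]+)\?", question.lower())
    if m:
        entity = m.group(1).strip()
        for rel, tail in graph.get(entity, []):
            if rel == "is_a":
                return f"{entity} is a {tail}."
    return "I don't know yet."

kg = learn(["Stack is a linear data structure.", "Quicksort is a sorting algorithm."])
print(answer(kg, "What is a stack?"))   # -> "stack is a linear data structure."
```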

Keywords: Artificial intelligence, natural language process, knowledge graph, agent, QA system.

1352 Public Economic Efficiency and Case-Based Reasoning: A Theoretical Framework to Police Performance

Authors: Javier Parra-Domínguez, Juan Manuel Corchado

Abstract:

At present, public efficiency is a concept that aims to maximize the return on public investment by minimizing the use of resources and maximizing outputs. The concept takes into account statistical criteria drawn up using techniques such as DEA (Data Envelopment Analysis). The purpose of the current work is to consider, more precisely, the theoretical application of CBR (Case-Based Reasoning), from economics and computer science, as a preliminary step to improving the efficiency of law enforcement agencies (the public sector). With the aim of increasing the efficiency of the public sector, we have entered a phase whose main objective is the implementation of new technologies. Our main conclusion is that the application of computer techniques such as CBR has become key to the efficiency of the public sector, which continues to require economic valuation based on methodologies such as DEA. As a theoretical result and conclusion, the incorporation of CBR systems will reduce the number of inputs and, theoretically, increase the number of outputs generated on the basis of previous computational knowledge.
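
A minimal sketch of the CBR retrieval step that such a system relies on: the most similar past cases are recalled and their recorded outcomes reused. The feature set, distance measure, and labels below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def retrieve(case_base, outcomes, query, k=3):
    """Case-based reasoning retrieval step: return the outcomes of the k cases
    most similar to the query, using Euclidean distance over case features."""
    cases = np.asarray(case_base, dtype=float)
    distances = np.linalg.norm(cases - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(distances)[:k]
    return [outcomes[i] for i in nearest]

# Illustrative usage: cases described by (officers deployed, incidents handled).
cases = [(10, 120), (25, 300), (12, 150), (30, 280)]
results = ["efficient", "efficient", "inefficient", "inefficient"]
print(retrieve(cases, results, query=(11, 130), k=2))
```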

Keywords: Case-based reasoning, knowledge, police, public efficiency.

1351 Plant Layout Analysis by Computer Simulation for Electronic Manufacturing Service Plant

Authors: Visuwan D., Phruksaphanrat B.

Abstract:

In this research, computer simulation is used for Electronic Manufacturing Service (EMS) plant layout analysis. The current layout of this manufacturing plant is a process layout, which is not suitable given the high-volume, high-variety environment of an EMS; quick response and high flexibility are also needed. A cellular manufacturing layout was therefore designed for the selected group of products. Systematic layout planning (SLP) was used to analyze and design the possible cellular layouts for the factory, and the cellular layout was selected based on the main criteria of the plant. Computer simulation was used to analyze and compare the performance of the proposed cellular layout and the current layout. It was found that the proposed cellular layout can generate better performance than the current layout.

Keywords: Layout, Electronic Manufacturing Service Plant (EMS), Computer Simulation, Cellular Manufacturing System (CMS).

1350 Weka Based Desktop Data Mining as Web Service

Authors: Sujala.D.Shetty, S.Vadivel, Sakshi Vaghella

Abstract:

Data mining is the process of sifting through large volumes of data, analyzing data from different perspectives, and summarizing it into useful information. One of the widely used desktop applications for data mining is the Weka tool, which is a collection of machine learning algorithms implemented in Java and open-sourced under the General Public License (GPL). A web service is a software system designed to support interoperable machine-to-machine interaction over a network using SOAP messages. Unlike a desktop application, a web service is easy to upgrade, deliver, and access, and does not occupy any memory on the system. Keeping in mind the advantages of a web service over a desktop application, this paper demonstrates how this Java-based desktop data mining application can be implemented as a web service to support data mining across the Internet.

Keywords: desktop application, Weka mining, web service

1349 Content-Based Color Image Retrieval Based On 2-D Histogram and Statistical Moments

Authors: Khalid Elasnaoui, Brahim Aksasse, Mohammed Ouanan

Abstract:

In this paper, we are interested in the problem of finding similar images in a large database. For this purpose, we propose a new algorithm based on a combination of 2-D histogram intersection in the HSV space and statistical moments. The proposed histogram is based on a 3x3 window and not only on the intensity of the pixel. This approach overcomes the drawback of the conventional 1-D histogram, which ignores the spatial distribution of pixels in the image, while the statistical moments are used to mitigate the effects of the color-space discretisation that is intrinsic to the use of histograms. We compare the performance of our new algorithm to various state-of-the-art methods and show that it has several advantages: it is fast, consumes little memory, and requires no learning. To validate our results, we apply this algorithm to search for similar images in different image databases.
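
A simplified sketch of the signature-and-similarity idea: a joint hue-saturation histogram compared by histogram intersection, blended with a distance over channel moments. It uses a per-pixel histogram rather than the paper's 3x3-window construction, and the bin counts and blending weight are illustrative assumptions:

```python
import numpy as np

def hsv_signature(hsv, bins=8):
    """Joint 2-D hue/saturation histogram plus per-channel mean and std moments."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    hist, _, _ = np.histogram2d(h.ravel(), s.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    hist /= hist.sum()                                   # normalise to a distribution
    moments = np.array([h.mean(), s.mean(), v.mean(), h.std(), s.std(), v.std()])
    return hist, moments

def similarity(sig_a, sig_b, alpha=0.7):
    """Blend histogram intersection with a moment-distance term."""
    hist_a, mom_a = sig_a
    hist_b, mom_b = sig_b
    intersection = np.minimum(hist_a, hist_b).sum()      # 1.0 means identical histograms
    moment_score = 1.0 / (1.0 + np.linalg.norm(mom_a - mom_b))
    return alpha * intersection + (1 - alpha) * moment_score

# Illustrative usage on two random "images" already converted to HSV in [0, 1].
rng = np.random.default_rng(1)
img1, img2 = rng.random((2, 64, 64, 3))
print(similarity(hsv_signature(img1), hsv_signature(img2)))
```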

Keywords: 2-D histogram, Statistical moments, Indexing, Similarity distance, Histograms intersection.

1348 Improving Quality of Business Networks for Information Systems

Authors: Hazem M. El-Bakry, Ahmed Atwan

Abstract:

Computer networks are an essential part of computer-based information systems, and their performance has a great influence on the whole information system. Measuring usability criteria and customer satisfaction with a small computer network is therefore very important. In this article, an effective approach for measuring the usability of a business network in an information system is introduced. The usability process for networking provides a flexible and cost-effective way to assess the usability of a network and its products. In addition, the proposed approach can be used to certify network product usability late in the development cycle. Furthermore, it can help in developing usable interfaces very early in the cycle and gives a way to measure, track, and improve usability. Moreover, a new approach for fast information processing over computer networks is presented. The entire data set is collected into a long vector and then tested as one input pattern. The proposed fast time delay neural networks (FTDNNs) use cross-correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented time delay neural networks is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.
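
A minimal sketch of the core speed-up idea: the cross-correlation between a data vector and a weight vector is computed in the frequency domain via the FFT and checked against the direct time-domain computation. The vector sizes are illustrative; this is not the authors' full FTDNN implementation:

```python
import numpy as np

def xcorr_freq(signal, weights):
    """Cross-correlation via the frequency domain:
    IFFT( FFT(signal) * conj(FFT(weights)) ) with zero-padding to full length."""
    n = len(signal) + len(weights) - 1
    spectrum = np.fft.fft(signal, n) * np.conj(np.fft.fft(weights, n))
    return np.real(np.fft.ifft(spectrum))

def xcorr_time(signal, weights):
    """Reference cross-correlation computed directly in the time domain."""
    return np.correlate(signal, weights, mode="full")

rng = np.random.default_rng(0)
data, w = rng.random(4096), rng.random(64)
freq, time = xcorr_freq(data, w), xcorr_time(data, w)
# Same values; the FFT result starts at lag 0, so realign before comparing.
print(np.allclose(np.roll(freq, len(w) - 1), time))   # -> True
```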

Keywords: Usability Criteria, Computer Networks, Fast Information Processing, Cross Correlation, Frequency Domain.

1347 Design of Wireless Readout System for Resonant Gas Sensors

Authors: S. Mohamed Rabeek, Mi Kyoung Park, M. Annamalai Arasu

Abstract:

This paper presents the design of a wireless readout system for tracking the frequency shift of a polymer-coated piezoelectric microelectromechanical resonator due to gas absorption. The magnitude of this frequency shift indicates the percentage of the particular gas the sensor is exposed to. It is measured using an oscillator and an FPGA-based frequency counter, with the resonator employed as the frequency-determining element of the oscillator. The system consists of a Gas Sensing Wireless Readout (GSWR) and a USB Wireless Transceiver (UWT). The GSWR consists of an oscillator based on a trans-impedance sustaining amplifier, an FPGA-based frequency readout, a sub-1 GHz wireless transceiver, and a microcontroller. The UWT plugs into the computer's USB port and functions as a wireless module that transfers gas sensor data from the GSWR to the computer. A GUI program running on the computer periodically polls for sensor data over the UWT-GSWR wireless link; the response from the GSWR is logged to a file for post-processing and displayed on screen.
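
A minimal sketch of the host-side polling-and-logging behaviour, assuming the UWT appears as a serial port on the computer; the port name, baud rate, poll command, and log format are illustrative assumptions, not the paper's protocol:

```python
import time
import serial   # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200       # illustrative UWT settings
POLL_PERIOD_S = 1.0

with serial.Serial(PORT, BAUD, timeout=2) as uwt, open("gas_log.csv", "a") as log:
    for _ in range(60):                               # poll for about one minute
        uwt.write(b"READ\n")                          # hypothetical poll command
        reply = uwt.readline().decode(errors="ignore").strip()
        if reply:                                     # e.g. a frequency count in Hz
            line = f"{time.time():.1f},{reply}"
            log.write(line + "\n")
            log.flush()
            print(line)                               # on-screen display
        time.sleep(POLL_PERIOD_S)
```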

Keywords: Gas sensor, GSWR, micro-mechanical system, UWT, volatile emissions.

1346 Test Data Compression Using a Hybrid of Bitmask Dictionary and 2n Pattern Runlength Coding Methods

Authors: C. Kalamani, K. Paramasivam

Abstract:

In VLSI design, testing plays an important role. The major problems in testing are test data volume and test power, and an important solution for reducing test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2n pattern run-length coding methods and provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. The method has been implemented using MATLAB and an HDL to reduce test data volume and memory requirements. It is applied to various benchmark test sets, and the results are compared with other existing methods. The proposed technique can achieve a compression ratio of up to 86%.
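
A minimal sketch of the run-length half of such a hybrid scheme: consecutive identical fixed-size patterns in the test vector are collapsed into (count, pattern) pairs. The pattern width, the string representation, and the omission of the bitmask dictionary stage are simplifications for illustration:

```python
def pattern_run_length_encode(bits, pattern_len=4):
    """Split the test vector into fixed-size patterns and encode each run of
    identical consecutive patterns as (repeat_count, pattern)."""
    patterns = [bits[i:i + pattern_len] for i in range(0, len(bits), pattern_len)]
    encoded, i = [], 0
    while i < len(patterns):
        run = 1
        while i + run < len(patterns) and patterns[i + run] == patterns[i]:
            run += 1
        encoded.append((run, patterns[i]))
        i += run
    return encoded

test_vector = "0000" * 6 + "1010" + "1111" * 3
print(pattern_run_length_encode(test_vector))
# -> [(6, '0000'), (1, '1010'), (3, '1111')]
```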

Keywords: Bitmask dictionary, 2n pattern run-length code, system-on-chip (SOC), test data compression.

1345 Performance Evaluation of Popular Hash Functions

Authors: Sheena Mathew, K. Poulose Jacob

Abstract:

This paper describes the results of an extensive study and comparison of the popular hash functions SHA-1, SHA-256, RIPEMD-160, and RIPEMD-320 with JERIM-320, a 320-bit hash function. The compression functions of hash functions like SHA-1 and SHA-256 are designed using serial successive iteration, whereas those of RIPEMD-160 and RIPEMD-320 are designed using two parallel lines of message processing. JERIM-320 uses four parallel lines of message processing, resulting in a higher level of security than the other hash functions at comparable speed and memory requirements. The performance evaluation of these methods has been done both by practical implementation and by step-computation analysis. JERIM-320 proves to be secure and ensures message integrity to a higher degree. The focus of this work is to establish JERIM-320 as an alternative to present-day hash functions for fast-growing Internet applications.
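
A minimal sketch of one practical way to measure hash throughput with Python's hashlib, shown only for SHA-1 and SHA-256; RIPEMD-160/320 and JERIM-320 are not among hashlib's guaranteed algorithms, and the buffer size and round count are arbitrary choices for illustration:

```python
import hashlib
import time

def throughput(name, data, rounds=200):
    """Hash the same buffer repeatedly and report MB/s for the named algorithm."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(name, data).digest()
    elapsed = time.perf_counter() - start
    return len(data) * rounds / elapsed / 1e6

payload = b"\x00" * (1 << 20)                     # 1 MiB message
for algo in ("sha1", "sha256"):                   # RIPEMD support depends on the OpenSSL build
    print(f"{algo}: {throughput(algo, payload):.1f} MB/s")
```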

Keywords: Cryptography, Hash function, JERIM-320, Message integrity

1344 SMART: Solution Methods with Ants Running by Types

Authors: Nicolas Zufferey

Abstract:

Ant algorithms are well-known metaheuristics which have been widely used for two decades. In most of the literature, an ant is a constructive heuristic able to build a solution from scratch; however, other types of ant algorithms have recently emerged, so the discussion is not limited to the common framework of constructive ant algorithms. Generally, at each generation of an ant algorithm, each ant builds a solution step by step by adding an element to it. Each choice is based on the greedy force (also called the visibility, the short-term profit, or the heuristic information) and the trail system (a central memory which collects historical information about the search process). Usually, all the ants of the population have the same characteristics and behaviors. In contrast, this paper proposes a new type of ant metaheuristic, namely SMART (Solution Methods with Ants Running by Types). It relies on the use of different populations of ants, where each population has its own personality.
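
A minimal sketch of the construction step common to these algorithms, where an ant's next choice is drawn with probability proportional to the trail raised to alpha times the visibility raised to beta; the symbols and values follow the usual ant-system convention and are used here only for illustration, since the paper's typed populations would each use their own parameterisation:

```python
import random

def choose_next(candidates, pheromone, visibility, alpha=1.0, beta=2.0):
    """One ant construction step: pick the next element with probability
    proportional to trail^alpha * visibility^beta."""
    weights = [pheromone[c] ** alpha * visibility[c] ** beta for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Illustrative usage: three candidate elements with trail and greedy-force values.
trail = {"a": 0.8, "b": 0.3, "c": 0.5}
greedy = {"a": 1.0, "b": 2.0, "c": 0.7}
random.seed(0)
print(choose_next(["a", "b", "c"], trail, greedy))
```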

Keywords: Optimization, Metaheuristics, Ant Algorithms, Evolutionary Procedures, Population-Based Methods.

1343 Grid-based Supervised Clustering - GBSC

Authors: Pornpimol Bungkomkhun, Surapong Auwatanamongkol

Abstract:

This paper presents a supervised clustering algorithm, namely Grid-Based Supervised Clustering (GBSC), which is able to identify clusters of any shape and size without presuming any canonical form for the data distribution. The GBSC needs no prespecified number of clusters, is insensitive to the order of the input data objects, and is capable of handling outliers. Built on a combination of grid-based clustering and density-based clustering, and assisted by the downward closure property of density used in bottom-up subspace clustering, the GBSC can notably reduce its search space to avoid memory confinement during its execution. On two-dimensional synthetic datasets, the GBSC identifies clusters with different shapes and sizes correctly. The GBSC also outperforms five other supervised clustering algorithms in experiments performed on several UCI datasets.

Keywords: supervised clustering, grid-based clustering, subspace clustering

1342 A Recommender System Fusing Collaborative Filtering and User’s Review Mining

Authors: Seulbi Choi, Hyunchul Ahn

Abstract:

The collaborative filtering (CF) algorithm has been widely used in recommender systems in both academic and practical applications. It basically generates recommendation results using users' numeric ratings. However, additionally using information other than user ratings may lead to better accuracy of CF. Since the advent of Web 2.0, many people share their honest opinions on items they have recently purchased, so user reviews can be regarded as a new informative source for accurately identifying user preferences. Against this background, this study presents a hybrid recommender system that fuses CF and user review mining. Our system adopts conventional memory-based CF, but it is designed to use both users' numeric ratings and their text reviews of the items when calculating similarities between users.
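
A minimal sketch of the similarity blend at the heart of such a hybrid memory-based CF system, combining cosine similarity over numeric ratings with a simple Jaccard overlap of review words; the weighting and the word-overlap measure stand in for the paper's review-mining step and are illustrative assumptions:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors, 0 if either is all zeros."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu and nv else 0.0

def hybrid_similarity(ratings_a, ratings_b, review_a, review_b, weight=0.5):
    """Blend rating-based similarity with a crude review-overlap similarity."""
    rating_sim = cosine(ratings_a, ratings_b)
    words_a, words_b = set(review_a.lower().split()), set(review_b.lower().split())
    review_sim = len(words_a & words_b) / len(words_a | words_b) if words_a | words_b else 0.0
    return weight * rating_sim + (1 - weight) * review_sim

# Illustrative usage: ratings over the same five items plus one review per user.
u1 = np.array([5, 3, 0, 4, 0], dtype=float)
u2 = np.array([4, 0, 0, 5, 1], dtype=float)
print(hybrid_similarity(u1, u2, "great battery and screen", "battery lasts long, great screen"))
```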

Keywords: Recommender system, collaborative filtering, text mining, review mining.

1341 Remarks Regarding Queuing Model and Packet Loss Probability for the Traffic with Self-Similar Characteristics

Authors: Mihails Kulikovs, Ernests Petersons

Abstract:

Network management techniques have long been of interest to the networking research community. The queue size plays a critical role in network performance: an adequate queue size maintains Quality of Service (QoS) requirements within limited network capacity for as many users as possible. Appropriate estimation of the queuing model parameters is crucial both for the initial size estimation and during resource allocation. An accurate resource allocation model for the management system increases network utilization. The present paper demonstrates the results of empirical observation of memory allocation for packet-based services.

Keywords: Queuing System, Packet Loss Probability, Measurement-Based Admission Control (MBAC), Performance evaluation, Quality of Service (QoS).

1340 Comanche – A Compiler-Driven I/O Management System

Authors: Wendy Zhang, Ernst L. Leiss, Huilin Ye

Abstract:

Most scientific programs have large input and output data sets that require out-of-core programming or the use of virtual memory management (VMM). Out-of-core programming is very error-prone and tedious; as a result, it is generally avoided. However, in many instances, VMM is not an effective approach because it often results in a substantial performance reduction. In contrast, compiler-driven I/O management allows a program's data sets to be retrieved in parts, called blocks or tiles. Comanche (COmpiler MANaged caCHE) is a compiler combined with a user-level runtime system that can be used to replace standard VMM for out-of-core programs. We describe Comanche and demonstrate on a number of representative problems that it substantially outperforms VMM. Significantly, our system requires neither special services from the operating system nor modification of the operating system kernel.
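
A minimal sketch of the tiling idea that compiler-driven I/O management automates: instead of letting VMM page an out-of-core data set arbitrarily, the data are read in explicit blocks (tiles) that fit in memory. Here this is done by hand in Python with an assumed file name; Comanche itself performs this transformation at the compiler/runtime level for scientific programs:

```python
import numpy as np

def tiled_sum(path, shape, tile_rows=1024, dtype=np.float64):
    """Process a large on-disk matrix tile by tile instead of relying on VMM:
    only one block of rows is resident in memory at a time."""
    data = np.memmap(path, dtype=dtype, mode="r", shape=shape)
    total = 0.0
    for start in range(0, shape[0], tile_rows):
        tile = np.array(data[start:start + tile_rows])   # explicitly load one tile
        total += tile.sum()
    return total

# Illustrative usage: create a small on-disk array, then stream over it in tiles.
arr = np.memmap("big.dat", dtype=np.float64, mode="w+", shape=(4096, 256))
arr[:] = 1.0
arr.flush()
print(tiled_sum("big.dat", (4096, 256)))                  # -> 1048576.0
```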

Keywords: I/O Management, Out-of-core, Compiler, Tile mapping.

1339 Understanding ICT Behaviors among Health Workers in Sub-Saharan Africa: A Cross-Sectional Study for Laboratory Persons in Uganda

Authors: M. Kasusse, M. Rosette, E. Burke, C. Mwangi, R. Batamwita, N. Tumwesigye, S. Aisu

Abstract:

A cross-sectional survey to ascertain the capacity of laboratory persons to use ICTs was conducted in 15 Ugandan districts (July-August 2013). A self-administered questionnaire served as the data collection tool, together with an interview guide and an observation checklist. 69 questionnaires were filled in, 12 interviews were conducted, and 45 health centres (HC) were observed. SPSS Statistics 17.0 and SAS 9.2 were used for data entry and analyses. 69.35% of participants find it difficult to access a computer at work. Of the 30.65% who find it easy to access a computer at work, a significant 21.05% spend 0 hours on a computer daily. 60% of the participants cannot access the internet at work. Of the 40% who have internet access at work, a significant 20% lack an email address, while 20% read emails weekly and 48% daily. It is feasible to pilot informatics projects as strategies to build bridges and develop skills for the e-health landscape in laboratory services, given stronger financial backing.

Keywords: ICT Behavior, Clinical Laboratory persons, Sub-Saharan Africa, Uganda.

1338 Data Traffic Dynamics and Saturation on a Single Link

Authors: Reginald D. Smith

Abstract:

The dynamics of User Datagram Protocol (UDP) traffic over Ethernet between two computers are analyzed using nonlinear dynamics, which shows that there are two clear regimes in the data flow: free flow and saturated. The two most important variables affecting this are the packet size and the packet flow rate. However, this transition is due to a transcritical bifurcation rather than a phase transition, as in models of vehicle traffic or theorized large-scale computer network congestion. It is hoped this model will help lay the groundwork for further research on the dynamics of networks, especially computer networks.

Keywords: congestion, packet flow, Internet, traffic dynamics, transcritical bifurcation

1337 Recycling-Oriented Product Assessment during Design Process with Usage of Agent Technology

Authors: Ewa Dostatni, Jacek Diakun, Damian Grajewski, Radosław Wichniarek, Anna Karwasz

Abstract:

This paper describes a method of product analysis from the recycling point of view. The analysis is based on a set of measures that assess a product with respect to the final stages of its lifecycle. It is assumed that such an analysis will be performed at the design phase; in order to conduct it, a computer system that aids the designer during the design process has been developed. The structure of the computer tool, based on agent technology, and example results are also included in the paper.

Keywords: Agent technology, PLM, design for environment, ecodesign.

1336 Harrison’s Stolen: Addressing Aboriginal and Indigenous Islanders Human Rights

Authors: M. Shukry

Abstract:

According to the United Nations Universal Declaration of Human Rights of 1948, every human being is entitled to rights in life that should be respected by others and protected by the state and community. Such rights are inherent regardless of colour, ethnicity, gender, religion or otherwise, and it is expected that all humans alike have the right to live without discrimination of any sort. However, that has not been the case for Aborigines in Australia. Over a long period of time, the governments of the States and Territories and the Australian Commonwealth denied the Aboriginal and Indigenous inhabitants of the Torres Strait Islands such rights. Past Australian governments set policies and laws that enabled them to forcefully remove Indigenous children from their parents, which resulted in lost generations living with the trauma of the loss of cultural identity, alienation and even their own selfhood. Intending to reduce the native population and Aboriginal culture while assimilating the children into mainstream society, they gave themselves the right to remove them from their families with no hope of return. That practice has led to tragic consequences due to the trauma inflicted on those children, an experience depicted by Jane Harrison in her play Stolen. The drama is the outcome of a six-year project on lost children and was first performed in 1997 in Melbourne. Only five actors appear on the stage, playing all the different characters, whether the main protagonists or the remaining cast, present or represented only as voices. The play outlines the lives of five children who were taken from their parents at an early age, with a disastrous negative impact that differs from one child to another. Unknown to each other, what connects them is being put in a children's home. The purpose of this paper is to analyse the play's text in light of the 1948 Declaration of Human Rights, using it as a lens that reflects the atrocities practised against the Aborigines. It highlights how such practices constituted an outrageous violation of those natives' rights as human beings. Harrison's dramatic technique for conveying the children's experiences is a non-linear structure, fluctuating between past and present, which are linked together within each of the five characters, reflecting their suffering and pain to create an emotional link between them and the audience. Her dramatic handling of the issue by fusing tragedy with humour as well as symbolism is a successful technique for revealing the traumatic memory of those children and their present life. The play has made a difference in commencing to address the problem of the right of all children to be with their families, which renders the real meaning of having a home and an identity as people.

Keywords: Aboriginal, audience, Australia, children, culture, drama, home, human rights, identity, indigenous, Jane Harrison, memory, scenic effects, setting, stage, stage directions, Stolen, trauma.

1335 Real-Time Image Analysis of Capsule Endoscopy for Bleeding Discrimination in Embedded System Platform

Authors: Yong-Gyu Lee, Gilwon Yoon

Abstract:

Image processing for capsule endoscopy requires large memory, and diagnosis takes hours since the operation time is normally more than 8 hours. A real-time analysis algorithm for capsule images can therefore be clinically very useful: it can differentiate abnormal tissue from healthy structures and provide correlation information among the images. Bleeding is our interest in this regard, and we propose a method of detecting frames with potential bleeding in real time. Our detection algorithm is based on statistical analysis and the shapes of bleeding spots. We tested our algorithm with 30 cases of capsule endoscopy of the digestive tract. Results were excellent: a sensitivity of 99% and a specificity of 97% were achieved in detecting the image frames with bleeding spots.
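
A minimal sketch of the kind of colour-statistics screen such real-time bleeding detection builds on: it flags frames in which an unusually large fraction of pixels is strongly red-dominant. The thresholds and the red-ratio test are illustrative assumptions and omit the authors' shape analysis:

```python
import numpy as np

def looks_bleeding(frame_rgb, red_ratio=1.6, min_fraction=0.02):
    """Flag a frame as potentially containing bleeding when a sufficient fraction
    of pixels is strongly red-dominant (red channel well above green and blue)."""
    r = frame_rgb[..., 0].astype(float)
    g = frame_rgb[..., 1].astype(float) + 1.0
    b = frame_rgb[..., 2].astype(float) + 1.0
    red_dominant = (r / g > red_ratio) & (r / b > red_ratio)
    return red_dominant.mean() > min_fraction

# Illustrative usage: a synthetic frame with a small bright-red patch.
frame = np.full((256, 256, 3), 120, dtype=np.uint8)
frame[100:140, 100:140] = (200, 40, 40)
print(looks_bleeding(frame))   # -> True
```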

Keywords: bleeding, capsule endoscopy, image processing, real time analysis

1334 The use of a Bespoke Computer Game For Teaching Analogue Electronics

Authors: Olaf Hallan Graven, Dag Andreas Hals Samuelsen

Abstract:

An implementation of a design for a game-based virtual learning environment is described. The game is developed for a course in analogue electronics, and the topic is the design of a power supply. This task can be solved in a number of different ways, within certain constraints, giving the students a certain amount of freedom, although the game is designed not to facilitate a trial-and-error approach. The use of storytelling and a virtual gaming environment provides the student with the learning material in an MMORPG environment. The game was tested on a group of second-year electrical engineering students with good results.

Keywords: analogue electronics, e-learning, computer games for learning, virtual reality

1333 Grid-HPA: Predicting Resource Requirements of a Job in the Grid Computing Environment

Authors: M. Bohlouli, M. Analoui

Abstract:

For complete support of Quality of Service, it is better that the Grid computing environment itself predicts the resource requirements of a job using special methods. Exact and correct prediction allows exact matching of required resources with available resources. After the execution of each job, the resources used are saved in an active database named "History". First, some attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar previously executed job is retrieved from "History", and the resource requirements are predicted using statistical methods such as linear regression or averaging. The new idea in this research is based on an active database and centralized history maintenance. Implementation and testing of the proposed architecture yield prediction accuracies of 96.68% for CPU usage, 91.29% for memory usage, and 89.80% for bandwidth usage.
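
A minimal sketch of the history-based prediction step: the usage recorded for the most similar past jobs is averaged to predict the new job's requirements. The job attributes, similarity measure, and averaging rule are illustrative assumptions standing in for the paper's defined similarity algorithm and regression option:

```python
import numpy as np

def predict_resources(history, job, k=3):
    """Predict a job's resource usage by averaging the recorded usage of the k
    most similar jobs in the history database (similarity = distance over attributes)."""
    attrs = np.array([h["attrs"] for h in history], dtype=float)
    usage = np.array([h["usage"] for h in history], dtype=float)   # cpu%, mem MB, bw Mbps
    dist = np.linalg.norm(attrs - np.asarray(job, dtype=float), axis=1)
    nearest = np.argsort(dist)[:k]
    return usage[nearest].mean(axis=0)

# Illustrative usage: attrs = (input size MB, estimated runtime min), usage = (cpu%, mem MB, bw Mbps).
history = [
    {"attrs": (100, 10), "usage": (60, 512, 5)},
    {"attrs": (110, 12), "usage": (65, 600, 6)},
    {"attrs": (500, 60), "usage": (90, 2048, 20)},
    {"attrs": (480, 55), "usage": (88, 1900, 18)},
]
print(predict_resources(history, job=(105, 11), k=2))   # ~ [62.5, 556, 5.5]
```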

Keywords: Active Database, Grid Computing, Resource Requirement Prediction, Scheduling.

1332 The Use of Software and Internet Search Engines to Develop the Encoding and Decoding Skills of a Dyslexic Learner: A Case Study

Authors: Rabih Joseph Nabhan

Abstract:

This case study explores the impact of two major computer software programs, Learn to Speak English and Learn English Spelling and Pronunciation, and of Internet search engines such as Google, on mending the decoding and spelling deficiency of Simon X, a dyslexic student. The improvement in decoding and spelling may result in better reading comprehension and composition writing. Some computer programs and Internet materials can help regain the missing awareness and consequently restore his self-confidence and self-esteem. In addition, this study provides a systematic plan comprising a set of activities (four computer programs and Internet materials) which address the problem from the lowest to the highest levels of phonemic and phonological awareness. Four methods of data collection (accounts, observations, published tests, and interviews) provide the triangulation needed to collect data validly and reliably before, during, and after the plan. The data collected are analyzed quantitatively and qualitatively; sometimes the analysis is either quantitative or qualitative, and at other times a combination of both. Tables and figures are used to provide a clear and uncomplicated illustration of some of the data. The improvement in decoding, spelling, reading comprehension, and composition writing skills that occurred is demonstrated through authentic materials produced by the student under study. Such materials include a comparison between two sample passages written by the learner before and after the plan, a genuine computer chat conversation, and the scores of the academic year that followed the execution of the plan. Based on these results, the researcher recommends further studies on other Lebanese dyslexic learners who use the computer to mend their language problems, in order to design a more reliable software program that can address this disability more efficiently and successfully.

Keywords: Analysis, awareness, dyslexic, software.
