Search results for: Computer Forensics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2354

1934 Simulation of Optimum Sculling Angle for Adaptive Rowing

Authors: Pornthep Rachnavy

Abstract:

The purpose of this paper is twofold. First, we propose that there is a significant relationship between sculling angle and sculling style in adaptive rowing. Second, we introduce simulation as a methodology for assessing the effectiveness of adaptive rowing. We simulate the arms-only single scull event of adaptive rowing. The sculling angle that produces the fastest time over 1000 meters was investigated using simulation modelling. A simulation model of the rowing system was developed in Matlab based on equations of motion that incorporate the variables governing boat motion, such as oar length, blade velocity and sculling style. Boat speed, power and energy consumption were computed, and the model can also predict the force acting on the boat. The optimum sculling angle was determined by computer simulation. Inputs to the model are the sculling style of each rower and the sculling angle; the output is the boat velocity at 1000 meters. The present study suggests that an optimum sculling angle exists and that it depends on sculling style. The optimum angles for blade entry and release, measured with respect to the perpendicular through the pin, are -57.00 and 22.00 degrees for the first style, -57.00 and 22.00 degrees for the second style, -51.57 and 28.65 degrees for the third style, and -45.84 and 34.38 degrees for the fourth style. A theoretical simulation of rowing has been developed and presented. The results suggest that it may be advantageous for rowers to select sculling angles appropriate to their sculling styles; the optimum sculling angles depend on the sculling style of each rower. The findings of this paper can be summarized in three points: 1. An optimum sculling angle exists in the arms-only single scull of adaptive rowing. 2. The optimum sculling angle depends on the sculling style. 3. Computer simulation of rowing can identify opportunities for improving rowing performance by utilizing the kinematic description of rowing. The freedom to explore alternatives in speed, thrust and timing with the computer simulation provides the coach with a tool for systematic assessment of rowing technique. In addition, the ability to use the computer to examine the very complex movements of rowing will help both the rower and the coach to conceptualize components of movement that may previously have been unclear or even undefined.
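
As a rough illustration of the kind of simulation described (and not the authors' Matlab model), the sketch below sweeps the candidate blade entry/release angle pairs reported above in a highly simplified single-scull propulsion model and reports which pair covers 1000 meters fastest; the mass, drag, force and stroke-timing constants are placeholder assumptions.

    # A minimal sketch (not the authors' Matlab model): sweep blade entry/release
    # angles in a toy single-scull propulsion model and pick the pair that covers
    # 1000 m fastest. All physical constants are illustrative placeholders.
    import numpy as np

    MASS, DRAG, F_PEAK, STROKE_T, DIST = 100.0, 3.5, 300.0, 2.0, 1000.0  # kg, kg/m, N, s, m

    def race_time(entry_deg, release_deg, dt=0.01):
        v, x, t = 0.0, 0.0, 0.0
        while x < DIST and t < 600.0:
            phase = (t % STROKE_T) / STROKE_T           # position within one stroke cycle
            angle = entry_deg + phase * (release_deg - entry_deg)
            drive = phase < 0.6                         # drive vs. recovery part of the stroke
            force = F_PEAK * np.cos(np.radians(angle)) if drive else 0.0
            a = (force - DRAG * v * abs(v)) / MASS      # quadratic hull drag
            v, x, t = v + a * dt, x + v * dt, t + dt
        return t

    candidates = [(-57.0, 22.0), (-51.57, 28.65), (-45.84, 34.38)]
    best = min(candidates, key=lambda pair: race_time(*pair))
    print("fastest candidate entry/release pair:", best)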

Keywords: simulation, sculling, adaptive, rowing

Procedia PDF Downloads 443
1933 Digital Musical Organology: The Audio Games: The Question of “A-Musicological” Interfaces

Authors: Hervé Zénouda

Abstract:

This article seeks to shed light on an emerging creative field: "audio games," at the crossroads between video games and computer music. Many applications that offer entertaining audio-visual experiences with the objective of musical creation are available today on different platforms (game consoles, computers, cell phones). The originality of this field lies in applying the gameplay of video games to music composition. Thus, composing music using interfaces, and also cognitive logics, that we qualify as "a-musicological" seems to us particularly interesting from the perspective of digital musical organology. This field raises questions about the representation of sound and musical structures and develops new instrumental gestures and strategies of musical composition. In this article, we attempt to define the characteristics of this field by highlighting some historical milestones (abstract cinema, game theory in music, action and graphic scores) as well as the novelties brought by digital technologies.

Keywords: audio-games, video games, computer generated music, gameplay, interactivity, synesthesia, sound interfaces, relationships image/sound, audiovisual music

Procedia PDF Downloads 92
1932 Examining Relationship between Programming Performance, Programming Self Efficacy and Math Success

Authors: Mustafa Ekici, Sacide Güzin Mazman

Abstract:

Programming is one of the abilities in computer science that students generally perceive as difficult, and various individual differences have been implicated in success at it. Although several factors that affect programming ability have been identified over the years, there is still no full understanding of why some students learn to program easily and quickly while others find it complex and difficult. Programming self-efficacy and mathematics success are two of those essential individual differences that are regarded as having an important effect on programming success. This study aimed to identify the relationship between programming performance, programming self-efficacy and mathematics success. The study group consisted of 96 undergraduates from the Department of Econometrics of Uşak University; 38 (39.58%) of the participants are female, while 58 (60.41%) are male. The study was conducted in the Programming-I course during the 2014-2015 fall term. Data collection tools comprised programming course final grades, a programming self-efficacy scale and a mathematics achievement test. Data were analyzed through correlation analysis. The results will be reported in the full text of the study.
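
As an illustration of the correlation analysis named above (the actual results are deferred to the full text), the sketch below computes Pearson correlations between programming grade, programming self-efficacy and mathematics score on synthetic values; none of the numbers are the study's data.

    # Illustrative sketch of the described correlation analysis on synthetic
    # scores (not the study's data): Pearson correlations between programming
    # grade, programming self-efficacy and mathematics achievement.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    math_score = rng.normal(60, 15, 96)
    self_efficacy = 0.5 * math_score + rng.normal(0, 10, 96)
    prog_grade = 0.4 * math_score + 0.3 * self_efficacy + rng.normal(0, 10, 96)

    for name, x in [("self-efficacy", self_efficacy), ("math success", math_score)]:
        r, p = stats.pearsonr(prog_grade, x)
        print(f"programming grade vs {name}: r = {r:.2f}, p = {p:.4f}")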

Keywords: programming performance, self efficacy, mathematic success, computer science

Procedia PDF Downloads 479
1931 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of 3 Python scripts that could all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using grayscale image conversion and binary thresholding to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results, but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings of neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
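
A condensed sketch of the contour-detection steps described in the Methods (binary thresholding of a grayscale frame, contour extraction, mean fluorescence per contour) is shown below; the threshold value and the synthetic frames are illustrative, the code assumes OpenCV 4.x, and it is not the authors' pipeline.

    # Condensed sketch of the described OpenCV steps (not the authors' pipeline):
    # threshold a grayscale frame, find cell-body contours, then take the mean
    # fluorescence inside each contour for every frame of the recording.
    import cv2
    import numpy as np

    def neuron_traces(frames, thresh=50):
        """frames: list of 2-D uint8 arrays from a calcium imaging recording."""
        _, mask = cv2.threshold(frames[0], thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        traces = []
        for c in contours:
            roi = np.zeros_like(frames[0])
            cv2.drawContours(roi, [c], -1, 255, thickness=-1)    # filled cell-body mask
            traces.append([cv2.mean(f, mask=roi)[0] for f in frames])
        return contours, np.array(traces)

    # Synthetic example: one bright "cell" whose intensity rises over time.
    frames = [np.zeros((64, 64), np.uint8) for _ in range(10)]
    for t, f in enumerate(frames):
        cv2.circle(f, (32, 32), 5, 100 + 10 * t, -1)
    contours, traces = neuron_traces(frames)
    print(len(contours), "contour(s); first trace:", traces[0])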

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 64
1930 Computer Simulation Studies of Spinel LiMn₂O₄ Nanotubes

Authors: D. M. Tshwane, R. R. Maphanga, P. E. Ngoepe

Abstract:

Nanostructured materials are attractive candidates for efficient electrochemical energy storage devices because of their unique physicochemical properties. Nanotubes have drawn continuous attention because of their unique electrical, optical and magnetic properties, in contrast to those of the bulk system, and they have potential applications in optics, electronics and energy storage devices. Introducing nanotube structures as electrode materials represents one of the most attractive strategies that could dramatically enhance battery performance. Spinel LiMn2O4 is the most promising cathode material for Li-ion batteries. In this work, computer simulation methods are used to generate and investigate the properties of spinel LiMn2O4 nanotubes. Molecular dynamics simulation is used to probe the local structure of LiMn2O4 nanotubes and the effect of temperature on these systems. It is found that the diameter, Miller indices and size directly control nanotube morphology. Furthermore, it is noted that the stability depends on the surface and the wrapping of the nanotube. The nanotube structures are described using the radial distribution function and XRD patterns, and there is a correlation between the calculated XRD and experimentally reported results.
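
As an illustration of one of the descriptors mentioned above, the following minimal sketch computes a radial distribution function g(r) for particle coordinates in a periodic cubic box; it is not the molecular dynamics code used in this work, and the configuration is a random toy example.

    # Minimal radial distribution function g(r) for particles in a cubic box with
    # periodic boundaries (illustrative only, not the MD code used in this work).
    import numpy as np

    def rdf(coords, box, dr=0.1):
        n = len(coords)
        bins = np.arange(0.0, box / 2 + dr, dr)
        counts = np.zeros(len(bins) - 1)
        for i in range(n - 1):
            d = coords[i + 1:] - coords[i]
            d -= box * np.round(d / box)               # minimum-image convention
            counts += np.histogram(np.linalg.norm(d, axis=1), bins=bins)[0]
        shell_vol = 4.0 / 3.0 * np.pi * (bins[1:] ** 3 - bins[:-1] ** 3)
        density = n / box ** 3
        g = 2.0 * counts / (n * density * shell_vol)   # each pair counted once
        return 0.5 * (bins[1:] + bins[:-1]), g

    coords = np.random.default_rng(1).uniform(0.0, 10.0, (200, 3))  # toy configuration
    r, g = rdf(coords, box=10.0)
    print(r[:3], g[:3])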

Keywords: LiMn2O4, li-ion batteries, nanotubes, nanostructures

Procedia PDF Downloads 169
1929 Biodegradation Effects onto Source Identification of Diesel Fuel Contaminated Soils

Authors: Colin S. Chen, Chien-Jung Tien, Hsin-Jan Huang

Abstract:

For weathering studies, the change in chemical constituents caused by biodegradation in diesel-contaminated soils is an important factor to consider, especially over a prolonged period of weathering. The objective was to evaluate biodegradation effects on the hydrocarbon fingerprinting and distribution patterns of diesel fuels, fuel source screening and differentiation, source-specific marker compounds, and diagnostic ratios of diesel fuel constituents through laboratory and field studies. Biodegradation processes in diesel-contaminated soils were evaluated in experiments lasting 15 and 12 months, respectively. The degradation of diesel fuel in top soils was affected by the organic carbon content and the biomass of microorganisms in the soils. Higher depletion of total petroleum hydrocarbons (TPH), n-alkanes, and polynuclear aromatic hydrocarbons (PAHs) and their alkyl homologues was observed in soils containing higher organic carbon content and biomass. Decreases in the ratios involving the selected isoprenoids pristane (Pr) and phytane (Ph), namely n-C17/pristane and n-C18/phytane, were observed. The pristane/phytane ratio remained consistent for a longer period of time; at the end of the experimental period, a decrease in pristane/phytane was observed. Biomarker compounds of bicyclic sesquiterpanes (BS) were less susceptible to the effects of biodegradation. The ratios of characteristic factors such as C15 sesquiterpane/8β(H)-drimane (BS3/BS5), C15 sesquiterpane/8β(H)-drimane (BS4/BS5), 8β(H)-drimane/8β(H)-homodrimane (BS5/BS10), and C15 sesquiterpane/8β(H)-homodrimane (BS3/BS10) could be adopted for source identification of diesel fuels in top soil. However, for biodegradation processes lasting six months but shorter than nine months, only BS3/BS5 and BS3/BS10 could distinguish the two diesel fuels. In the subsoil experiments (contaminated soil located 50 cm below the surface), the ratios of the characteristic factors BS3/BS5, BS4/BS5, and BS5/BS10 were valid for source identification of the two diesel fuels after nine months of biodegradation. At the early stage of contamination, the soil biomass decreased significantly; however, 6 and 7 dominant species were found in the two soils in the top soil experiments, respectively. With less oxygen and fewer nutrients in the subsoil, less microbial biomass was observed there, and only 2 and 4 diesel-degrading species of microorganisms were identified in the two soils, respectively. Double-ratio parameters such as fluorene/C1-fluorene : C2-phenanthrene/C3-phenanthrene (C0F/C1F:C2P/C3P) in both top soil and subsoil, C2-naphthalene/C2-phenanthrene : C1-phenanthrene/C3-phenanthrene (C2N/C2P:C1P/C3P), and C1-phenanthrene/C1-fluorene : C3-naphthalene/C3-phenanthrene (C1P/C1F:C3N/C3P) in subsoil could serve as forensic indicators at diesel-contaminated sites; BS3/BS10:BS4/BS5 could be used for 6 to 9 months of biodegradation. Results of principal component analysis (PCA) indicated that source identification of diesel fuels in top soil could only be performed for weathering processes of less than 6 months; for subsoil, identification can be conducted for weathering processes of less than 9 months. The ratios of isoprenoids (pristane and phytane) and PAHs might be affected by biodegradation at spill sites, whereas the ratios of bicyclic sesquiterpanes could serve as forensic indicators in diesel-contaminated soils. Finally, source identification was attempted for samples collected from different fuel-contaminated sites by using the unique pattern of sesquiterpanes. It is anticipated that the information generated from this study will be adopted by decision makers to evaluate the liability of cleanup at diesel-contaminated sites.
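
A minimal sketch of the PCA step used here for source screening is given below; the diagnostic-ratio values are synthetic stand-ins for two hypothetical diesel sources, not the measurements from this study.

    # Sketch of PCA-based source screening on synthetic sesquiterpane diagnostic
    # ratios (not the study's measurements): samples from two hypothetical diesel
    # sources are projected onto the first two principal components.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    # columns: BS3/BS5, BS4/BS5, BS5/BS10, BS3/BS10 (illustrative values)
    source_a = rng.normal([1.2, 0.8, 1.5, 0.9], 0.05, (10, 4))
    source_b = rng.normal([0.9, 1.1, 1.2, 1.3], 0.05, (10, 4))
    ratios = np.vstack([source_a, source_b])

    scores = PCA(n_components=2).fit_transform(ratios)
    print("source A scores:\n", scores[:3])
    print("source B scores:\n", scores[10:13])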

Keywords: biodegradation, diagnostic ratio, diesel fuel, environmental forensics

Procedia PDF Downloads 195
1928 Deployment of Information and Communication Technology (ICT) to Reduce Occurrences of Terrorism in Nigeria

Authors: Okike Benjamin

Abstract:

Terrorism is the use of violence and threats to intimidate or coerce a person, group, society or even a government, especially for political purposes. Terrorism may be a way of resisting government by groups who feel marginalized; it can also be a way of expressing displeasure over the activities of government. On 26th December 2009, the US listed Nigeria as a terrorist nation. Recently, the occurrences of terrorism in Nigeria have increased considerably. In Jos, Plateau State, Nigeria, a bomb blast claimed many lives on the eve of Christmas 2010. Similarly, there was another bomb blast at the Mugadishi (Sani Abacha) Barracks Mammy market on the eve of the 2011 New Year. For some time now, it has no longer been news that bombs explode in parts of Northern Nigeria. About 25 years ago, stopping terrorism in America relied on old-fashioned tools such as strict physical security at vulnerable places, intelligence gathering by government agents or individuals, vigilance on the part of all citizens, and a sense of community in which citizens do what they can to protect each other. Just as technology has been used to improve the way many other things are done, this powerful new weapon called computer technology can be used to detect and prevent terrorism not only in Nigeria but all over the world. This paper examines the possible causes and effects of bomb blasts, which are acts of terrorism, and suggests ways in which Explosive Detection Devices (EDDs) and computer software technology could be deployed to reduce the occurrences of terrorism in Nigeria. This has become necessary with the abduction of over 200 schoolgirls in Chibok, Borno State, from their hostel by members of the Boko Haram sect on 14th April 2014. Presently, Barack Obama and other world leaders have sent some of their military personnel to help rescue those innocent schoolgirls, whose only offence is seeking to acquire a Western education, which the sect strongly believes is forbidden.

Keywords: terrorism, bomb blast, computer technology, explosive detection devices, Nigeria

Procedia PDF Downloads 244
1927 Digital Literacy Skills for Geologist in Public Sector

Authors: Angsumalin Puntho

Abstract:

Disruptive technology has had a great influence on our everyday lives and on the existence of organizations. Geologists in the public sector need to keep up with digital technology and be able to work and collaborate more effectively. The results of SWOT and McKinsey 7S analyses suggest that there are inadequate IT personnel, no individual digital literacy development plans, and a misunderstanding of management policies. The Office of the Civil Service Commission has defined the digital literacy skills that civil servants and government officers should possess in order to work effectively; these consist of nine dimensions, including computer skills, internet skills, cyber security awareness, word processing, spreadsheets, presentation programs, online collaboration, graphics editors and cyber security practices, together with six steps of digital literacy development: self-assessment, an individual development plan, self-learning, a certified test, learning reflection, and practice. Geologists can use digital literacy as a learning tool to develop themselves for better career opportunities.

Keywords: disruptive technology, digital technology, digital literacy, computer skills

Procedia PDF Downloads 88
1926 Significance of Personnel Recruitment in Implementation of Computer Aided Design Curriculum of Architecture Schools

Authors: Kelechi E. Ezeji

Abstract:

The inclusion of relevant content in the curricula of architecture schools is vital if graduates are to attain Computer Aided Design (CAD) proficiency. Implementing this content involves, among other variables, the presence of competent tutors. Consequently, this study investigated the importance of personnel recruitment for the inclusion of content vital to the implementation of CAD in the curriculum for architecture education, with a view to developing a framework for appropriate implementation of a CAD curriculum. It focused on departments of architecture in universities in south-east Nigeria which have been accredited by the National Universities Commission. A survey research design was employed. Data were obtained from sources within the study area using questionnaires, personal interviews, physical observation/enumeration and examination of institutional documents. A multi-stage stratified random sampling method was adopted: the first stage of stratification involved random sampling of the departments by balloting, and the second stage involved obtaining the respondent population from the number of staff and students of the sampled departments. The Chi-square test for nominal variables and Pearson's product moment correlation test for interval variables were used for data analysis. With p < 0.05, the study found a significant correlation between the number of CAD-literate academic staff and the use of CAD in design studio/assignments, and found that an increase in the overall number of teaching staff significantly affected the total CAD credit units in the curriculum of a department. The implications of these findings are that, for successful implementation leading to the attainment of CAD proficiency, CAD literacy should be a factor in the recruitment of staff and a policy of in-house training should be pursued.
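
As an illustration of the Chi-square analysis named above, the sketch below runs a chi-square test of independence on a made-up contingency table relating the presence of CAD-literate staff to the use of CAD in studio assignments; the counts are not the survey data.

    # Illustrative chi-square test of independence for nominal survey variables
    # (made-up contingency table, not the study's data).
    from scipy.stats import chi2_contingency

    #                 CAD used   CAD not used
    table = [[34, 6],            # departments with CAD-literate staff
             [9, 21]]            # departments without CAD-literate staff
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")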

Keywords: computer-aided design, education, personnel recruitment, curriculum

Procedia PDF Downloads 183
1925 Over the Air Programming Method for Learning Wireless Sensor Networks

Authors: K. Sangeeth, P. Rekha, P. Preeja, P. Divya, R. Arya, R. Maneesha

Abstract:

Wireless sensor network (WSN) nodes are small or tiny devices that consist of different sensors to sense physical parameters such as air pressure, temperature, vibration and movement, process these data and send them to a central data center where decisions are taken. The WSN domain has a wide range of applications, such as monitoring and detecting natural hazards (landslides, forest fires, avalanches, floods) and healthcare applications. Because of these applications, WSNs are taught at the undergraduate/postgraduate level in many universities in departments of computer science. However, the cost and infrastructure required to purchase WSN nodes so that students can gain hands-on expertise with these devices are high. This paper gives an overview of a remote-triggered lab that consists of more than 100 WSN nodes and helps students to log in remotely from anywhere in the world using the World Wide Web, configure the nodes and learn WSN concepts in an intuitive way. It proposes a new approach, called over-the-air programming (OTAP), and describes its internals; OTAP programs the 100 nodes simultaneously and lets users view the results without the nodes being physically connected to a computer system, thereby allowing for sparse deployment.

Keywords: WSN, over the air programming, virtual lab, AT45DB

Procedia PDF Downloads 353
1924 Efficient Reconstruction of DNA Distance Matrices Using an Inverse Problem Approach

Authors: Boris Melnikov, Ye Zhang, Dmitrii Chaikovskii

Abstract:

We continue to consider one of the cybernetic methods in computational biology related to the study of DNA chains; namely, we consider the problem of reconstructing a distance matrix of DNA chains that is not fully filled. When applied in a programming context, it becomes clear that, with a modern computer of average capabilities, creating even a small distance matrix for mitochondrial DNA sequences is quite time-consuming with standard algorithms. As the size of the matrix grows, the computational effort required increases significantly, potentially spanning several weeks to months of non-stop computer processing. Hence, calculating the distance matrix on conventional computers is hardly feasible, and supercomputers are usually not available. We have therefore published our variants of algorithms for calculating the distance between two DNA chains, and then algorithms for restoring partially filled matrices, i.e., the inverse problem of matrix processing. In this paper, we propose an algorithm for restoring the distance matrix for DNA chains, with the primary focus on enhancing the algorithms that shape the greedy function within the branch and bound method framework.
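
The abstract does not reproduce the restoring algorithm itself; as a simple stand-in for the idea of completing a partially filled distance matrix, the sketch below fills unknown entries with triangle-inequality bounds. It is not the authors' greedy / branch-and-bound method.

    # Simple stand-in for distance-matrix restoration (not the authors' greedy /
    # branch-and-bound algorithm): fill each unknown entry (NaN) with the tightest
    # two-leg triangle-inequality bound available through known intermediates.
    import numpy as np

    def fill_missing(d):
        d, n = d.copy(), len(d)
        for i in range(n):
            for j in range(n):
                if np.isnan(d[i, j]):
                    bounds = [d[i, k] + d[k, j] for k in range(n)
                              if not (np.isnan(d[i, k]) or np.isnan(d[k, j]))]
                    if bounds:
                        d[i, j] = d[j, i] = min(bounds)
        return d

    dist = np.array([[0.0, 2.0, np.nan, 7.0],
                     [2.0, 0.0, 3.0, np.nan],
                     [np.nan, 3.0, 0.0, 4.0],
                     [7.0, np.nan, 4.0, 0.0]])
    print(fill_missing(dist))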

Keywords: DNA chains, distance matrix, optimization problem, restoring algorithm, greedy algorithm, heuristics

Procedia PDF Downloads 92
1923 Adjustment and Compensation Techniques for the Rotary Axes of Five-axis CNC Machine Tools

Authors: Tung-Hui Hsu, Wen-Yuh Jywe

Abstract:

Five-axis computer numerical control (CNC) machine tools (three linear and two rotary axes) are ideally suited to the fabrication of complex workpieces, such as dies, turbo blades, and cams. The locations of the axis average line and centerline of the rotary axes strongly influence the performance of these machines; however, techniques to compensate for eccentric error in the rotary axes remain weak. This paper proposes optical (Non-Bar) techniques capable of calibrating five-axis CNC machine tools and compensating for eccentric error in the rotary axes. The approach employs the measurement path in ISO/CD 10791-6 to determine the eccentric error in the two rotary axes, for which compensatory measures can then be implemented. Experimental results demonstrate that the proposed techniques can improve the performance of various five-axis CNC machine tools by more than 90%. Finally, the result of a cutting test using a B-type five-axis CNC machine tool confirmed the usefulness of the proposed compensation technique.
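
As a rough illustration of what estimating a rotary-axis eccentric error can look like, the sketch below fits a least-squares circle to points measured along a nominally circular path and reads the centre offset as the eccentricity; it does not reproduce the paper's Non-Bar optical procedure or the ISO/CD 10791-6 measurement path.

    # Sketch: estimate rotary-axis eccentric error from points on a nominally
    # circular path via an algebraic least-squares circle fit (illustrative only,
    # not the Non-Bar / ISO 10791-6 procedure used in the paper).
    import numpy as np

    def fit_circle(x, y):
        # Solve x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense.
        A = np.column_stack([x, y, np.ones_like(x)])
        rhs = -(x ** 2 + y ** 2)
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        cx, cy = -a / 2.0, -b / 2.0
        return cx, cy, np.sqrt(cx ** 2 + cy ** 2 - c)

    theta = np.linspace(0.0, 2.0 * np.pi, 200)
    x = 0.012 + 50.0 * np.cos(theta)       # simulated path (mm), centre offset 12 um in X
    y = -0.008 + 50.0 * np.sin(theta)      # centre offset -8 um in Y
    cx, cy, r = fit_circle(x, y)
    print(f"eccentric error: dx = {cx*1000:.1f} um, dy = {cy*1000:.1f} um, radius = {r:.3f} mm")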

Keywords: calibration, compensation, rotary axis, five-axis computer numerical control (CNC) machine tools, eccentric error, optical calibration system, ISO/CD 10791-6

Procedia PDF Downloads 358
1922 Trainees' Perception of Virtual Learning Skills in Setting up the Simulator Welding Technology

Authors: Mohd Afif Md Nasir, Mohd Faizal Amin Nur, Jamaluddin Hasim, Abd Samad Hasan Basari, Mohd Halim Sahelan

Abstract:

This study investigated the suitability of Computer-Based Training (CBT) as an approach to skills competency development at the Centre of Instructor and Advanced Skills Training (CIAST), Shah Alam, Selangor and the National Youth Skills Institute (NYSI), Pagoh, Muar, Johor. It also examined trainees' perception of the Virtual Learning Environment (VLE) as a means of developing skills in welding technology. The significance of the study is to create a computer-based skills development approach in welding technology among new trainees in CIAST and IKBN, as well as to cultivate general skills among them. The study is also important in increasing the number of knowledge workers (K-workers) in the manufacturing industry in order to achieve the national vision of becoming an industrial nation by the year 2020. The design is survey research using questionnaires as the instrument, conducted with 136 trainees from CIAST and IKBN. Data from the questionnaires were processed in the Statistical Package for the Social Sciences (SPSS) to obtain frequencies, means and chi-square tests. The findings show that welding technology skills developed in the trainees as a result of the application of the Virtual Reality simulator at a high level (mean = 3.90) and that the respondents agreed the skills could be embedded through the application of the Virtual Reality simulator (78.01%). The study also found a significant difference in trainee skill characteristics through the application of the Virtual Reality simulator (p < 0.05). The Virtual Reality simulator is therefore suitable for use in the development of welding skills among trainees at skills training institutes.

Keywords: computer-based training, virtual learning environment, welding technology, virtual reality simulator

Procedia PDF Downloads 400
1921 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform

Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung

Abstract:

Functional near infrared spectroscopy (fNIRS) holds a favorable position among non-invasive brain imaging techniques. The concentration changes of oxygenated and de-oxygenated hemoglobin during a particular cognitive activity are the basis of this neuro-imaging modality. Two wavelengths of near-infrared light can be used with the modified Beer-Lambert law to infer, indirectly, the status of neuronal activity inside the brain. The temporal resolution of fNIRS is very good for real-time brain-computer interface applications, and its portability, low cost and acceptable temporal resolution place it in a strong position among neuro-imaging modalities. In this study, an optimization model for the impulse response function has been used to estimate/predict the initial dip using fNIRS data. In addition, the activity strength parameter related to a motor-based cognitive task has been analyzed. We found an initial dip that lasts around 200-300 milliseconds and better localizes neural activity.
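
A worked sketch of the two-wavelength modified Beer-Lambert computation mentioned above is shown below; the extinction coefficients, source-detector distance and differential pathlength factors are placeholder values for illustration, not calibrated constants.

    # Sketch of the modified Beer-Lambert law step: convert changes in optical
    # density at two NIR wavelengths into concentration changes of HbO and HbR.
    # All coefficient values below are illustrative placeholders.
    import numpy as np

    # rows: the two wavelengths; columns: extinction coefficients [HbO, HbR]
    E = np.array([[1.5, 3.8],
                  [2.5, 1.8]])
    distance_cm = 3.0                           # source-detector separation
    dpf = np.array([6.0, 5.0])                  # differential pathlength factor per wavelength

    def mbll(delta_od):
        """delta_od: change in optical density at the two wavelengths."""
        lhs = E * (distance_cm * dpf)[:, None]  # 2x2 system: dOD = E * d * DPF * dC
        return np.linalg.solve(lhs, delta_od)   # [dHbO, dHbR]

    print(mbll(np.array([0.02, 0.05])))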

Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing

Procedia PDF Downloads 201
1920 Assessment of E-Learning Facilities in Open and Distance Learning and Information Need by Students

Authors: Sabo Elizabeth

Abstract:

Electronic learning is an increasingly popular learning approach in higher education institutions due to the vast growth of internet technology, and it is important in human capital development. An investigation of open and distance learning and e-learning facilities and the information needs of open and distance learning (ODL) students was carried out in Jalingo, Nigeria. Structured questionnaires were administered to 70 registered ODL students of the NOUN. Information sourced from the respondents covered demographic, economic and institutional variables. Data collected for demographic variables were computed as frequency counts and percentages, and the assessment of the effectiveness of ODL facilities and the information needs of the students was computed on three- or four-point Likert rating scales. Findings indicated that there are more men than women, a large proportion of the respondents are married, and there are more mature students in ODL than youths. A high proportion of the ODL students hold qualifications higher than the secondary school certificate. The proportion of computer-literate ODL students was high, although a large number of the students do not own laptop computers. Inadequate e-books and reference materials, inadequate internet gadgets, and inadequate hard-copy books and reference materials are factors that limit the utilization of e-learning facilities in the study area. Inadequate computer facilities and power backup caused inconvenience and delays in administering and using e-learning facilities. To a high extent, open and distance learning students needed information on the university timetable and schedule of activities and on the availability of and access to books (hard copies and e-books) and reference materials. The respondents emphasized that contact with course coordinators via the internet would provide better learning and academic performance.

Keywords: open and distance learning, information required, electronic books, internet gadgets, Likert scale test

Procedia PDF Downloads 304
1919 Blended Learning in a Mathematics Classroom: A Focus in Khan Academy

Authors: Sibawu Witness Siyepu

Abstract:

This study explores the effects of an instructional design using blended learning on the learning of radian measures among engineering students. Blended learning is an education programme that combines online digital media with traditional classroom methods. It requires the physical presence of both lecturer and student in a mathematics computer laboratory, and it provides an element of control over time, place, path or pace. The focus was on the use of Khan Academy to supplement traditional classroom interactions. Khan Academy is a non-profit educational organisation created by educator Salman Khan with the goal of creating an accessible place for students to learn through watching videos on a computer. The researcher, who is also a lecturer in a mathematics support programme, collected data by instructing students to watch Khan Academy videos on radian measures and by supplying students with traditional classroom activities. The classroom activities entailed radian measure exercises extracted from the Internet. Students were given an opportunity to engage in class discussions, social interactions and collaborations. These activities required students to write formative assessment tests. The purpose of the formative assessment tests was to find out about the students’ understanding of radian measures, including errors and misconceptions they displayed in their calculations. Identification of errors and misconceptions serves as a pointer to students’ weaknesses and strengths in their learning of radian measures. At the end of data collection, semi-structured interviews were administered to a purposefully sampled group to explore their perceptions of, and feedback regarding, the use of the blended learning approach in the teaching and learning of radian measures. The study employed the Algebraic Insight Framework to analyse the data collected. The Algebraic Insight Framework is a subset of symbol sense which allows a student to enter expressions into computer-assisted systems correctly and efficiently. This study offered students opportunities to enter topics and subtopics on radian measures into a computer through the lens of Khan Academy, which demonstrates the procedures followed to reach solutions of mathematical problems. The researcher performed the task of explaining mathematical concepts and facilitated the process of reinvention of rules and formulae in the learning of radian measures. Lastly, activities that reinforce students’ understanding of radian measures were distributed. Results showed that this approach enthused the students in their learning of radian measures. Learning through videos prompted the students to ask questions, which brought clarity and sense-making to the classroom discussions. The data revealed that sense-making through reinvention of rules and formulae assisted the students in enhancing their learning of radian measures. This study recommends that the use of Khan Academy in blended learning be introduced as a socialisation programme to all first-year students. This will prepare students who are computer illiterate to become conversant with the use of Khan Academy as a powerful tool in the learning of mathematics. Khan Academy is a key technological tool that is pivotal for the development of students’ autonomy in the learning of mathematics and that promotes collaboration with lecturers and peers.

Keywords: algebraic insight framework, blended learning, Khan Academy, radian measures

Procedia PDF Downloads 287
1918 “Presently”: A Personal Trainer App to Self-Train and Improve Presentation Skills

Authors: Shyam Mehraaj, Samanthi E. R. Siriwardana, Shehara A. K. G. H., Wanigasinghe N. T., Wandana R. A. K., Wedage C. V.

Abstract:

A presentation is a critical tool for conveying not just spoken information but also a wide spectrum of human emotions. The single most effective way to make a presentation successful is to practice it beforehand. Preparing for a presentation has been shown to be essential for improving emotional control, intonation and prosody, pronunciation and vocabulary, as well as the quality of the presentation slides. As a result, practicing has become one of the most critical parts of giving a good presentation. The main focus of this research is to analyze the audio, video, and slides of presentations uploaded by presenters. The proposed solution is based on Natural Language Processing and Computer Vision techniques and caters to the presenter's need to rehearse a presentation beforehand using a mobile-responsive web application. The proposed system assists in practicing the presentation beforehand by identifying the presenter's emotions, body language, tonality, prosody, pronunciation and vocabulary, and the quality of the presentation slides. Overall, the system gives the presenter a rating and feedback about the performance so that presenters can improve their presentation skills.

Keywords: presentation, self-evaluation, natural language processing, computer vision

Procedia PDF Downloads 99
1917 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform

Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba

Abstract:

Real-time image and video processing is a requirement in many computer vision applications, e.g. video surveillance, traffic management and medical imaging. Processing such video applications requires high computational power; therefore, the optimal solution is the collaboration of the CPU with hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of image and video processing pipelines. The presented approach targets offloading the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration. CPU utilization drops and the frame rate rises to 60 fps for a 1080p full HD input video stream.
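
For context, the CPU baseline that such an accelerator is measured against is essentially a single OpenCV call; the sketch below times it on a stand-in 1080p frame and is not the HLS implementation described in the paper.

    # Software reference for the offloaded stage: plain OpenCV Canny on the CPU,
    # the kind of baseline a hardware accelerator is benchmarked against
    # (illustrative only, not the HLS implementation).
    import time
    import cv2
    import numpy as np

    frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)  # stand-in 1080p frame
    t0 = time.perf_counter()
    edges = cv2.Canny(frame, 50, 150)          # lower/upper hysteresis thresholds
    print(f"CPU Canny on one 1080p frame: {(time.perf_counter() - t0) * 1000:.1f} ms")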

Keywords: high level synthesis, canny edge detection, hardware accelerators, computer vision

Procedia PDF Downloads 459
1916 Improving Digital Data Security Awareness among Teacher Candidates with Digital Storytelling Technique

Authors: Veysel Çelik, Aynur Aker, Ebru Güç

Abstract:

Developments in information and communication technologies have increased both the speed of producing information and the speed of accessing new information, and the daily lives of individuals have started to change accordingly. New concepts such as e-mail, e-government, e-school and e-signature have emerged. For this reason, prospective teachers who will be future teachers or school administrators are expected to have a high awareness of digital data security. The aim of this study is to reveal the effect of the digital storytelling technique on the data security awareness of pre-service teachers in computer and instructional technology education departments. Participants were selected, on the principle of volunteering, from among third-grade students studying in the Computer and Instructional Technologies Department of the Faculty of Education at Siirt University. The pretest/posttest quasi-experimental research model, one of the experimental research models, was used. Within this framework, a 6-week lesson plan on digital data security awareness was prepared in accordance with the digital storytelling technique. Students in the experimental group formed groups of 3-6 people among themselves, and the groups were asked to prepare short videos or animations on digital data security awareness. The completed videos were watched and evaluated together with the prospective teachers during the evaluation process, which lasted approximately 2 hours. Both quantitative and qualitative data collection tools were used: a digital data security awareness scale and a semi-structured interview form consisting of open-ended questions developed by the researchers. According to the data obtained, the digital storytelling technique was effective in creating data security awareness and in creating permanent behavior changes in computer and instructional technology students.

Keywords: digital storytelling, self-regulation, digital data security, teacher candidates, self-efficacy

Procedia PDF Downloads 102
1915 Challenges of Online Education and Emerging E-Learning Technologies in Nigerian Tertiary Institutions Using Adeyemi College of Education as a Case Study

Authors: Oluwatofunmi Otobo

Abstract:

This paper presents a review of the challenges of e-learning and e-learning technologies in tertiary institutions. The review is based on the researcher's observations of the challenges of making use of ICT for learning in Nigeria, using Adeyemi College of Education as a case study, in comparison with tertiary institutions in the UK, the US and other more developed countries. In Nigeria, and probably Africa as a whole, power is the major challenge: its inconsistency and fluctuations pose the greatest obstacle to making use of online education inside and outside the classroom. The internet and its supporting infrastructure in many places in Nigeria are slow and unreliable, which could in turn frustrate any attempt at making use of online education and e-learning technologies. Lack of basic knowledge of computers, their technologies and facilities could also prove to be a challenge, as many young people are still not computer literate. Personal interest, on the part of both lecturers and students, is also a challenge; many people are not interested in learning how to make use of technologies, which makes them resistant to changing from old ways of doing things. These and other challenges were reviewed in this paper, and suggestions and recommendations were proffered.

Keywords: education, e-learning, Nigeria, tertiary institutions

Procedia PDF Downloads 166
1914 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device that uses an LM35 sensor to measure weather parameters, together with Artificial Intelligence (an Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions to collect data (temperature, relative humidity, and pressure). The acquired data were used to train ANN and ARIMA models in the MATLAB R2012b environment to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-squared (R2), and Mean Percentage Error (MPE) were used as standard measures of the models' performance in predicting precipitation. The results show that the developed device has an efficiency of 96% and is compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) had a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
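
A minimal sketch of the ARIMA half of the comparison is shown below on synthetic monthly rainfall rather than the study's data; it fits a model, forecasts two months ahead, and scores the forecast with RMSE and MAE as in the evaluation described (the ANN model is not reproduced).

    # Sketch of the ARIMA half of the comparison on synthetic monthly rainfall
    # (not the study's data): fit, forecast two months ahead, score with RMSE/MAE.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)
    rainfall = 100 + 30 * np.sin(np.arange(36) * 2 * np.pi / 12) + rng.normal(0, 5, 36)
    train, test = rainfall[:-2], rainfall[-2:]          # hold out the last two months

    fit = ARIMA(train, order=(1, 0, 1)).fit()
    forecast = fit.forecast(steps=2)

    rmse = np.sqrt(np.mean((forecast - test) ** 2))
    mae = np.mean(np.abs(forecast - test))
    print(f"RMSE = {rmse:.2f}, MAE = {mae:.2f}")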

Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device

Procedia PDF Downloads 67
1913 Combining Real Actors with Virtual Sets: The Future of Immersive Virtual Reality Fiction Cinema

Authors: Nefeli Dimitriadi

Abstract:

This paper aims to present immersive cinema in which real actors are filmed and integrated into Virtual Reality environments and 360 cinematic narrative, in comparison with 360 filming of real actors and sets and with fully computer-graphics animation movies with 3D avatars. Objectives: This research aims to present immersive cinema in which real actors are integrated into Virtual Reality environments and 360 cinematic narrative as the future of immersive cinema. Methodology: A comparative analysis is conducted between filming real actors combined with Virtual Reality sets, 360 filming of real actors and sets, and fully computer-graphics animation movies with 3D avatars, using the Virtual Reality movie Neurosynapses and others as case studies. Contribution: This research contributes to defining the best practices leading to impactful immersive cinematic narratives.

Keywords: virtual reality, 360 movies, immersive cinema, directing for virtual reality

Procedia PDF Downloads 95
1912 A Method for Multimedia User Interface Design for Mobile Learning

Authors: Shimaa Nagro, Russell Campion

Abstract:

Mobile devices are becoming ever more widely available, with growing functionality, and are increasingly used as an enabling technology to give students access to educational material anytime and anywhere. However, the design of educational material user interfaces for mobile devices is beset by many unresolved research issues, such as those arising from emphasising the information concepts and then mapping this information to appropriate media (modelling information, then mapping media effectively). This report describes a multimedia user interface design method for mobile learning. The method covers specification of user requirements and information architecture, media selection to represent the information content, design for directing attention to important information, and interaction design to enhance user engagement, based on Human-Computer Interaction (HCI) design strategies. The method will be evaluated through three different case studies to show that it is suitable for application to different areas: an application to teach major computer networking concepts; an application to deliver a history-based topic (after these two case studies have been completed, the method will be revised to remove deficiencies and then used to develop the third case study); and an application to teach mathematical principles. At that point, the method will again be revised into its final form. A usability evaluation will be carried out to measure the usefulness and effectiveness of the method. The investigation will combine qualitative and quantitative methods, including interviews and questionnaires for data collection and the three case studies for validating the method. The researcher has successfully produced the method, which is now undergoing validation and testing. From this point forward in the report, the researcher will refer to the method by the abbreviation MDMLM, which stands for Multimedia Design Mobile Learning Method.

Keywords: human-computer interaction, interface design, mobile learning, education

Procedia PDF Downloads 222
1911 Brain-Computer Interface System for Lower Extremity Rehabilitation of Chronic Stroke Patients

Authors: Marc Sebastián-Romagosa, Woosang Cho, Rupert Ortner, Christy Li, Christoph Guger

Abstract:

Neurorehabilitation based on Brain-Computer Interfaces (BCIs) shows important rehabilitation effects for patients after stroke. Previous studies have shown improvements for patients who are in a chronic stage and/or have severe hemiparesis, who are particularly challenging for conventional rehabilitation techniques. For this publication, seven stroke patients in the chronic phase with hemiparesis in the lower extremity were recruited. All of them participated in 25 BCI sessions, about 3 times a week. The BCI system was based on Motor Imagery (MI) of paretic ankle dorsiflexion and healthy wrist dorsiflexion with Functional Electrical Stimulation (FES) and avatar feedback. Assessments were conducted before, during and after the rehabilitation training to measure changes in motor function. Our primary measures were the 10-meter walking test (10MWT), Range of Motion (ROM) of ankle dorsiflexion and Timed Up and Go (TUG). Results show a significant increase in gait speed in the primary measure, 10MWT fast velocity, of 0.18 m/s, IQR = [0.12 to 0.2], P = 0.016. The speed in the TUG also increased significantly, by 0.1 m/s, IQR = [0.09 to 0.11], P = 0.031. The active ROM increased by 4.65°, IQR = [1.67 to 7.4], after rehabilitation training, P = 0.029. These functional improvements persisted at least one month after the end of the therapy. These outcomes show the feasibility of this BCI approach for chronic stroke patients and further support the growing consensus that these types of tools might develop into a new paradigm of rehabilitation tools for stroke patients. However, the results are from only seven chronic stroke patients, so the authors believe that this approach should be further validated in broader randomized controlled studies involving more patients. MI and FES-based non-invasive BCIs are showing improvement in the gait rehabilitation of patients in the chronic stage after stroke. This could have an impact on the rehabilitation techniques used for these patients, especially when they are severely impaired and their mobility is limited.

Keywords: neuroscience, brain-computer interfaces, rehabilitation, stroke

Procedia PDF Downloads 76
1910 A Motion Dictionary to Real-Time Recognition of Sign Language Alphabet Using Dynamic Time Warping and Artificial Neural Network

Authors: Marcio Leal, Marta Villamil

Abstract:

Computational recognition of sign languages aims to allow greater social and digital inclusion of deaf people through the interpretation of their language by computer. This article presents a model for recognition of two of the global parameters of sign languages: hand configuration and hand movement. Hand motion is captured through infrared technology and the hand joints are placed in a virtual three-dimensional space. A Multilayer Perceptron Neural Network (MLP) was used to classify hand configurations, and Dynamic Time Warping (DTW) recognizes hand motion. Beyond the recognition method itself, we provide a dataset of hand configurations and motion captures built with the help of professionals fluent in sign languages. Although this technology can be used to translate signs from any sign dictionary, Brazilian Sign Language (Libras) was used as the case study. Finally, the model presented in this paper achieved a recognition rate of 80.4%.
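
A minimal sketch of the dynamic time warping distance used here to match hand-motion trajectories is shown below (the classic dynamic-programming recursion, on toy one-dimensional sequences); the MLP hand-configuration classifier and the infrared capture are not shown.

    # Minimal dynamic time warping (DTW) distance between two trajectories,
    # the matching step described for hand motion (toy 1-D sequences).
    import numpy as np

    def dtw(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    template = [0.0, 0.2, 0.9, 1.4, 1.0, 0.3]                 # stored sign trajectory
    observed = [0.0, 0.1, 0.3, 1.0, 1.5, 1.4, 0.9, 0.2]       # captured trajectory
    print("DTW distance:", dtw(template, observed))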

Keywords: artificial neural network, computer vision, dynamic time warping, infrared, sign language recognition

Procedia PDF Downloads 191
1909 A New Approach towards the Development of Next Generation CNC

Authors: Yusri Yusof, Kamran Latif

Abstract:

Computer Numerical Control (CNC) machines have been widely used in industry since their inception. Currently, CNC technology is used for various operations such as milling, drilling, packing and welding. With the rapid growth of the manufacturing world, the demand for flexibility in CNC machines has increased rapidly. Previously, commercial CNCs failed to provide flexibility because their structure was of a closed nature that does not provide access to the inner features of the CNC; the ISO data interface model on which CNCs operate was also found to be limited. To overcome these problems, Open Architecture Control (OAC) technology and the STEP-NC data interface model were introduced. At present, the Personal Computer (PC) is the best platform for the development of open CNC systems. In this paper, ISO data interface model interpretation, verification and execution are highlighted with the introduction of new techniques. The proposed system is composed of ISO data interpretation, 3D simulation and machine motion control modules. The system was tested on an old 3-axis CNC milling machine, and its performance was found to be satisfactory. This implementation has successfully enabled a sustainable manufacturing environment.

Keywords: CNC, ISO 6983, ISO 14649, LabVIEW, open architecture control, reconfigurable manufacturing systems, sustainable manufacturing, Soft-CNC

Procedia PDF Downloads 493
1908 Emotiv EPOC BCI Matrix Speller Based on Single Emokey

Authors: S. M. Abdullah Al Mamun

Abstract:

Human Computer Interaction (HCI) is an excellent area for researchers seeking to make daily life simpler and faster. The hardware necessary for any BCI is generally expensive and not affordable for most people. Emotiv is one solution to this problem; it provides electroencephalograph (EEG) signals that reflect brain activity. The BCI virtual speller is an important application for people who have lost the use of their hands or their ability to speak because of disease or an unexpected accident. In this paper, a matrix speller is designed for the first time for Bengali-speaking people around the world. Bengali is one of the most commonly spoken languages, and with this speller many disabled people will be able to express their wishes in their mother tongue. The application is also usable for social networks and daily-life communication. For this virtual keyboard, the well-known matrix speller method with column flashing is applied, controlled by a single Emokey only. Emokey is a feature that translates emotional states into application inputs. The ITR (Information Transfer Rate) was 29.4 bits/min, and the typing speed reached up to 7.43 characters per minute.
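
For context, figures such as 29.4 bits/min typically come from the standard (Wolpaw) information transfer rate formula; the sketch below evaluates it with illustrative values for the number of targets, accuracy and selection time, which are assumptions rather than values reported in the paper.

    # Worked sketch of the standard (Wolpaw) ITR formula behind figures such as
    # 29.4 bits/min. The target count, accuracy and selection time are
    # illustrative assumptions, not values reported in the paper.
    import math

    def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
        p, n = accuracy, n_targets
        bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
        return bits * 60.0 / seconds_per_selection

    # e.g. a 6x6 matrix speller (36 targets), 90% accuracy, one selection every 8 s
    print(f"{itr_bits_per_min(36, 0.90, 8.0):.1f} bits/min")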

Keywords: brain computer interface, Emotiv EPOC, EEG, virtual keyboard, matrix speller

Procedia PDF Downloads 279
1907 Image Classification with Localization Using Convolutional Neural Networks

Authors: Bhuyain Mobarok Hossain

Abstract:

Image classification and localization research is currently an important area in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in computer vision, especially in video surveillance systems. To solve this problem, we apply a convolutional neural network at multiple scales and multiple locations in the image in a sliding window. Most localization networks work from a bounding box around the area of interest; in contrast to this architecture, we consider the problem as a classification problem in which each pixel of the image is a separate section. Image classification is the method of predicting an individual category, or assigning a label, from a set of data points, and it covers any labels applied across the image: an image can be classified as a day or night shot, or, likewise, images of cars and motorbikes can be automatically placed in their respective collections. Deep learning for image classification generally relies on convolutional layers; a network built from them is referred to as a convolutional neural network (CNN).

Keywords: image classification, object detection, localization, particle filter

Procedia PDF Downloads 279
1906 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules

Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel

Abstract:

The development of Ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. What this research looks at is how procedural modelling can be used to define such common characteristics and understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic churches built during the 19th century in the city of Wellington, New Zealand. ‘Procedural modelling’ is a digital modelling technique that has been growing in popularity, particularly within the game and film industry, as well as other fields such as industrial design and architecture. Such a design method entails the creation of a parametric ‘ruleset’ that can be easily adjusted to produce many variations of geometry, rather than a single geometry as is typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Gool, Nicholas Webb and Andre Brown, and most notably Mark Burry. What these precedents all share is how the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects’ geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves firstly establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then culminating all the findings into a single 3D procedural asset within the software ‘Houdini’. The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington region. This creates an initial library of architectural components that can be further expanded to encapsulate as wide a sample size as desired. Such a methodology greatly improves upon the efficiency and adjustability of digital modelling compared to current practices found in digital heritage reconstruction. It also gives greater accuracy to speculative design, as a lack of evidence for lost structures can be approximated using components from still-existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public’s general unawareness of architectural history that is identified in the Wellington-based research ‘Moving Images in Digital Heritage’ by Serdar Aydin et al.

Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling

Procedia PDF Downloads 110
1905 Individual Differences and Paired Learning in Virtual Environments

Authors: Patricia M. Boechler, Heather M. Gautreau

Abstract:

In this research study, postsecondary students completed an information-learning task in an avatar-based 3D virtual learning environment. Three factors were of interest in relation to learning: 1) the influence of collaborative vs. independent conditions, 2) the influence of the spatial arrangement of the virtual environment (linear, random and clustered), and 3) the relationship of individual differences such as spatial skill, general computer experience and video game experience to learning. Students completed pretest measures of prior computer experience and prior spatial skill. Following the premeasure administration, students were instructed to move through the virtual environment and study all the material within 10 information stations. In the collaborative condition, students proceeded in randomly assigned pairs, while in the independent condition they proceeded alone. After this learning phase, all students individually completed a multiple-choice test to determine information retention. The overall results indicated that students in pairs did not perform any better or worse than independent students. As for individual differences, only spatial ability predicted the performance of students; general computer experience and video game experience did not. Taking a closer look at the pairs and spatial ability, comparisons were made between pairs matched on high spatial ability, pairs matched on low spatial ability and pairs that were mismatched on spatial ability. The results showed that both high-matched pairs and mismatched pairs outperformed low-matched pairs. That is, if a pair had even one individual with strong spatial ability, they performed better than pairs with only low-spatial-ability individuals. This suggests that, in virtual environments, the specific individuals that are paired together are important for performance outcomes. The paper also includes a discussion of trends within the data that have implications for virtual environment education.

Keywords: avatar-based, virtual environment, paired learning, individual differences

Procedia PDF Downloads 92