Search results for: pen and paper or computer
25822 Peer Corrective Feedback on Written Errors in Computer-Mediated Communication
Authors: S. H. J. Liu
Abstract:
This paper aims to explore the role of peer Corrective Feedback (CF) in improving written productions by English-as-a-foreign-language (EFL) learners who work together via Wikispaces. It attempted to determine the effect of peer CF on form accuracy in English, such as grammar and lexis. Thirty-four EFL learners at the tertiary level were randomly assigned to the experimental (with peer feedback) or the control (without peer feedback) group; each group was subdivided into small groups of two or three. This resulted in six and seven small groups in the experimental and control groups, respectively. In the experimental group, each learner played a role as an assessor (providing feedback to others) as well as an assessee (receiving feedback from others). Each participant was asked to compose his/her written work and revise it based on the feedback. In the control group, on the other hand, learners neither provided nor received feedback but composed and revised their written work on their own. Data collected from learners’ compositions and post-task interviews were analyzed and reported in this study. Following the completion of the three writing tasks, 10 participants were selected and interviewed individually regarding their perception of collaborative learning in the Computer-Mediated Communication (CMC) environment. Language aspects to be analyzed included lexis (e.g., appropriate use of words), verb tenses (e.g., present and past simple), prepositions (e.g., in, on, and between), nouns, and articles (e.g., a/an). Feedback types consisted of CF, affective, suggestive, and didactic. Frequencies of feedback types and the accuracy of the language aspects were calculated. The results first suggested that accurate items were found more frequently in the experimental group than in the control group. These results indicate that those who worked collaboratively outperformed those who worked non-collaboratively on the accuracy of linguistic aspects. Furthermore, the first type of CF (e.g., corrections directly related to linguistic errors) was found to be the most frequently employed type, whereas affective and didactic feedback were the least used by the experimental group. The results further indicated that most participants perceived peer CF as helpful in improving language accuracy, and they demonstrated a favorable attitude toward working with others in the CMC environment. Moreover, some participants stated that when they provided feedback to their peers, they tended to pay attention to linguistic errors in their peers’ work but overlooked their own errors (e.g., past simple tense) when writing. Finally, L2 or FL teachers or practitioners are encouraged to employ CMC technologies to train their students to give each other feedback in writing to improve the accuracy of the language and to motivate them to attend to the language system.
Keywords: peer corrective feedback, computer-mediated communication (CMC), second or foreign language (L2 or FL) learning, Wikispaces
Procedia PDF Downloads 245
25821 Probabilistic Safety Assessment of Koeberg Spent Fuel Pool
Authors: Sibongiseni Thabethe, Ian Korir
Abstract:
The effective management of spent fuel pool (SFP) safety has been raised as one of the emerging issues to further enhance nuclear installation safety after the Fukushima accident on March 11, 2011. Before then, SFP safety-related issues had mainly focused on (a) controlling the configuration of the fuel assemblies in the pool with no loss of pool coolant, and (b) ensuring adequate pool storage space to prevent fuel criticality owing to chain reactions of the fission products, together with the ability for neutron absorption to keep the fuel cool. A probabilistic safety assessment (PSA) was performed using the Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) computer code. Event and fault tree analysis was performed to develop a PSA model for the Koeberg SFP. We present preliminary PSA results for events that lead to boiling and cause fuel uncovering, resulting in possible fuel damage in the Koeberg SFP.
Keywords: computer code, fuel assemblies, probabilistic risk assessment, spent fuel pool
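The abstract does not reproduce the Koeberg model itself, but the fault-tree quantification it relies on reduces to combining basic-event probabilities through AND/OR gates. Below is a minimal sketch of that calculation; the event names and probabilities are hypothetical stand-ins, not values from the Koeberg PSA.

```python
# Minimal fault-tree quantification sketch (hypothetical basic events;
# the actual Koeberg SAPHIRE model is far more detailed).

def and_gate(*p):
    """Probability that all independent input events occur."""
    out = 1.0
    for x in p:
        out *= x
    return out

def or_gate(*p):
    """Probability that at least one independent input event occurs."""
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

# Hypothetical basic events for loss of SFP cooling (illustrative numbers only)
p_pump_fail = 1e-3      # cooling pump fails to run
p_backup_fail = 5e-2    # backup pump unavailable on demand
p_offsite_power = 1e-4  # loss of off-site power
p_diesel_fail = 3e-2    # emergency diesel fails to start

loss_of_cooling = or_gate(
    and_gate(p_pump_fail, p_backup_fail),      # both pumps fail
    and_gate(p_offsite_power, p_diesel_fail),  # station blackout path
)
print(f"P(loss of SFP cooling) = {loss_of_cooling:.2e}")
```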
Procedia PDF Downloads 168
25820 The Challenges of Cloud Computing Adoption in Nigeria
Authors: Chapman Eze Nnadozie
Abstract:
Cloud computing, a technology made possible through virtualization within networks, represents a shift from the traditional ownership of infrastructure and other resources by distinct organizations to a more scalable pattern in which computer resources are rented online to organizations either on a pay-as-you-use basis or by subscription. In other words, cloud computing entails the renting of computing resources (such as storage space, memory, servers, applications, networks, etc.) by a third party to its clients on a pay-as-you-go basis. It is a new innovative technology that is globally embraced because of its renowned benefits, foremost of which is its cost-effectiveness for the organizations engaged with its services. In Nigeria, the services are provided either directly to companies, mostly by key IT players such as Microsoft, IBM, and Google, or in partnership with some other players such as Infoware, Descasio, and Sunnet. This enables organizations to rent IT resources on a pay-as-you-go basis, thereby saving them the waste accruable from the acquisition and maintenance of IT resources, such as ownership of a separate data centre. This paper appraises the challenges of cloud computing adoption in Nigeria, bearing in mind the country’s peculiarities in terms of infrastructural development. The methodologies used in this paper include the use of research questionnaires, a formulated hypothesis, and the testing of the formulated hypothesis. The major findings of this paper include the fact that there are some addressable challenges to the adoption of cloud computing in Nigeria. Furthermore, the country will gain significantly if the challenges, especially in the area of infrastructural development, are well addressed. This is because the research established that there are significant gains derivable from the adoption of cloud computing by organizations in Nigeria. However, these challenges can be overcome by concerted efforts on the part of government and other stakeholders.
Keywords: cloud computing, data centre, infrastructure, it resources, virtualization
Procedia PDF Downloads 351
25819 Challenges in Video Based Object Detection in Maritime Scenario Using Computer Vision
Authors: Dilip K. Prasad, C. Krishna Prasath, Deepu Rajan, Lily Rachmawati, Eshan Rajabally, Chai Quek
Abstract:
This paper discusses the technical challenges in maritime image processing and machine vision problems for video streams generated by cameras. Even well-documented problems such as horizon detection and registration of frames in a video are very challenging in maritime scenarios, and the more advanced problems of background subtraction and object detection in video streams are more challenging still. The dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute to the challenges discussed here.
Keywords: autonomous maritime vehicle, object detection, situation awareness, tracking
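As a concrete illustration of why the background is the problem: a conventional subtractor assumes a largely static scene, an assumption that waves and wakes violate. The sketch below is a standard OpenCV pipeline (the input file name, thresholds, and area cutoff are assumed); in a maritime scene it would flag wave crests as foreground, which is precisely the challenge the paper describes.

```python
# Conventional background-subtraction sketch (OpenCV MOG2); a baseline that
# struggles in maritime video because the "background" itself is dynamic.
import cv2

cap = cv2.VideoCapture("maritime_clip.mp4")  # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)          # foreground mask
    mask = cv2.medianBlur(mask, 5)          # suppress wave speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 50:         # ignore tiny ripples (assumed cutoff)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```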
Procedia PDF Downloads 457
25818 Adaptive Motion Planning for 6-DOF Robots Based on Trigonometric Functions
Authors: Jincan Li, Mingyu Gao, Zhiwei He, Yuxiang Yang, Zhongfei Yu, Yuanyuan Liu
Abstract:
Building an appropriate motion model is crucial for the trajectory planning of robots and directly determines the operational quality. An adaptive acceleration and deceleration motion planning method based on trigonometric functions for the end-effector of 6-DOF robots in the Cartesian coordinate system is proposed in this paper. This method not only achieves smooth translational and rotational motion by constructing a continuous jerk model, but also automatically adjusts the parameters of the trigonometric functions according to the variable inputs and the kinematic constraints. The results of computer simulation show that this method is correct and effective in achieving adaptive motion planning for linear trajectories.
Keywords: kinematic constraints, motion planning, trigonometric function, 6-DOF robots
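One common trigonometric construction of such a jerk-continuous profile (a sketch, not necessarily the authors' exact formulation) uses a raised-cosine acceleration a(t) = (a_max/2)(1 − cos(2πt/T)) over a phase of duration T, so that acceleration and jerk both vanish at the phase boundaries and phases can be chained without jerk discontinuities:

```python
# Trigonometric acceleration phase with continuous jerk (illustrative sketch;
# a_max and T are assumed values, not the paper's parameters).
import numpy as np

a_max = 2.0   # peak acceleration, m/s^2 (assumed)
T = 1.5       # duration of the acceleration phase, s (assumed)

t = np.linspace(0.0, T, 301)
acc = 0.5 * a_max * (1.0 - np.cos(2.0 * np.pi * t / T))        # a(t)
jerk = (np.pi * a_max / T) * np.sin(2.0 * np.pi * t / T)        # da/dt
vel = 0.5 * a_max * (t - T * np.sin(2.0 * np.pi * t / T) / (2.0 * np.pi))

print(f"end velocity : {vel[-1]:.4f} m/s (= a_max*T/2 = {a_max * T / 2:.4f})")
print(f"end acc/jerk : {acc[-1]:.2e}, {jerk[-1]:.2e}  (both ~0)")
```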
Procedia PDF Downloads 271
25817 Computer Simulation Studies of Aircraft Wing Architectures on Vibration Responses
Authors: Shengyong Zhang, Mike Mikulich
Abstract:
Vibration is a crucial limiting consideration in the analysis and design of airplane wing structures to avoid disastrous failures due to the propagation of existing cracks in the material. In this paper, we build CAD models of aircraft wings to capture the design intent of different structural configurations. Subsequent FEA vibration analysis is performed to study the natural vibration properties and impulsive responses of the resulting user-defined wing models. This study reveals the variations of the wing’s vibration characteristics with respect to changes in its structural configuration. Integrating CAD modelling and FEA vibration analysis enables designers to improve wing architectures to meet design requirements in the preliminary design stage.
Keywords: aircraft wing, CAD modelling, FEA, vibration analysis
Procedia PDF Downloads 165
25816 Numerical Simulation and Optimal Control in Gas Dynamic Laser GDLs
Authors: Laggoun Chouki
Abstract:
In this paper, we present the design and mechanisms of the physical process, and discuss the performance, of continuous gas dynamic lasers based on the N2(v=1)→CO2(001)(v=3) molecular transfer. The main objectives of work in this area are obtaining the high laser energies in short time durations needed for feasibility studies and establishing the physical principles that can be used to make laser sources capable of delivering high average powers. We note that, in order to reach both objectives, one has to convert electrical or chemical energy into laser energy using gaseous media. The wave is generated on the basis of the excited vibrational levels. Theoretical predictions are compared with experimental results. The feasibility and effectiveness of the proposed method are demonstrated by computer simulation.
Keywords: modelling, lasers, gas, numerical, nozzle
Procedia PDF Downloads 82
25815 High Gain Mobile Base Station Antenna Using Curved Woodpile EBG Technique
Authors: P. Kamphikul, P. Krachodnok, R. Wongsan
Abstract:
This paper presents the gain improvement of a sector antenna for a mobile phone base station by using a new technique to enhance the gain of a microstrip antenna (MSA) array without enlarging the construction. A curved woodpile Electromagnetic Band Gap (EBG) structure has been utilized to improve the gain instead. The advantages of this proposed antenna are that it reduces the length of the MSA array while providing higher gain and easy fabrication and installation. Moreover, it provides a fan-shaped radiation pattern, wide in the horizontal direction and relatively narrow in the vertical direction, which is appropriate for mobile phone base stations. The paper also presents the design procedures of a 1x8 MSA array associated with a U-shaped reflector for decreasing its back and side lobes. The fabricated curved woodpile EBG exhibits bandgap characteristics at 2.1 GHz and is utilized for realizing a resonant cavity of the MSA array. This idea has been verified by both the Computer Simulation Technology (CST) software and experimental results. As a result, the fabricated proposed antenna achieves a high gain of 20.3 dB, and the half-power beamwidths in the E- and H-planes are 36.8 and 8.7 degrees, respectively. Good qualitative agreement between measured and simulated results of the proposed antenna was obtained.
Keywords: gain improvement, microstrip antenna array, electromagnetic band gap, base station
Procedia PDF Downloads 311
25814 On Cloud Computing: A Review of the Features
Authors: Assem Abdel Hamed Mousa
Abstract:
The Internet of Things probably already influences your life. And if it doesn’t, it soon will, say computer scientists; ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers in virtually everything. Ubiquitous computing is roughly the opposite of virtual reality: where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, of all scales (from 1" displays to wall-sized). This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. We call our work "ubiquitous computing". This is different from PDAs, dynabooks, or information at your fingertips. It is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing was in the form of "tabs", "pads", and "boards" built at Xerox PARC in 1988-1994. Several papers describe this work, and there are web pages for the tabs and for the boards (which are a commercial product now). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer. With labor-intensive components such as processors and hard drives stored in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers will pay monthly fees, similar to a cable bill, for services that feed into their phones.
Keywords: internet, cloud computing, ubiquitous computing, big data
Procedia PDF Downloads 382
25813 Towards a Proof Acceptance by Overcoming Challenges in Collecting Digital Evidence
Authors: Lilian Noronha Nassif
Abstract:
Cybercrime investigation demands an appropriate evidence collection mechanism. If the investigator does not acquire digital proofs in a forensically sound manner, important information can be lost, and judges can discard case evidence because the acquisition was inadequate. Correct digital forensic seizing involves the preparation of professionals from the fields of law, police, and computer science. This paper presents important challenges faced during evidence collection in different kinds of places: the crime scene can be virtual or real, and technical obstacles and privacy concerns must be considered. All the challenges pointed out here highlight the precautions to be taken in digital evidence collection, and the suggested procedures contribute to best practices in the digital forensics field.
Keywords: digital evidence, digital forensics process and procedures, mobile forensics, cloud forensics
Procedia PDF Downloads 406
25812 Enhancing Fall Detection Accuracy with a Transfer Learning-Aided Transformer Model Using Computer Vision
Authors: Sheldon McCall, Miao Yu, Liyun Gong, Shigang Yue, Stefanos Kollias
Abstract:
Falls are a significant health concern for older adults globally, and prompt identification is critical to providing necessary healthcare support. Our study proposes a new fall detection method using computer vision based on modern deep learning techniques. Our approach involves training a transformer model on a large 2D pose dataset for general action recognition, followed by transfer learning. Specifically, we freeze the first few layers of the trained transformer model and train only the last two layers for fall detection. Our experimental results demonstrate that our proposed method outperforms both classical machine learning and deep learning approaches in fall/non-fall classification. Overall, our study suggests that our proposed methodology could be a valuable tool for identifying falls.
Keywords: healthcare, fall detection, transformer, transfer learning
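A minimal PyTorch sketch of the freeze-and-fine-tune step described above; the pose transformer below is a stand-in architecture (layer sizes, input format, and the pretraining itself are assumed), with only the last encoder layer and the classification head left trainable:

```python
# Freeze a pretrained pose transformer except its last two modules
# (illustrative stand-in model; not the authors' exact architecture).
import torch
import torch.nn as nn

class PoseTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=4, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(34, d_model)  # 17 joints x (x, y) per frame
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)  # fall / non-fall

    def forward(self, x):                  # x: (batch, frames, 34)
        z = self.encoder(self.embed(x))
        return self.head(z.mean(dim=1))    # pool over time

model = PoseTransformer()
# (pretend the model was pretrained on a large action-recognition pose dataset)

# Freeze everything, then unfreeze only the last encoder layer and the head.
for p in model.parameters():
    p.requires_grad = False
for p in model.encoder.layers[-1].parameters():
    p.requires_grad = True
for p in model.head.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```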
Procedia PDF Downloads 144
25811 Epistemic Uncertainty Analysis of Queue with Vacations
Authors: Baya Takhedmit, Karim Abbas, Sofiane Ouazine
Abstract:
Vacation queues are often employed to model many real situations, such as computer systems, communication networks, manufacturing and production systems, transportation systems, and so forth. These queueing models are usually solved at fixed parameter values. However, the parameter values themselves are determined from a finite number of observations and hence have uncertainty associated with them (epistemic uncertainty). In this paper, we consider the M/G/1/N queue with server vacation and exhaustive discipline, where we assume that the vacation parameter values are uncertain. We use the Taylor series expansion approach to estimate the expectation and variance of the model output due to epistemic uncertainties in the model input parameters.
Keywords: epistemic uncertainty, M/G/1/N queue with vacations, non-parametric sensitivity analysis, Taylor series expansion
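The Taylor-series approach itself is generic: for an output f(θ) with an uncertain input of mean μ and variance σ², E[f] ≈ f(μ) + ½f″(μ)σ² and Var[f] ≈ (f′(μ))²σ². The sketch below applies it to a simple M/M/1 mean queue length as a stand-in output (the paper's actual M/G/1/N-with-vacations formulas are more involved):

```python
# Second-order Taylor propagation of input uncertainty through a queueing
# performance measure (M/M/1 mean queue length as an illustrative stand-in).

def mean_queue_length(mu_srv, lam=0.8):
    """rho / (1 - rho) for an M/M/1 queue with arrival rate lam."""
    rho = lam / mu_srv
    return rho / (1.0 - rho)

def taylor_moments(f, mu, s2, h=1e-4):
    """E[f] and Var[f] from central finite-difference derivatives at mu."""
    d1 = (f(mu + h) - f(mu - h)) / (2 * h)            # f'(mu)
    d2 = (f(mu + h) - 2 * f(mu) + f(mu - h)) / h**2   # f''(mu)
    mean = f(mu) + 0.5 * d2 * s2
    var = d1**2 * s2
    return mean, var

# Service rate uncertain: mean 1.0, standard deviation 0.01 (assumed values).
m, v = taylor_moments(mean_queue_length, mu=1.0, s2=0.01**2)
print(f"E[L] ~ {m:.4f}, Var[L] ~ {v:.6f}")
```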
Procedia PDF Downloads 433
25810 Neural Style Transfer Using Deep Learning
Authors: Shaik Jilani Basha, Inavolu Avinash, Alla Venu Sai Reddy, Bitragunta Taraka Ramu
Abstract:
We can use the neural style transfer technique to build a picture with the same "content" as the original image but in the "style" of the picture we have chosen. Neural style transfer is a technique for merging the style of one image into another while retaining its original information; the only change is in how the image is rendered, giving it an additional artistic sense. The content image depicts the plan or drawing, while the style image supplies the colors of the drawing or painting used to portray the style. It is a computer vision programme that learns and processes images through deep convolutional neural networks. To implement the software, we trained deep learning models on the training data; whenever a user supplies an image and a style image, the style is transferred to the original image, and the result is shown as the output.
Keywords: neural networks, computer vision, deep learning, convolutional neural networks
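A minimal sketch of the two losses that drive neural style transfer in the standard Gram-matrix formulation (the paper does not specify its network or loss weights, so the feature maps and the 1e3 style weight below are illustrative stand-ins for CNN activations):

```python
# Content and style losses for neural style transfer (Gram-matrix form).
import torch

def gram_matrix(feat):
    """Channel-correlation (Gram) matrix of a feature map (1, C, H, W)."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def content_loss(gen_feat, content_feat):
    return torch.mean((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    return torch.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)

# Stand-ins for CNN feature maps of generated, content, and style images:
gen = torch.rand(1, 64, 32, 32, requires_grad=True)
content = torch.rand(1, 64, 32, 32)
style = torch.rand(1, 64, 32, 32)

total = content_loss(gen, content) + 1e3 * style_loss(gen, style)
total.backward()  # gradients w.r.t. the generated image drive the transfer
print(f"total loss: {total.item():.4f}")
```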
Procedia PDF Downloads 95
25809 Students' Perception of Virtual Learning Environment (VLE) Skills in Setting up the Simulator Welding Technology
Authors: Mohd Afif Md Nasir, Faizal Amin Nur Yunus, Jamaluddin Hashim, Abd Samad Hassan Basari, A. Halim Sahelan
Abstract:
The aim of this study is to identify the suitability of a Virtual Learning Environment (VLE) welding simulator application for Computer-Based Training (CBT) in developing skills among new students at the Advanced Technology Training Center (ADTEC), Batu Pahat, Johor, Malaysia and GIATMARA, Batu Pahat, Johor, Malaysia. The purpose of the study is to create a computer-based skills development approach in welding technology among new students in ADTEC and GIATMARA, as well as to cultivate elements of general skills among them. This study is also important in elevating the number of individual knowledge workers (K-workers) working in the manufacturing industry in order to achieve the national vision of becoming an industrial nation by the year 2020. The design of the study is a survey type of research which uses questionnaires as the instrument, and 136 students from ADTEC and GIATMARA were surveyed. Descriptive analysis is used to identify the frequency and mean values. The findings of the study show that welding technology skills developed in the students as a result of the application of the VLE simulator at a high level, and the respondents agreed that the skills could be embedded through the application of the VLE simulator. In summary, the VLE simulator is suitable for welding skills development training in terms of exposing new students to the relevant characteristics of welding skills and, at the same time, spurring the students’ interest in learning more about the skills.
Keywords: computer-based training (CBT), knowledge workers (K-workers), virtual learning environment, welding simulator, welding technology
Procedia PDF Downloads 348
25808 An Architectural Model of Multi-Agent Systems for Student Evaluation in Collaborative Game Software
Authors: Monica Hoeldtke Pietruchinski, Andrey Ricardo Pimentel
Abstract:
The teaching of computer programming to beginners is not a simple or trivial task. Several methodologies and research tools have been developed; however, the problem still remains. This paper presents a multi-agent system architecture to be incorporated into educational collaborative game software for teaching programming that monitors, evaluates and encourages collaboration by the participants. A literature review has been made of the concepts of Collaborative Learning, multi-agent systems, collaborative games and techniques for teaching programming using these concepts simultaneously.
Keywords: architecture of multi-agent systems, collaborative evaluation, collaboration assessment, gamifying educational software
Procedia PDF Downloads 463
25807 Enhancement of Natural Convection Heat Transfer within Closed Enclosure Using Parallel Fins
Authors: F. A. Gdhaidh, K. Hussain, H. S. Qi
Abstract:
A numerical study of natural convection heat transfer in a water-filled cavity has been examined in 3D for a single-phase liquid cooling system using an array of parallel plate fins mounted on one wall of the cavity. The heat generated by a heat source represents a computer CPU with dimensions of 37.5×37.5 mm mounted on a substrate. A cold plate is used as a heat sink installed on the opposite vertical end of the enclosure. The air flow inside the computer case is created by an exhaust fan. A turbulent air flow is assumed, and the k-ε model is applied. The fins are installed on the substrate to enhance the heat transfer. The applied power ranges between 15 and 40 W. In order to determine the thermal behaviour of the cooling system, the effects of the heat input and the number of parallel plate fins are investigated. The results illustrate that as the fin number increases, the maximum heat source temperature decreases. However, when the fin number increases beyond a critical value, the temperature starts to increase because the fins are too closely spaced, which obstructs the water flow. The introduction of parallel plate fins reduces the maximum heat source temperature by 10% compared to the case without fins. The cooling system maintains the maximum chip temperature at 64.68℃ when the heat input is 40 W, which is much lower than the recommended computer chip limit temperature of no more than 85℃, and hence the performance of the CPU is enhanced.
Keywords: chips limit temperature, closed enclosure, natural convection, parallel plate, single phase liquid
Procedia PDF Downloads 265
25806 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects
Authors: Ma Yuzhe, Burra Venkata Durga Kumar
Abstract:
The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware is rising to meet more complex processing and operation. However, the operating system, which provides the soul of a computer, has stalled in its development. In the face of the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. The Disk Operating System is too simple and leaves little room for innovation, so it is not a good choice. And MacOS is a special operating system for Apple computers, which cannot be widely used on personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of the UNIX operating system and is composed of many modules, making its core architecture relatively powerful. The Linux system supports all Internet protocols, so it has very good network functions. Linux supports multiple users, and each user has no influence on other users’ files. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is constantly upgraded and improved, and many different versions have been issued, suitable for community use and commercial use. The Linux system has good security because it relies on a file permission system. However, due to the constant emergence of vulnerabilities and hazards, the security of the operating system in use also needs closer attention. This article will focus on the analysis and discussion of Linux security issues.
Keywords: Linux, operating system, system management, security
Procedia PDF Downloads 108
25805 Enabling Oral Communication and Accelerating Recovery: The Creation of a Novel Low-Cost Electroencephalography-Based Brain-Computer Interface for the Differently Abled
Authors: Rishabh Ambavanekar
Abstract:
Expressive Aphasia (EA) is an oral disability, common among stroke victims, in which the Broca’s area of the brain is damaged, interfering with verbal communication abilities. EA currently has no dedicated technological solution, and its only viable remedies are inefficient or available only to the affluent. This prompts the need for an affordable, innovative solution to facilitate recovery and assist in speech generation. This project proposes a novel concept: using a wearable low-cost electroencephalography (EEG) device-based brain-computer interface (BCI) to translate a user’s inner dialogue into words. A low-cost EEG device was developed and found to be 10 to 100 times less expensive than any current EEG device on the market. As part of the BCI, a machine learning (ML) model was developed and trained using the EEG data. Two stages of testing were conducted to analyze the effectiveness of the device: a proof-of-concept test and a final solution test. The proof-of-concept test demonstrated an average accuracy above 90%, and the final solution test demonstrated an average accuracy above 75%. These two successful tests were used as a basis to demonstrate the viability of BCI research in developing lower-cost verbal communication devices. Additionally, the device proved not only to enable users to verbally communicate but also has the potential to assist in accelerated recovery from the disorder.
Keywords: neurotechnology, brain-computer interface, neuroscience, human-machine interface, BCI, HMI, aphasia, verbal disability, stroke, low-cost, machine learning, ML, image recognition, EEG, signal analysis
Procedia PDF Downloads 119
25804 Semiautomatic Calculation of Ejection Fraction Using Echocardiographic Image Processing
Authors: Diana Pombo, Maria Loaiza, Mauricio Quijano, Alberto Cadena, Juan Pablo Tello
Abstract:
In this paper, we present a semi-automatic tool for calculating the ejection fraction from an echocardiographic video signal derived from a DICOM-format database of Clinica de la Costa, Barranquilla. Described in this paper are each of the steps and methods used to perform the calculation, including acquisition and formation of the test samples, processing, and finally the calculation of the parameters to obtain the ejection fraction. Two image segmentation methods were compared, following a methodological framework that is similar only in the initial stages of processing (filtering and image enhancement) and differs in the algorithms finally implemented (Active Contour and Region Growing). The results were compared with the measurements obtained by two different medical specialists in cardiology, who calculated the ejection fraction of the study samples using the traditional method, which consists of drawing the region of interest directly on the computer using the echocardiography equipment and a simple equation to calculate the desired value. The results showed that if the quality of the video samples is good (i.e., after the pre-processing there is evidence of an improvement in the contrast), the values provided by the tool are substantially close to those reported by the physicians; also, the correlation between physicians does not vary significantly.
Keywords: echocardiography, DICOM, processing, segmentation, EDV, ESV, ejection fraction
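The "simple equation" mentioned above is the standard ejection fraction formula, EF = (EDV − ESV)/EDV × 100, computed from the end-diastolic and end-systolic volumes. A tiny sketch with illustrative volumes (not the study's data):

```python
# Ejection fraction from end-diastolic and end-systolic volumes.
def ejection_fraction(edv_ml, esv_ml):
    """EF (%) = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Illustrative volumes only; a normal EF is roughly 55-70%.
print(f"EF = {ejection_fraction(edv_ml=120.0, esv_ml=50.0):.1f} %")  # 58.3 %
```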
Procedia PDF Downloads 426
25803 Lamb Waves Propagation in Elastic-Viscoelastic Three-Layer Adhesive Joints
Authors: Pezhman Taghipour Birgani, Mehdi Shekarzadeh
Abstract:
In this paper, the propagation of Lamb waves in three-layer joints is investigated using the global matrix method. The theoretical boundary value problem in three-layer adhesive joints, with perfect bond and traction-free boundary conditions on their outer surfaces, is solved to find a combination of frequencies and modes with the lowest attenuation. The characteristic equation is derived by applying the continuity and boundary conditions in three-layer joints using the global matrix method. Attenuation and phase velocity dispersion curves are obtained by numerical solution of this equation with a computer code for a three-layer joint comprising an aluminum repair patch bonded to the aircraft aluminum skin by a layer of viscoelastic epoxy adhesive. To validate the numerical solution of the characteristic equation, wave structure curves are plotted for a particular mode at two different frequencies in the adhesive joint. The purpose of the present paper is to find a combination of frequencies and modes with minimum attenuation at high and low frequencies. These frequencies and modes are recognizable by transducers in inspections with Lamb waves because of their low attenuation level.
Keywords: three-layer adhesive joints, viscoelastic, lamb waves, global matrix method
Procedia PDF Downloads 393
25802 Using Eye-Tracking to Investigate TEM Validity and Design
Authors: Cao Xi
Abstract:
This paper reports a study which used eye-tracking to examine the cognitive validity of TEM 8 (Test for English Majors, Band 8). The study investigated test takers' reading patterns on four item types using eye-tracking and interviews. Thirty participants completed 22 items on a computer, with the Tobii X2 Eye Tracker recording their eye movements on screen. Eleven students further participated in a recall interview while viewing video footage of their gaze patterns on the test. The findings indicate, first, that different reading item types employ different cognitive processes and, second, that stronger and weaker test takers show different reading patterns on each item type. The implication of this study is to provide recommendations for the use of eye-tracking technology in language research.
Keywords: eye tracking, reading patterns, test for english majors, cognitive validity
Procedia PDF Downloads 160
25801 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to the perennial global concern they pose. Forest fires often lead to devastating consequences ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fires. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.
Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 72
25800 Application of Pattern Recognition Technique to the Quality Characterization of Superficial Microstructures in Steel Coatings
Authors: H. Gonzalez-Rivera, J. L. Palmeros-Torres
Abstract:
This paper describes the application of traditional computer vision techniques as a procedure for the automatic measurement of the secondary dendrite arm spacing (SDAS) from microscopic images. The algorithm is capable of finding the linear or curve-shaped secondary column of the main microstructure, measuring its length in micrometers and counting the number of spaces between dendrites. The automatic characterization was compared with a set of 1728 manually characterized images, leading to an accuracy of −0.27 µm for the length determination and a precision of ±2.78 counts for dendrite spacing counting, while also reducing the characterization time from 7 hours to 2 minutes.
Keywords: dendrite arm spacing, microstructure inspection, pattern recognition, polynomial regression
Procedia PDF Downloads 45
25799 Computer Science, Mass Communications, and Social Entrepreneurship: An Interdisciplinary Approach to Teaching Interactive Storytelling for the Greater Good
Authors: Susan Cardillo
Abstract:
This research considers ways to bridge the gap between Computer Science and Media Communications and, while doing so, create Social Entrepreneurship for student success. New Media, as it has been referred to, is content available on demand through the Internet or a digital device, usually containing some kind of interactivity and creative participation. It is the interplay between technology, images, media and communications. The next generation of newspaper, radio, television, and film students needs to have a working knowledge of the technologies available for the creation of their work and to be taught to use this knowledge to create a voice. The work is interdisciplinary: in communications, we understand the necessity of reporting and disseminating information; in documentary film, we understand the instructional and historic aspects of media and technology; and in the non-profit sector, we see the need for expanding outlets for good. So the true necessity is to utilize ‘new media’ technologies to advance social causes while reporting information, teaching and creating art. Goals: The goal of this research is to give communications students a better understanding of the technology that is both currently at their disposal and on the horizon, so that they can use it in their media, communications and art endeavors to be a voice for their generation. There is no longer a need to be a computer scientist to have a working knowledge of communication technologies and how they will benefit our work. There are many free and easy-to-use applications available for the creation of interactive communications. Methodology: This is a qualitative case study that puts these ideas into action. There is a survey at the end of the experiment that is qualitative in nature and allows the participants to share ideas and feelings about the technology and approach.
Keywords: interactive storytelling, web documentary, mass communications, teaching
Procedia PDF Downloads 280
25798 Natural Language News Generation from Big Data
Authors: Bastian Haarmann, Likas Sikorski
Abstract:
In this paper, we introduce an NLG application for the automatic creation of ready-to-publish texts from big data. The fully automatically generated stories have a high resemblance to the style in which a human writer would draw up a news story. Topics may include soccer games, stock exchange market reports, weather forecasts and many more. The generation of the texts follows human language production. Each generated text is unique. Ready-to-publish stories written by a computer application can help humans to quickly grasp the outcomes of big data analyses, save time-consuming pre-formulations for journalists, and cater to rather small audiences by offering stories that would otherwise not exist.
Keywords: big data, natural language generation, publishing, robotic journalism
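The paper does not disclose its generation pipeline; as a toy illustration of the data-to-text idea for the soccer case, a template-based sketch might look like the following (templates, field names, and match data are all invented):

```python
# Toy template-based data-to-text generation (illustrative only; real NLG
# pipelines model document planning and surface realization far more richly).
import random

TEMPLATES = [
    "{team_a} beat {team_b} {score_a}-{score_b}, with {scorer} scoring the decisive goal.",
    "A {score_a}-{score_b} win over {team_b} saw {team_a} take the points; {scorer} got on the scoresheet.",
]

def soccer_story(record):
    """Render one structured match record as a short news sentence."""
    return random.choice(TEMPLATES).format(**record)

match = {"team_a": "FC Example", "team_b": "Sample United",
         "score_a": 2, "score_b": 1, "scorer": "J. Doe"}
print(soccer_story(match))
```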
Procedia PDF Downloads 431
25797 Edge Detection and Morphological Image for Estimating Gestational Age Based on Fetus Length Automatically
Authors: Retno Supriyanti, Ahmad Chuzaeri, Yogi Ramadhani, A. Haris Budi Widodo
Abstract:
The use of ultrasonography in the medical world has become very popular, including in the diagnosis of pregnancy. In determining pregnancy, ultrasonography has many roles, such as checking the position of the fetus, detecting abnormal pregnancy, estimating fetal age, and others. Unfortunately, all these things still require an obstetrician to interpret the images produced by ultrasonography. One of the most striking is the determination of gestational age, which is usually done by obstetricians measuring the length of the fetus manually. In this study, we developed a computer-aided diagnosis system for the determination of gestational age by measuring the length of the fetus automatically using an edge detection method and image morphology. Results showed that the system is sufficiently accurate in determining gestational age based on image processing.
Keywords: computer aided diagnosis, gestational age, diameter of uterus, length of fetus, edge detection method, morphology image
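A hedged sketch of such an edge-detection-plus-morphology pipeline in OpenCV (the file name, thresholds, kernel size, and pixel-to-millimetre calibration are all assumed, and the largest-contour heuristic is an illustrative simplification of the paper's method):

```python
# Edge detection + morphology for an automatic fetal-length estimate.
import cv2
import numpy as np

img = cv2.imread("fetal_ultrasound.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
img = cv2.GaussianBlur(img, (5, 5), 0)               # suppress speckle noise
edges = cv2.Canny(img, 50, 150)                      # edge detection

kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # bridge edge gaps

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
fetus = max(contours, key=cv2.contourArea)           # assume largest contour

# Fetal length approximated by the long side of the minimum-area rectangle.
(_, _), (w, h), _ = cv2.minAreaRect(fetus)
mm_per_px = 0.2                                      # assumed calibration
length_mm = max(w, h) * mm_per_px
print(f"estimated fetal length: {length_mm:.1f} mm")
```

Gestational age would then follow from the measured length via a standard growth chart lookup.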
Procedia PDF Downloads 294
25796 Computer Assisted Instructions for a Better Achievement in and Attitude towards Agricultural Economics
Authors: Abiodun Ezekiel Adesina, Alice M. Olagunju
Abstract:
This study determined the effects of Computer Assisted Instruction (CAI) and Academic Self-Concept (ASC) on pre-service teachers’ achievement in Agricultural Economics (AE) concepts in Colleges of Education (CoE) in Southwest Nigeria. The study adopted a pretest-posttest, control group, quasi-experimental design. Six CoE with e-library facilities were purposively selected. Two hundred and thirty-two intact 200-level Agricultural Education students offering the introduction to AE course across the six CoE were the participants. The participants were assigned to three groups (drill and practice mode (D&PM), 77; tutorial mode (TM), 73; and control, 82). Treatment lasted eight weeks. The AE achievement test (r=0.76), the pre-service teachers’ ASC scale (r=0.81), instructional guides for the tutorial (r=0.76), drill and practice (r=0.81) and conventional lecture (r=0.83) modes, and a teacher performance assessment sheet were used for data collection. Data were analysed using analysis of covariance and Scheffe post-hoc tests at the 0.05 level of significance. The participants were 55.6% female, with a mean age of 20.8 years. Treatment had significant main effects on pre-service teachers’ achievement (F(2,207)=60.52; η²=0.21; p < 0.05). Participants in D&PM (x̄=27.83) had the best achievement compared to those in TM (x̄=25.41) and control (x̄=18.64) groups. ASC had a significant main effect on pre-service teachers’ achievement (F(1,207)=22.011; η²=0.166; p < 0.05). Participants with high ASC (x̄=27.52) had better achievement compared to those with low ASC (x̄=22.37). The drill and practice and tutorial instructional modes enhanced students’ achievement in Agricultural Economics concepts. Therefore, the two instructional modes should be adopted for improved learning outcomes in agricultural economics concepts among pre-service teachers.
Keywords: achievement in agricultural economics concepts, colleges of education in southwestern Nigeria, computer-assisted instruction, drill and practice instructional mode, tutorial instructional mode
Procedia PDF Downloads 203
25795 Use of FWD in Determination of Bonding Condition of Semi-Rigid Asphalt Pavement
Authors: Nonde Lushinga, Jiang Xin, Danstan Chiponde, Lawrence P. Mutale
Abstract:
In this paper, a falling weight deflectometer (FWD) was used to determine the bonding condition of a newly constructed semi-rigid base pavement. Using the Evercalc back-calculation computer programme, it was possible to quickly and accurately determine the structural condition of the pavement system from FWD test data. The bonding condition of the pavement layers was determined from the shear stresses and strains (relative horizontal displacements) calculated at the interfaces of the pavement layers with the BISAR 3.0 pavement computer programme. Thus, by using non-linear layered elastic theory, a pavement structure is analysed in the same way as other civil engineering structures. From non-destructive FWD testing, the required bonding condition of the pavement layers was quantified from the soundly based principles of Goodman’s constitutive model shown in equation 2, thereby producing the shear reaction modulus (Ks), which gives an indication of the bonding state of the pavement layers. Furthermore, a Tack Coat Failure Ratio (TFR), which has long been used in the USA in pavement evaluation, was also used in the study in order to give validity to the study. According to research [39], the interface bond between two asphalt layers is determined by use of the Tack Coat Failure Ratio (TFR), which is the ratio of the stiffness of the top asphalt layer over the stiffness of the second asphalt layer (E1/E2) in a slipped pavement. TFR gives an indication of the strength of the tack coat, which is the main determinant of interlayer slipping. The criterion is that if the interface is in the state of full bond, TFR will be greater than or equal to 1, while a TFR of 0 means full slip. Results of the calculations showed that the TFR value was 1.81, which re-affirmed the position that the pavement under study was in a state of full bond because the value was greater than 1. It was concluded that the FWD can be used to determine the bonding condition of existing and newly constructed pavements.
Keywords: falling weight deflectometer (FWD), back-calculation, semi-rigid base pavement, shear reaction modulus
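Restating the TFR check as a tiny sketch: the layer moduli below are assumed back-calculated values chosen so the ratio reproduces the reported 1.81; only the ratio definition and the ≥ 1 criterion come from the study.

```python
# Tack Coat Failure Ratio check (moduli are assumed illustrative values).
def tack_coat_failure_ratio(e1_mpa, e2_mpa):
    """TFR = E1 / E2; >= 1 indicates full bond, 0 indicates full slip."""
    return e1_mpa / e2_mpa

tfr = tack_coat_failure_ratio(e1_mpa=5430.0, e2_mpa=3000.0)  # gives 1.81
state = "full bond" if tfr >= 1.0 else "partial bond / slip"
print(f"TFR = {tfr:.2f} -> {state}")
```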
Procedia PDF Downloads 514
25794 The Methodology of Hand-Gesture Based Form Design in Digital Modeling
Authors: Sanghoon Shim, Jaehwan Jung, Sung-Ah Kim
Abstract:
As digital technology develops, studies on the Tangible User Interface (TUI), which links the physical environment, through the human senses, with the virtual environment through the computer, are actively being conducted. In addition, there has been tremendous advance in computational design through the use of computer-aided design techniques, which enable optimized decision-making through machine learning and the parallel comparison of alternatives. However, while a complex design that responds to user requirements or performance can emerge from the intuition of the designer, it is difficult to actualize the emerged design by the designer’s ability alone. Ancillary tools such as Gaudí’s sandbags can be instruments to reinforce and evolve ideas that emerge from designers. With the advent of many commercial tools that support 3D objects, designers’ intentions are easily reflected in their designs, but the degree of that reflection depends on their proficiency with the design tools. This study implements an environment in which form can be shaped by the fingers of even a novice designer in the initial design phase of complex building design. A Leap Motion sensor is used to recognize the hand motions of the designer, which are converted into digital information to realize an environment that can be linked in real time in virtual reality (VR). In addition, the implemented design can be linked with Rhino™, a 3D authoring tool, and its plug-in Grasshopper™ in real time. As a result, it is possible to design sensibly using the TUI, and it can serve as a tool for assisting designer intuition.
Keywords: design environment, digital modeling, hand gesture, TUI, virtual reality
Procedia PDF Downloads 366
25793 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery
Authors: Forouzan Salehi Fergeni
Abstract:
A brain-computer interface (BCI) system converts the movement intents of a person into commands for action by employing brain signals such as the electroencephalogram (EEG). When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve brain-computer interface (BCI) systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on electroencephalography (EEG) are greatly needed. Subject dependency and non-stationarity are two features of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for the purpose of denoising, and then a method of analysis of variance is used to select the more appropriate and informative channels from among a large number of different channels. After ordering the channels based on their efficiencies, sequential forward channel selection is employed to choose just a few reliable ones. Features from the two domains of time and wavelet are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, with the purpose of comparing their performance in this application. Utilizing a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI competition III. Outcomes demonstrated that the SVM classifier achieved the greatest classification precision of 97% when compared to the other available approaches. The entire investigative findings confirm that the suggested framework is reliable and computationally effective for the construction of BCI systems and surpasses the existing methods.
Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine
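A sketch of the ANOVA-ranking plus sequential-forward-selection idea using scikit-learn; the feature matrix, labels, and the one-feature-per-channel simplification are synthetic stand-ins for the paper's EEG time/wavelet features:

```python
# ANOVA channel ranking followed by sequential forward selection
# (synthetic data; illustrative of the framework, not the paper's results).
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 22
X = rng.normal(size=(n_trials, n_channels))   # one feature per channel (toy)
y = rng.integers(0, 2, size=n_trials)         # left- vs right-hand MI labels
X[y == 1, :3] += 1.0                          # make 3 channels informative

# 1) ANOVA F-test: rank channels by class separability.
f_scores, _ = f_classif(X, y)
ranked = np.argsort(f_scores)[::-1]

# 2) Sequential forward selection over the ranked channels.
selected, best = [], 0.0
for ch in ranked:
    trial = selected + [ch]
    score = cross_val_score(SVC(), X[:, trial], y, cv=10).mean()
    if score > best:
        selected, best = trial, score   # keep the channel only if CV improves

print(f"selected channels: {selected}, CV accuracy: {best:.3f}")
```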
Procedia PDF Downloads 50