Search results for: computer tasks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3743

2993 Frequent Item Set Mining for Big Data Using MapReduce Framework

Authors: Tamanna Jethava, Rahul Joshi

Abstract:

Frequent item sets play an essential role in many data mining tasks that aim to find interesting patterns in databases. Typically, a frequent item set is a set of items that often appear together in a transaction dataset. Several algorithms are used for frequent item set mining, yet most do not scale to the kind of data we are presented with today, so-called "Big Data". Big Data refers to collections of very large data sets. Our approach is to perform frequent item set mining over large datasets in a scalable and fast way. We use the MapReduce programming model, together with HDFS, to find frequent item sets from Big Data on a large cluster. This paper focuses on combining a pre-processing step with a mining algorithm as a hybrid approach to big data on the Hadoop platform.
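The MapReduce counting pattern the paper builds on can be sketched in a few lines of Python. This is a minimal single-machine illustration with an invented transaction list and support threshold, not the authors' implementation; in the actual system the map and reduce phases would run as Hadoop jobs over data stored in HDFS.

```python
from collections import Counter
from itertools import combinations

def map_phase(transaction):
    # Mapper: emit (itemset, 1) for every 2-item combination in a transaction.
    # Sorting makes each pair canonical so identical itemsets group together.
    return [(pair, 1) for pair in combinations(sorted(set(transaction)), 2)]

def reduce_phase(mapped, min_support):
    # Reducer: sum the counts per itemset and keep only the frequent ones.
    counts = Counter()
    for itemset, one in mapped:
        counts[itemset] += one
    return {itemset: n for itemset, n in counts.items() if n >= min_support}

transactions = [["bread", "milk"], ["bread", "milk", "eggs"], ["milk", "eggs"]]
mapped = [kv for t in transactions for kv in map_phase(t)]
frequent = reduce_phase(mapped, min_support=2)
print(frequent)  # {('bread', 'milk'): 2, ('eggs', 'milk'): 2}
```

On a cluster, the shuffle stage performs the grouping that `Counter` does here, which is what lets the count scale out across nodes.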

Keywords: frequent item set mining, big data, Hadoop, MapReduce

Procedia PDF Downloads 436
2992 GenAI Agents in Product Management: A Case Study from the Manufacturing Sector

Authors: Aron Witkowski, Andrzej Wodecki

Abstract:

Purpose: This study aims to explore the feasibility and effectiveness of utilizing Generative Artificial Intelligence (GenAI) agents as product managers within the manufacturing sector. It seeks to evaluate whether current GenAI capabilities can fulfill the complex requirements of product management and deliver comparable outcomes to human counterparts. Study Design/Methodology/Approach: This research involved the creation of a support application for product managers, utilizing high-quality sources on product management and generative AI technologies. The application was designed to assist in various aspects of product management tasks. To evaluate its effectiveness, a study was conducted involving 10 experienced product managers from the manufacturing sector. These professionals were tasked with using the application and providing feedback on the tool's responses to common questions and challenges they encounter in their daily work. The study employed a mixed-methods approach, combining quantitative assessments of the tool's performance with qualitative interviews to gather detailed insights into the user experience and perceived value of the application. Findings: The findings reveal that GenAI-based product management agents exhibit significant potential in handling routine tasks, data analysis, and predictive modeling. However, there are notable limitations in areas requiring nuanced decision-making, creativity, and complex stakeholder interactions. The case study demonstrates that while GenAI can augment human capabilities, it is not yet fully equipped to independently manage the holistic responsibilities of a product manager in the manufacturing sector. Originality/Value: This research provides an analysis of GenAI's role in product management within the manufacturing industry, contributing to the limited body of literature on the application of GenAI agents in this domain. 
It offers practical insights into the current capabilities and limitations of GenAI, helping organizations make informed decisions about integrating AI into their product management strategies. Implications for Academic and Practical Fields: For academia, the study suggests new avenues for research in AI-human collaboration and the development of advanced AI systems capable of higher-level managerial functions. Practically, it provides industry professionals with a nuanced understanding of how GenAI can be leveraged to enhance product management, guiding investments in AI technologies and training programs to bridge identified gaps.

Keywords: generative artificial intelligence, GenAI, NPD, new product development, product management, manufacturing

Procedia PDF Downloads 49
2991 Deployment of Attack Helicopters in Conventional Warfare: The Gulf War

Authors: Mehmet Karabekir

Abstract:

Attack helicopters (AHs) are usually deployed in conventional warfare to destroy the armored and mechanized forces of the enemy. In addition, AHs are able to perform various tasks in deep and close operations: intelligence, surveillance, reconnaissance, air assault operations, and search and rescue operations. Apache helicopters were properly employed in the Gulf Wars and contributed to the success of the campaigns by destroying a large number of armored and mechanized vehicles of the Iraqi Army. The purpose of this article is to discuss the deployment of AHs in conventional warfare in the light of the Gulf Wars. First, the employment of AHs in deep and close operations is addressed with regard to doctrine. Second, the doctrinal and tactical usage of the US armed forces' AH-64 in the 1st and 2nd Gulf Wars is discussed.

Keywords: attack helicopter, conventional warfare, gulf wars

Procedia PDF Downloads 473
2990 Time Compression in Engineer-to-Order Industry: A Case Study of a Norwegian Shipbuilding Industry

Authors: Tarek Fatouh, Chehab Elbelehy, Alaa Abdelsalam, Eman Elakkad, Alaa Abdelshafie

Abstract:

This paper explores the possibility of time compression in engineer-to-order production networks. A case study research method is applied to a Norwegian shipbuilding project by implementing the value stream mapping lean tool, with total cycle time as the unit of analysis. The analysis demonstrated the time deviations for the planned tasks in one of the processes of the shipbuilding project. The authors then developed a future state map by removing time wastes from the value stream process.

Keywords: engineer to order, total cycle time, value stream mapping, shipbuilding

Procedia PDF Downloads 164
2989 Authentication and Legal Admissibility of 'Computer Evidence from Electronic Voting Machines' in Electoral Litigation: A Qualitative Legal Analysis of Judicial Opinions of Appellate Courts in the USA

Authors: Felix O. Omosele

Abstract:

Several studies have established that electronic voting machines are prone to multi-faceted challenges, one of which is their capacity to lose votes after the ballots have been cast. Therefore, the international consensus appears to favour the use of electronic voting machines that are accompanied by a voter-verified paper audit trail (VVPAT). At present, there is no known study that has evaluated the impact (or otherwise) of this verification and auditing on the authentication, admissibility and evidential weight of electronically obtained electoral data. This legal inquiry is important, as elections are sometimes won or lost in court on the basis of such data. This gap will be filled by the present research work. Using the United States of America as a case study, this paper employs a qualitative legal analysis of several of its appellate courts' judicial opinions. The analysis equally unearths the statutory rules and regulations that bear on the research problem. The objective of the research is to highlight the role played by VVPAT in electoral evidence, as seen through the eyes of the court. The preliminary outcome of this qualitative analysis shows that the admissibility of, and weight attached to, 'computer evidence from e-voting machines' (CEEM) are often treated under the general standards applied to other computer-stored evidence. These standards sometimes fail to address the peculiar challenges posed by CEEM, particularly with respect to tabulation and transmission. This paper, therefore, argues that CEEM should be accorded unique consideration by courts. It proposes the development of a legal standard which recognises verification and auditing as 'weight enhancers' for electronically obtained electoral data.

Keywords: admissibility of computer evidence, electronic voting, qualitative legal analysis, voting machines in the USA

Procedia PDF Downloads 196
2988 Synchronous Courses Attendance in Distance Higher Education: Case Study of a Computer Science Department

Authors: Thierry Eude

Abstract:

The use of videoconferencing platforms adapted to teaching offers students the opportunity to take distance education courses in much the same way as traditional in-class training. The sessions can be recorded, and they allow students the option of following the courses synchronously or asynchronously. Three typical profiles can then be distinguished: students who choose to follow the courses synchronously, students who could attend the course in synchronous mode but choose to follow the session offline, and students who follow the course asynchronously because professional or personal constraints prevent them from attending when it is offered. Our study consists of observing attendance at all distance education courses offered in synchronous mode by the Computer Science and Software Engineering Department at Laval University during 10 consecutive semesters. The aim is to identify factors that influence students in their choice of attending distance courses in synchronous mode. It was found that participation tends to be relatively stable over the years for any one semester (fall, winter, summer) and is similar from one course to another, even though students may be increasingly familiar with synchronous distance education courses. Average participation is around 28%. There may be deviations, but they concern only a few courses during certain semesters, suggesting that these deviations occurred only because of the composition of particular cohorts during specific semesters. Furthermore, course schedules have a great influence on the attendance rate: the highest rates are all for courses scheduled outside office hours.

Keywords: attendance, distance undergraduate education in computer science, student behavior, synchronous e-learning

Procedia PDF Downloads 284
2987 Insight2OSC: Using Electroencephalography (EEG) Rhythms from the Emotiv Insight for Musical Composition via Open Sound Control (OSC)

Authors: Constanza Levicán, Andrés Aparicio, Rodrigo F. Cádiz

Abstract:

The artistic usage of brain-computer interfaces (BCI), initially intended for medical purposes, has increased in the past few years as they have become more affordable and available to the general population. One interesting question that arises from this practice is whether it is possible to compose or perform music by using only the brain as a musical instrument. In order to approach this question, we propose a BCI for musical composition based on the representation of certain mental states as the musician thinks about sounds. We developed software, called Insight2OSC, that allows the Emotiv Insight device to be used as a musical instrument by sending its EEG data to audio processing software such as MaxMSP through the OSC protocol. We provide two compositional applications bundled with the software, which we call Mapping your Mental State and Thinking On. The signals produced by the brain have different frequencies (or rhythms) depending on the level of activity, and they are classified as one of the following waves: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), gamma (30-50 Hz). These rhythms have been found to be related to some recognizable mental states. For example, the delta rhythm is predominant in deep sleep, while the beta and gamma rhythms have higher amplitudes when the person is awake and very concentrated. Our first application (Mapping your Mental State) produces different sounds representing the mental state of the person (focused, active, relaxed, or in a state similar to deep sleep) by selecting the dominant rhythms provided by the EEG device. The second application relies on the physiology of the brain, which is divided into several lobes: frontal, temporal, parietal and occipital.
The frontal lobe is related to abstract thinking and high-level functions, the parietal lobe conveys the stimulus of the body senses, the occipital lobe contains the primary visual cortex and processes visual stimulus, the temporal lobe processes auditory information and it is important for memory tasks. In consequence, our second application (Thinking On) processes the audio output depending on the users’ brain activity as it activates a specific area of the brain that can be measured using the Insight device.
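As a rough illustration of how a dominant rhythm can be selected from the frequency bands listed above, the following Python sketch computes per-band spectral power with an FFT. The sampling rate and the synthetic test signal are invented for illustration; the actual Insight2OSC software reads these values from the Insight headset and forwards them to audio software such as MaxMSP over OSC.

```python
import numpy as np

# Classical EEG bands (Hz), as listed in the abstract.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def band_powers(signal, fs):
    # Mean spectral power of the signal inside each band.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def dominant_rhythm(signal, fs):
    # Label the signal with the band carrying the most power.
    powers = band_powers(signal, fs)
    return max(powers, key=powers.get)

fs = 128                           # assumed sampling rate, for illustration
t = np.arange(2 * fs) / fs         # two seconds of signal
eeg = np.sin(2 * np.pi * 10 * t)   # synthetic 10 Hz oscillation (alpha band)
print(dominant_rhythm(eeg, fs))    # alpha
```

The resulting label or the raw band powers are then what a tool like this would transmit as OSC messages to the sound-processing patch.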

Keywords: BCI, music composition, emotiv insight, OSC

Procedia PDF Downloads 322
2986 Effectiveness of Computer Video Games on the Levels of Anxiety of Children Scheduled for Tooth Extraction

Authors: Marji Umil, Miane Karyle Urolaza, Ian Winston Dale Uy, John Charle Magne Valdez, Karen Elizabeth Valdez, Ervin Charles Valencia, Cheryleen Tan-Chua

Abstract:

Objective: Distraction techniques can be successful in reducing the anxiety of children during medical procedures. Dental procedures in particular are associated with dental anxiety, which has been identified as a significant and common problem in children; however, only limited studies have been conducted to address this problem. Thus, this study determined the effectiveness of computer video games on the levels of anxiety of children between 5 and 12 years old scheduled for tooth extraction. Methods: A pre-test post-test quasi-experimental study was conducted involving 30 randomly assigned subjects, 15 in the experimental group and 15 in the control group. Subjects in the experimental group played computer video games for a maximum of 15 minutes, while no intervention was given to the control group. The modified Yale Pre-operative Anxiety Scale (m-YPAS), with a Cronbach's alpha of 0.9, was used to assess anxiety at two different points: upon arrival in the clinic (pre-test anxiety) and 15 minutes after the first measurement (post-test anxiety). Paired t-tests and ANCOVA were used to analyze the gathered data. Results: Results showed a significant difference between the pre-test and post-test anxiety scores of the control group (p=0.0002), indicating increased anxiety. A significant difference was also noted between the pre-test and post-test anxiety scores of the experimental group (p=0.0002), indicating decreased anxiety. Comparatively, the experimental group showed a lower anxiety score (p<0.0001) than the control group. Conclusion: The use of computer video games is effective in reducing pre-operative anxiety among children and can be an alternative non-pharmacological intervention in pre-operative care.
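For readers unfamiliar with the paired t-test used in this analysis, it compares each child's own pre- and post-test scores rather than treating the two measurements as independent samples. A minimal Python sketch with invented anxiety scores (not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    # t statistic for paired samples: mean of the per-subject differences
    # divided by the standard error of those differences.
    diffs = [a - b for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

pre_scores = [50, 55, 60, 45, 52]   # invented m-YPAS-style scores
post_scores = [40, 48, 50, 42, 45]
t_stat = paired_t(pre_scores, post_scores)
print(round(t_stat, 2))  # 5.74
```

The statistic is then compared against the t distribution with n-1 degrees of freedom to obtain p-values of the kind reported in the abstract.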

Keywords: play therapy, preoperative anxiety, tooth extraction, video games

Procedia PDF Downloads 452
2985 A Question of Ethics and Faith

Authors: Madhavi-Priya Singh, Liam Lowe, Farouk Arnaout, Ludmilla Pillay, Giordan Perez, Luke Mischker, Steve Costa

Abstract:

An Emergency Department consultant identified the failure of medical students to complete the task of clerking a patient in its entirety. As six medical students on our first clinical placement, we recognised our own failure and endeavoured to examine why this failure was consistent among all medical students who had been given this task, despite our best motivations as adult learners. Our aim is to understand and investigate the elements which impeded our ability to learn and perform as medical students in the clinical environment, with reference to the prescribed task. We also aim to generate a discussion around the delivery of medical education, with potential solutions to these barriers. Six medical students gathered together for a comprehensive reflective discussion to identify possible factors leading to the failure of the task. First, we thoroughly analysed the delivery of the instructions with reference to the literature to identify potential flaws. We then examined personal, social, ethical, and cultural factors which may have impacted our ability to complete the task in its entirety. Through collation of our shared experiences, with support from discussions in the fields of medical education and ethics, we identified two major areas that impacted our ability to complete the set task. First, we experienced an ethical conflict: we believed the inconvenience and potential harm inflicted on patients did not justify the positive impact the patient interaction would have on our medical learning. Second, we identified a lack of confidence stemming from multiple factors, including the conflict between preclinical and clinical learning, perceptions of perfectionism in the culture of medicine, and the influence of upward social comparison. After these discussions, we found that the various factors we identified exacerbated the fears and doubts we already had about our own abilities and those of the medical education system.
This doubt led us to avoid completing certain aspects of the tasks that were prescribed and further reinforced our vulnerability and perceived incompetence. Exploration of philosophical theories identified the importance of the role of doubt in education. We propose the need for further discussion around incorporating both pedagogic and andragogic teaching styles in clinical medical education and the acceptance of doubt as a driver of our learning. Doubt will continue to permeate our thoughts and actions no matter what. The moral or psychological distress that arises from this is the key motivating factor for our avoidance of tasks. If we accept this doubt and education embraces this doubt, it will no longer linger in the shadows as a negative and restrictive emotion but fuel a brighter dialogue and positive learning experience, ultimately assisting us in achieving our full potential.

Keywords: medical education, clinical education, andragogy, pedagogy

Procedia PDF Downloads 129
2984 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations": the generation of outputs that are not grounded in the input data, which hinders adoption in production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground the LLM's responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information in the retrieved documents. However, a RAG system is not suitable for tabular data and the data analysis tasks that follow, for multiple reasons such as information loss, data format, and the retrieval mechanism. In this study, we explore a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to meet market insight and data visualization needs with high accuracy and extensive coverage, abstracting away the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding without the need for programming skills.
The implication extends beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
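The planning-and-execution pattern described above can be sketched as a loop in which a planner decomposes the question and a second agent turns each sub-task into executable code. Everything here is a stub for illustration: the two agent functions stand in for LLM calls, and the listing records are invented, not PropertyGuru data.

```python
def plan(task):
    # Planner agent (stub): decompose an analytical task into sub-tasks.
    # In the real system this would be an LLM prompt returning a plan.
    return ["load the dataset", "filter relevant rows", "aggregate the metric"]

def generate_step(subtask, rows):
    # Code-generation agent (stub): in the real system an LLM would emit a
    # code snippet per sub-task; here each sub-task maps to a callable.
    steps = {
        "load the dataset": lambda data: rows,
        "filter relevant rows": lambda data: [r for r in data if r["city"] == "Singapore"],
        "aggregate the metric": lambda data: sum(r["price"] for r in data) / len(data),
    }
    return steps[subtask]

def execute(task, rows):
    # Execute the plan step by step, feeding each output into the next step.
    data = None
    for subtask in plan(task):
        data = generate_step(subtask, rows)(data)
    return data

listings = [  # invented example records
    {"city": "Singapore", "price": 1_200_000},
    {"city": "Bangkok", "price": 300_000},
    {"city": "Singapore", "price": 800_000},
]
print(execute("average listing price in Singapore", listings))  # 1000000.0
```

The key design choice this loop illustrates is that the final answer is produced from the output of executed code rather than generated free-form, which is what constrains hallucination on tabular data.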

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 87
2983 Munting Kamay, Munting Gawa: Children's Development Training, a UCU Experience

Authors: Elizabeth A. Montero

Abstract:

The project contemplated in this study aimed particularly at enabling public school children aged ten to twelve who belong to low- and middle-income families. The pupils were provided training in communication, work, computer and social skills. In this study, the researcher hypothesized that children given the opportunity to develop a skill through guidance and proper supervision will significantly learn, improve and develop that skill. Since children's minds are highly absorbent, like a sponge taking in anything within its capacity, it is ideal and necessary that education provide a rich environment offering an array of meaningful experiences. The context of this study is well balanced, since it catered to the children's communication, work, computer and social skills.

Keywords: Munting Kamay, Munting Gawa, children’s development training, UCU experience

Procedia PDF Downloads 437
2982 Big Data Analysis with RHadoop

Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim

Abstract:

It is almost impossible to store or analyze big data, which is increasing exponentially, with traditional technologies. Hadoop is a new technology that makes this possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis on actual datasets of different sizes. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop system with the lm function and the biglm package based on bigmemory. The results showed that our RHadoop system was faster than the other packages owing to parallel processing, with the number of map tasks increasing as the data size grows.
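The speedup reported above rests on the fact that least-squares regression parallelises naturally: each map task can compute the partial sufficient statistics X'X and X'y on its own data partition, and the reducer only has to sum them and solve the normal equations. A single-machine NumPy sketch of that decomposition, with simulated data (the actual study ran this inside RHadoop on a Hadoop cluster):

```python
import numpy as np

def partial_stats(X_chunk, y_chunk):
    # "Map task": sufficient statistics for one partition of the data.
    return X_chunk.T @ X_chunk, X_chunk.T @ y_chunk

def combine_and_solve(parts):
    # "Reduce task": sum the partial statistics, solve the normal equations.
    XtX = sum(p[0] for p in parts)
    Xty = sum(p[1] for p in parts)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 2))])
y = X @ np.array([1.0, 2.0, -3.0]) + 0.01 * rng.normal(size=1000)

chunks = np.array_split(np.arange(1000), 4)  # four "map tasks"
beta_hat = combine_and_solve([partial_stats(X[i], y[i]) for i in chunks])
print(np.round(beta_hat, 2))  # close to [ 1.  2. -3.]
```

Because the partial statistics are tiny compared with the raw partitions, the reduce step stays cheap no matter how many map tasks the data is split across.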

Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop

Procedia PDF Downloads 437
2981 Arguments against Innateness of Theory of Mind

Authors: Arkadiusz Gut, Robert Mirski

Abstract:

The nativist-constructivist debate constitutes a considerable part of current research on mindreading. Peter Carruthers and his colleagues are known for their nativist position in the debate and take issue with constructivist views proposed by other researchers, with Henry Wellman, Alison Gopnik, and Ian Apperly at the forefront. More specifically, Carruthers together with Evan Westra propose a nativistic explanation of Theory of Mind Scale study results that Wellman et al. see as supporting constructivism. While allowing for development of the innate mindreading system, Westra and Carruthers base their argumentation essentially on a competence-performance gap, claiming that cross-cultural differences in Theory of Mind Scale progression as well as discrepancies between infants’ and toddlers’ results on verbal and non-verbal false-belief tasks are fully explainable in terms of acquisition of other, pragmatic, cognitive developments, which are said to allow for an expression of the innately present Theory of Mind understanding. The goal of the present paper is to bring together arguments against the view offered by Westra and Carruthers. It will be shown that even though Carruthers et al.’s interpretation has not been directly controlled for in Wellman et al.’s experiments, there are serious reasons to dismiss such nativistic views which Carruthers et al. advance. The present paper discusses the following issues that undermine Carruthers et al.’s nativistic conception: (1) The concept of innateness is argued to be developmentally inaccurate; it has been dropped in many biological sciences altogether and many developmental psychologists advocate for doing the same in cognitive psychology. 
The reality of development is a complex interaction of changing elements that is belied by the simplistic notion of 'the innate.' (2) The purported innate mindreading conceptual system posited by Carruthers ascribes adult-like understanding to infants, ignoring the difference between first- and second-order understanding, between what can be called 'presentation' and 'representation.' (3) Advances in neurobiology speak strongly against any inborn conceptual knowledge; the neocortex, where conceptual knowledge finds its correlates, is said to be largely equipotential at birth. (4) Carruthers et al.'s interpretations are excessively charitable: they extend results of studies done with 15-month-olds to conclusions about innateness, whereas in reality by that age there has been plenty of time for construction of the skill. (5) The looking-time experimental paradigm used in the non-verbal false-belief tasks that provide the main support for Carruthers' argumentation has been criticized on methodological grounds. In the light of the presented arguments, nativism in theory of mind research is concluded to be an untenable position.

Keywords: development, false belief, mindreading, nativism, theory of mind

Procedia PDF Downloads 210
2980 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network

Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan

Abstract:

Data aggregation is a helpful technique for reducing data communication overhead in wireless sensor networks. One of the important tasks in data aggregation is the positioning of the aggregator points. A great deal of work has been done on data aggregation, but the efficient positioning of the aggregator points has received little attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network. They propose an algorithm for selecting aggregator positions in a scenario where aggregator nodes are more powerful than sensor nodes.
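The abstract does not spell out the selection algorithm, but the flavour of the placement problem can be illustrated with a simple heuristic: among the candidate (more powerful) nodes, pick the one that minimises the total Euclidean distance to all sensor nodes. The coordinates below are invented for illustration, not taken from the paper.

```python
import math

def total_distance(candidate, sensors):
    # Sum of Euclidean distances from a candidate position to every sensor.
    return sum(math.dist(candidate, s) for s in sensors)

def select_aggregator(candidates, sensors):
    # Choose the candidate node with the smallest total distance, a proxy
    # for the communication cost of funnelling all readings through it.
    return min(candidates, key=lambda c: total_distance(c, sensors))

sensors = [(0, 0), (2, 0), (0, 2), (2, 2)]   # sensor node positions
candidates = [(1, 1), (5, 5)]                # powerful candidate nodes
print(select_aggregator(candidates, sensors))  # (1, 1)
```

A real deployment would replace Euclidean distance with hop count or energy cost, but the structure of the selection stays the same.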

Keywords: aggregation point, data communication, data aggregation, wireless sensor network

Procedia PDF Downloads 158
2979 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT

Authors: R. R. Ramsheeja, R. Sreeraj

Abstract:

A wide range of imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain, and kidneys. Computed Tomography (CT) is one of the most significant medical imaging modalities. In this paper, CT liver images are used to study automatic computer-aided techniques for calculating the volume of a liver tumor. A segmentation method for detecting tumors in CT scans is proposed: a Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple Region Of Interest (ROI) based method helps to characterize the different features and has a significant impact on classification performance. Due to the characteristics of liver tumor lesions, inherent difficulties appear in feature selection. For better performance, a novel system is introduced in which multiple ROI-based feature selection and classification are performed. Obtaining relevant features for the Support Vector Machine (SVM) classifier is important for good generalization performance. The proposed system improves classification performance while significantly reducing the number of features used. The diagnosis of liver cancer from computed tomography images is inherently difficult, and early detection of liver tumors is very helpful for saving human lives.
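The denoise-then-threshold pipeline can be sketched with NumPy alone. The abstract does not specify the exact adaptive thresholding rule, so Otsu's method stands in here as one common data-driven threshold choice; the "image" is a synthetic array with a bright square, not real CT data.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1D Gaussian kernel; applied along rows then columns (separable filter).
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def denoise(image, sigma=1.0):
    # Gaussian denoising via two 1D convolutions.
    k = gaussian_kernel(sigma=sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def otsu_threshold(image, nbins=64):
    # Data-driven threshold maximising between-class variance (Otsu).
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, size=(32, 32))   # noisy synthetic background
img[12:20, 12:20] += 1.0                    # synthetic bright "lesion"
smooth = denoise(img)
mask = smooth > otsu_threshold(smooth)      # binary segmentation
print(mask[15, 15], mask[2, 2])             # True False
```

Summing the mask pixels (times the voxel size) is then the basic route to the volume estimate the paper computes, before the ROI-based feature extraction and SVM classification stages.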

Keywords: computed tomography (CT), multiple region of interest(ROI), feature values, segmentation, SVM classification

Procedia PDF Downloads 509
2978 A Summary-Based Text Classification Model for Graph Attention Networks

Authors: Shuo Liu

Abstract:

In Chinese text classification tasks, redundant words and phrases can interfere with the extraction and analysis of text information, leading to a decrease in the accuracy of the classification model. To reduce irrelevant elements, extract and utilize text content information more efficiently, and improve the accuracy of text classification models, the text in the corpus is first summarized into an abstract using the TextRank algorithm; the words in the abstract are used as nodes to construct a text graph, and a graph attention network (GAT) is then used to complete the text classification task. In tests on a Chinese dataset collected from the web, the classification accuracy improved over the baseline method of building the graph structure directly from the full text.
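The TextRank step can be illustrated with a small pure-Python PageRank over a word co-occurrence graph. The toy sentences below are invented, and the GAT classification stage that the paper applies afterwards is omitted here.

```python
from collections import defaultdict
from itertools import combinations

def textrank(sentences, d=0.85, iters=30):
    # Link words that co-occur in a sentence, then run PageRank; the
    # highest-scoring words form the extracted abstract.
    graph = defaultdict(set)
    for words in sentences:
        for a, b in combinations(set(words), 2):
            graph[a].add(b)
            graph[b].add(a)
    score = {w: 1.0 for w in graph}
    for _ in range(iters):
        score = {w: (1 - d) + d * sum(score[n] / len(graph[n]) for n in graph[w])
                 for w in graph}
    return sorted(score, key=score.get, reverse=True)

sentences = [["graph", "attention", "network"],
             ["graph", "classification"],
             ["graph", "attention", "text"]]
print(textrank(sentences)[0])  # graph
```

Keeping only the top-ranked words before graph construction is what removes the redundant terms that would otherwise dilute the GAT's node features.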

Keywords: Chinese natural language processing, text classification, abstract extraction, graph attention network

Procedia PDF Downloads 100
2977 The Protection of Artificial Intelligence (AI)-Generated Creative Works Through Authorship: A Comparative Analysis Between the UK and Nigerian Copyright Experience to Determine Lessons to Be Learnt from the UK

Authors: Esther Ekundayo

Abstract:

The nature of AI-generated works makes it difficult to identify an author. Some scholars have suggested that all the players involved in a work's creation should be allocated authorship according to their respective contributions: from the programmer who creates and designs the AI, to the investor who finances it, to the user who most likely ends up creating the work in question. Others have suggested that the issue may be resolved by the UK computer-generated works (CGW) provision under Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). However, under both UK and Nigerian copyright law, only human-created works are recognised. This is usually assessed on the basis of originality, which simply means that the work must have been created as a result of its author's creative and intellectual abilities and not copied. Such works are literary, dramatic, musical and artistic works, and they have recently been a topic of discussion with regard to generative artificial intelligence (generative AI). Unlike Nigerian law, the UK CDPA recognises computer-generated works and vests their authorship in the human who made the arrangements necessary for their creation. However, 'making the necessary arrangements' was interpreted in Nova Productions Ltd v Mazooma Games Ltd similarly to the traditional authorship principle, which requires the skills of the creator to prove originality. Some commentators contend that computer-generated works complicate this issue and that AI-generated works should enter the public domain, as authorship cannot be allocated to the AI itself. Additionally, the UKIPO, recognising these issues in line with the growing AI trend, considered in a public consultation launched in 2022 whether computer-generated works should be protected at all and why, and if not, whether a new right with a different scope and term of protection should be introduced.
However, it concluded that the issue of computer-generated works would be revisited, as AI was still in its early stages. Conversely, due to recent developments in this area with regard to generative AI systems such as ChatGPT, Midjourney, DALL-E and AIVA, among others, which can produce human-like copyright creations, it is important to examine the relevant issues, which have the potential to alter traditional copyright principles as we know them. Considering that the UK and Nigeria are both common law jurisdictions but take slightly different approaches in this area, this research seeks to answer the following questions by comparative analysis: 1) Who is the author of an AI-generated work? 2) Is the UK's CGW provision worthy of emulation in Nigerian law? 3) Would a sui generis law be capable of protecting AI-generated works and their authors in both jurisdictions? The research further examines possible barriers to the implementation of such a new law in Nigeria, such as limited technical expertise and lack of awareness among policymakers, among others.

Keywords: authorship, artificial intelligence (AI), generative ai, computer-generated works, copyright, technology

Procedia PDF Downloads 97
2976 Stochastic Simulation of Random Numbers Using Linear Congruential Method

Authors: Melvin Ballera, Aldrich Olivar, Mary Soriano

Abstract:

Digital computers today must be able to generate random numbers. Computer-generated random numbers are usually not truly random: given predefined values such as the starting seed and the range, the sequence is almost predictable. Random numbers have many applications in business simulation, manufacturing, the services domain, the entertainment sector and other equally important areas, making it worthwhile to design a method that yields unpredictable random numbers. Applying stochastic simulation with the linear congruential algorithm shows that as the seed and range increase, the numbers randomly produced by the computer become more unique. If this is implemented in an environment where random numbers are much needed, the reliability of the random numbers is guaranteed.
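The linear congruential recurrence at the core of the method is compact enough to sketch directly. The sketch below is a generic illustration, not the authors' implementation; the multiplier, increment, and modulus are commonly cited textbook parameters chosen here only as an example:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.

    The parameters a, c, m are illustrative textbook values, not those
    used in the paper. Yields pseudorandom floats in [0, 1).
    """
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale the integer state into [0, 1)

# Draw a few values from a seeded generator.
gen = lcg(seed=42)
sample = [next(gen) for _ in range(3)]
```

Because the sequence is fully determined by the seed and parameters, the same seed always reproduces the same stream, which is what makes such generators pseudorandom rather than truly random.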

Keywords: stochastic simulation, random numbers, linear congruential algorithm, pseudorandomness

Procedia PDF Downloads 316
2975 Automated Server Configuration Management Using Ansible

Authors: Kartik Mahajan

Abstract:

DevOps methodologies streamline software development and operations, promoting collaboration and automation. Traditional server management often relies on manual, repetitive tasks, leading to inefficiencies, potential errors, and increased operational costs. Ansible, as a configuration management tool, presents a compelling solution for automating infrastructure management processes. This review paper explores the implementation and testing of Ansible for server management, specifically focusing on automated user account configuration. By replacing manual procedures with Ansible playbooks, we aim to optimize server management, reduce human error, and potentially mitigate operational expenses. This study offers insights into Ansible’s efficacy within a DevOps context, highlighting its potential to transform server administration practices.

Keywords: cloud, DevOps, automation, Ansible

Procedia PDF Downloads 44
2974 Artificial Intelligence in Duolingo

Authors: Jwana Khateeb, Lamar Bawazeer, Hayat Sharbatly, Mozoun Alghamdi

Abstract:

This research paper explores the idea of learning new languages through an innovative mobile-based learning technology. Throughout this paper, we discuss and examine a mobile-based application called Duolingo. Duolingo is a college-standard application for learning foreign languages such as Spanish and English. It is a smart application that uses adaptive technologies to advance the level of its students over time by offering new tasks. Furthermore, we discuss the history of the application and the methodology used within it. We conducted a study in which we surveyed ten people about their experience using Duolingo. The results are examined and analyzed, indicating the application's effectiveness for students seeking to learn new languages. The paper furthermore discusses the diverse methods and approaches to learning new languages through this mobile-based application.

Keywords: Duolingo, AI, personalized, customized

Procedia PDF Downloads 289
2973 Temperamental Determinants of Eye-Hand Coordination Formation in the Special Aerial Gymnastics Instruments (SAGI)

Authors: Zdzisław Kobos, Robert Jędrys, Zbigniew Wochyński

Abstract:

Motor activity and good health are sine qua non determinants of proper professional practice, especially in aviation. Candidates for aviation are therefore selected for psychomotor ability by specialist medical commissions, and they must also pass a physical fitness examination. During studies at the air force academy, eye-hand coordination is formed in two stages. Besides general physical education, future aircraft pilots undertake specialist training on SAGI, which includes the looping, the aerowheel, and the gyroscope. The aim of training on these apparatuses is to form the eye-hand coordination needed for tasks in the air, since such coordination is necessary to perform various figures in real flight. During the education of future pilots, determinants of the effective formation of this important parameter of human functioning are therefore sought. Several studies in sport psychology indicate an important role for temperament as a factor determining human behavior during task performance and the acquisition of operating skills. The Polish psychologist Jan Strelau refers to temperament as the basic, relatively constant personality features which manifest themselves in the formal characteristics of human behavior. Temperament, initially determined by inborn physiological mechanisms, changes in the course of maturation and under some environmental factors, and concerns the energetic level and temporal characteristics of reactions. Objectives: This study aimed at seeking a relationship between temperamental features and eye-hand coordination formation during training on SAGI. Material and Methods: A group of 30 pilotage students was examined in two situations. The first assessment of eye-hand coordination was carried out before the beginning of a 30-hour training program on SAGI, and the second after its completion. Training lasted two hours once a week.
Temperament was evaluated with the Formal Characteristics of Behavior – Temperament Inventory (FCB-TI) developed by Bogdan Zawadzki and Jan Strelau. Eye-hand coordination was assessed with a computer version of the Warsaw System of Psychological Tests. Results: It was found that training on SAGI increased the level of eye-hand coordination in the examined students. Conclusions: A higher level of eye-hand coordination was obtained after completion of the training. Moreover, the relationship between eye-hand coordination level and selected temperamental features was statistically significant.

Keywords: temperament, eye-hand coordination, pilot, SAGI

Procedia PDF Downloads 440
2972 Enhancing Knowledge and Teaching Skills of Grade Two Teachers Who Work with Children at Risk of Dyslexia

Authors: Rangika Perera, Shyamani Hettiarachchi, Fran Hagstrom

Abstract:

Dyslexia is the most common reading-related difficulty among the school-aged population, and currently 5-10% of children in Sri Lanka show features of dyslexia. As there is an insufficient number of speech and language pathologists in the country, and few speech and language pathologists work in government mainstream school settings, children at risk of dyslexia are not receiving enough quality early intervention services to develop their reading skills. As teachers are the key professionals working directly with these children, using them as the primary facilitators to improve reading skills is the most effective approach. This study aimed to identify the efficacy of two and a half days of intensive training provided to fifteen mainstream government school teachers of grade two classes. The goal of the training was to enhance their knowledge of dyslexia and to provide whole-classroom skills training that could be used to support the development of students' reading competencies. A closed-ended multiple-choice questionnaire was given to the teachers pre- and post-training to measure their knowledge of dyslexia, the areas in which these children need additional support, and the best strategies to facilitate reading competencies. The data revealed that the teachers' knowledge in all areas was significantly poorer prior to the training and clearly improved in all areas after it. The gain in the target teaching skills selected to improve children's reading was evaluated through peer feedback. Teachers were assigned to three groups and expected to model how they would introduce the skills in the recommended areas within the given tasks, using researcher-developed, validated and reliability-tested materials and the strategies introduced during the training.
Peers and the primary investigator rated the teachers' performances and gave feedback on organizational skills, presentation of materials, clarity of instruction, and appropriateness of vocabulary. After modifying their skills according to the feedback received, the teachers were expected to re-present the same tasks to the group the following day. Their skills were re-evaluated by the peers and the primary investigator using the same rubrics to measure improvement. The findings revealed a significant improvement in the development of their teaching skills. The data on both the knowledge and skills gains of the teachers were analyzed using quantitative descriptive analysis. The overall findings of the study yielded promising results that support intensive training as a method for improving teachers' knowledge and teaching skills for use in whole-class intervention with children at risk of dyslexia.

Keywords: dyslexia, knowledge, teaching skills, training program

Procedia PDF Downloads 73
2971 Evaluation and Assessment of Bioinformatics Methods and Their Applications

Authors: Fatemeh Nokhodchi Bonab

Abstract:

Bioinformatics, in its broad sense, involves the application of computational processes to solve biological problems. A wide range of computational tools is needed to process effectively and efficiently the large amounts of data being generated by recent technological innovations in biology and medicine. A number of computational tools have been developed or adapted to deal with the experimental riches of complex and multivariate data and the transition from data collection to information or knowledge. These bioinformatics tools are being evaluated and applied in various medical areas, including early detection, risk assessment, classification, and prognosis of cancer. The goal of these efforts is to develop and identify bioinformatics methods with optimal sensitivity, specificity, and predictive capability. The recent flood of data from genome sequences and functional genomics has given rise to a new field, bioinformatics, which combines elements of biology and computer science. Bioinformatics conceptualizes biology in terms of macromolecules (in the sense of physical chemistry) and then applies "informatics" techniques (derived from disciplines such as applied mathematics, computer science, and statistics) to understand and organize the information associated with these molecules on a large scale. Here we propose a definition for this new field and review some of the research being pursued, particularly in relation to transcriptional regulatory systems.

Keywords: methods, applications, transcriptional regulatory systems, techniques

Procedia PDF Downloads 127
2970 Student Feedback of a Major Curricular Reform Based on Course Integration and Continuous Assessment in Electrical Engineering

Authors: Heikki Valmu, Eero Kupila, Raisa Vartia

Abstract:

A major curricular reform was implemented at Metropolia UAS in 2014, basing teaching on larger course entities and collaborative pedagogy. The most thorough reform was conducted in the department of electrical engineering and automation technology. It has already been shown that the reform has been extremely successful with respect to student progression and drop-out rate, with much more significant improvement in this department than in the other engineering departments, which made only minor pedagogical changes. At the beginning of the spring term of 2017, a thorough student feedback project was conducted in the department. The study consisted of thirty questions about the implementation of the curriculum, the student workload, and other matters related to student satisfaction; the reply rate was more than 40%. The students were divided into four categories: first-year students [cat. 1] and students of the three different majors [categories 2-4]. These categories are valid since all students follow the same course structure in the first two semesters, after which they may freely select their major; all staff members are divided into four teams respectively. The curriculum consists of consecutive 15-credit (ECTS) courses, each taught by a group of 3-5 teachers. There are no end exams, and continuous assessment is employed. In 2014, the different teacher groups were encouraged to employ innovatively different assessment methods within the given specifications. One of these methods has since been used in categories 1 and 2: the students must complete a number of compulsory tasks each week to pass the course, and the actual grade is defined by a smaller number of tests throughout the course. The tasks vary from homework assignments, reports and laboratory exercises to larger projects, and the smaller tests are usually organized during regular lecture hours.
The teachers of the other two majors have been pedagogically more conservative, and student progression has been better in categories 1 and 2 than in categories 3 and 4. One of the main goals of this survey was to analyze the reasons for the difference and the assessment methods in detail, besides general student satisfaction. The results show that in the categories following the specified assessment model more strictly, much more versatile assessment methods are used and the basic spirit of the new pedagogy is followed. Student satisfaction is also significantly better in categories 1 and 2. It may be clearly stated that continuous assessment and teacher cooperation improve learning outcomes, student progression, and student satisfaction, while too much academic freedom seems to lead to worse results [categories 3 and 4]. A standardized assessment model will be launched for all students in autumn 2017. This model differs from the one used so far in categories 1 and 2 by allowing more flexibility to teacher groups, but it will require all teacher groups to follow the general rules in order to further improve the results and student satisfaction.

Keywords: continuous assessment, course integration, curricular reform, student feedback

Procedia PDF Downloads 203
2969 The Impact of Information and Communication Technology on Learning Quality and Conceptual Change in Moroccan High School Students

Authors: Azzeddine Atibi, Khadija El Kababi, Salim Ahmed, Mohamed Radid

Abstract:

Teaching and learning occupy a significant position globally, as the sustainable development of all sectors is intrinsically linked to the improvement of the educational system. The COVID-19 pandemic demonstrated that the integration of Information and Communication Technology (ICT) in the learning process is not optional but essential, and that proficiency in computer tools is an asset that will enhance pedagogy and ensure the continuity of learning under any circumstances. The objective of our study is to evaluate the impact of introducing computer tools on the quality of learning and the realization of conceptual change in learners. To this end, a learning situation was meticulously prepared, targeting first-year baccalaureate students in experimental sciences at a public high school, "Khadija Oum Almouminin," focusing on the chapter on glycemia regulation in the Moroccan Life and Earth Sciences (LES) curriculum. The learning situation was implemented with a pilot group that utilized computer tools and a control group that studied the same chapter without using ICT. The analysis and comparison of the results allowed us to verify the research question posed and to propose perspectives to ensure conceptual change in learners.

Keywords: information and communication technology, conceptual change, continuity of learning, life and earth sciences, glycemia regulation

Procedia PDF Downloads 38
2968 Induction Heating Process Design Using Comsol® Multiphysics Software Version 4.2a

Authors: K. Djellabi, M. E. H. Latreche

Abstract:

Induction heating computer simulation is a powerful tool for process design and optimization, induction coil design, and equipment selection, as well as for education and business presentations. The authors share their extensive experience in the practical use of computer simulation for different induction heating and heat-treating processes. This paper deals with mathematical modeling and numerical simulation of induction heating furnaces with axisymmetric geometries. For the numerical solution, we propose finite element methods (FEM) combined with boundary elements for the electromagnetic model, using COMSOL® Multiphysics software. Some numerical results for an industrial high-frequency furnace are shown.

Keywords: numerical methods, induction furnaces, induction heating, finite element method, Comsol multiphysics software

Procedia PDF Downloads 449
2967 Time-Domain Simulations of the Coupled Dynamics of Surface Riding Wave Energy Converter

Authors: Chungkuk Jin, Moo-Hyun Kim, HeonYong Kang

Abstract:

A surface riding (SR) wave energy converter (WEC) is designed, and its feasibility and performance are numerically simulated by the authors' fully coupled floater-mooring-magnet-electromagnetics dynamic analysis computer program. The biggest advantage of the SR-WEC is that its performance remains effective even in low sea states, and its structural robustness is greatly improved by simply riding along the wave surface, compared to other existing WECs. The numerical simulations and actuator testing clearly demonstrate that the concept works and that its efficiency can be improved through the optimization process.

Keywords: computer simulation, electromagnetics fully-coupled dynamics, floater-mooring-magnet, optimization, performance evaluation, surface riding, WEC

Procedia PDF Downloads 145
2966 Effects of Exposing Learners to Speech Acts in the German Teaching Material Schritte International: The Case of Requests

Authors: Wan-Lin Tsai

Abstract:

The speech act of requesting is an important issue in the field of language learning and teaching because requesting cannot be avoided in daily life. This study examined whether freshmen majoring in German at Wenzao University of Languages were able to use the linguistic forms they had learned from their course book Schritte International to make appropriate requests, elicited through discourse completion tasks (DCTs). The results revealed that the majority of the subjects were unable to use these forms to make appropriate requests in German due to the lack of explicit instruction. Furthermore, Chinese interference was observed in the students' productions. Explicit instruction in speech acts is strongly recommended.

Keywords: Chinese interference, German pragmatics, German teaching, make appropriate requests in German, speech act of requesting

Procedia PDF Downloads 465
2965 Partner Selection for Innovation Projects Related to New Product Concept Design

Authors: Odd Jarl Borch, Marina Z. Solesvik

Abstract:

The paper analyses partner selection approaches for large-scale R&D-based innovation projects at different stages of development. We emphasize innovation projects in the maritime value chain and how partners are selected to improve quality according to high-spec customer demands and to reduce investment costs in new production technology such as advanced offshore service vessels. We elaborate on the differences in innovation approach, and especially on how purposive inflows and outflows of knowledge from external partners may be used to accelerate internal innovation. We present three cases related to projects that differ in specificity and scope, and explore how the partner selection criteria change over time as the goals move from a wide scope to very specific R&D tasks.

Keywords: partner selection, innovation, offshore industry, concept design

Procedia PDF Downloads 515
2964 Analysis of the Effect of Different Automatic Sprinkler Systems on Extinguishing Scooter Fires in Underground Parking Spaces

Authors: Yu-Hsiu Li, Chun-Hsun Chen

Abstract:

This study analyzes how automatic sprinkler systems protect scooters in underground parking spaces. In Taiwan, general buildings are currently equipped mainly with foam fire-extinguishing equipment, but automatic sprinkler systems offer economic and environmental benefits as well as high stability, and China and the United States allow parking spaces to be protected by automatic sprinkler systems under certain conditions. The literature on full-scale scooter fires indicates that the average fire growth coefficient is 0.19 kW/s², which classifies a scooter fire as an ultra-fast t-squared fire growth model; an automatic sprinkler system can suppress the flame height and prevent the fire from spreading. According to the computer simulation (FDS) literature, the activation order and trend of the sprinkler heads are the same in both computer simulations and full-scale experiments. This study uses the Fire Dynamics Simulator (FDS); the simulated scenarios include different system types (enclosed wet type and open type) and different configurations. The simulation results demonstrate that the open type requires less time to extinguish the fire than the enclosed wet type: when the horizontal distance between the sprinkler and the scooter ignition source is short, the sprinkler can activate quickly, and the heat release rate of the fire can be suppressed in advance.
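The ultra-fast classification mentioned above comes from the t-squared design-fire model, in which the heat release rate grows as Q(t) = αt². A minimal sketch using the cited coefficient (the 1 MW benchmark below is an illustrative threshold, not a figure from the paper):

```python
import math

ALPHA = 0.19  # fire growth coefficient in kW/s^2, as cited in the abstract

def heat_release_rate(t, alpha=ALPHA):
    """t-squared fire growth model: Q(t) = alpha * t^2, in kW (t in seconds)."""
    return alpha * t ** 2

def time_to_reach(q_target, alpha=ALPHA):
    """Time in seconds for the fire to reach a target heat release rate (kW)."""
    return math.sqrt(q_target / alpha)

# With alpha = 0.19 kW/s^2, a 1 MW fire develops in roughly 72.5 seconds,
# which illustrates why early sprinkler activation matters for scooter fires.
t_1mw = time_to_reach(1000.0)
```

Because the heat release rate grows with the square of time, even small reductions in sprinkler activation time translate into large reductions in the fire size at activation.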

Keywords: automatic sprinkler system, underground parking space, FDS, scooter fire extinguishing

Procedia PDF Downloads 142