Search results for: computer use
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2361

561 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections

Authors: Anthony D. Rhodes, Manan Goel

Abstract:

We provide a high-fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks using a dense convolutional network with context-aware skip connections and compressed 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high-grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high-resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high-resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves state-of-the-art segmentation fidelity with high-resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
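
The exact form of the boundary loss is not given in the abstract; the following is a minimal sketch of one common way to weight a pixel-wise loss toward mask boundaries, assuming NumPy/SciPy and binary masks (the weighting scheme and parameters are illustrative assumptions, not HyperSeg's implementation).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_weighted_bce(pred, target, sigma=5.0, eps=1e-7):
    """Binary cross-entropy weighted toward pixels near the mask boundary.

    pred: predicted foreground probabilities in [0, 1], shape (H, W).
    target: binary ground-truth mask of the same shape.
    """
    # Distance of every pixel to the nearest point of the mask boundary.
    inside = distance_transform_edt(target)
    outside = distance_transform_edt(1 - target)
    dist_to_boundary = np.minimum(inside, outside)

    weight = 1.0 + np.exp(-dist_to_boundary / sigma)   # larger near edges
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float(np.mean(weight * bce))
```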

Keywords: computer vision, object segmentation, interactive segmentation, model compression

Procedia PDF Downloads 120
560 The Correlation Between Self-Talk and COVID-19

Authors: Abigail Vallance

Abstract:

Current research shows a correlation between declining mental health in the United States and the effect of COVID-19 on young adults and adolescents. Anxiety and depression are the two most common psychiatric illnesses, which are also the leading impediments to academic success. Spending six hours a day or more using computers is associated with higher risks of depression, and this amount of screen time is pervasive even in present-day academia. Along with many hours on the computer, common issues COVID-19 created for students' academic performance during online school included technical difficulties, poor support services, and difficulty adapting to online learning. Given the volume of requirements with unrealistic deadlines, and on top of experiencing COVID-19 itself, students showed an increase in their levels of anxiety. Besides the prevalent effect of COVID-19 on mental health, many studies show a correlation between mental health, COVID-19, academia, and sports performance. Academic research showed that negative self-talk, in relation to one's self-efficacy, correlated with poorer academic performance; students who reported negative self-efficacy when test-taking obtained poorer test results. Furthermore, in sports performance, negative effects were found when athletes engaged in negative self-talk. Overall, motivational self-talk, by oneself and through teammates and coaches, correlated with better performance in sports than regular self-talk. In relation to sports performance, the COVID-19 pandemic canceled complete sports seasons for millions of adolescents across the country. Many student-athletes use their sport to release emotions and cope with mental health struggles, but this outlet was taken away. The purpose of this study is to address the current increase in mental health diagnoses in adolescents, including suicide rates, after the COVID-19 pandemic began in 2020. This literature analysis is ongoing.

Keywords: self-talk, COVID-19, mental health, adolescents

Procedia PDF Downloads 56
559 A Step Magnitude Haptic Feedback Device and Platform for Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training

Authors: Biki Sarmah, Priyanko Raj Mudiar

Abstract:

In the modern world of remotely interactive virtual reality-based learning and teaching, including professional skill-building training and acquisition practices, as well as data acquisition and robotic systems, the application of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques in 3D holographic audio-visual platforms has been a coveted dream of many scholars, professionals, scientists, and students. Integration of kinaesthetic vibrotactile haptic perception along with an actuated step-magnitude contact profiloscopy in augmented reality-based learning platforms and professional training can be implemented by using a carefully calculated and well-coordinated image telemetry scheme, including remote data mining and control techniques. A real-time, computer-aided (PLC-SCADA) field-calibration-based algorithm must be designed for this purpose. Most importantly, in order to actually realise, and to 'interact' with, 3D holographic models displayed on a remote screen using remote laser image telemetry and control, all spatio-physical parameters, such as cardinal alignment, gyroscopic compensation, surface profile, and thermal composition, must be implemented using zero-order, type-1 actuators (or transducers), because they provide zero hysteresis, zero backlash, and low dead time, as well as a linear, fully controllable, intrinsically observable, and smooth response with the least error compensation, while ensuring the best possible ergonomic comfort for the users.

Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator

Procedia PDF Downloads 166
558 Application of Industrial Ergonomics in Vehicle Service System Design

Authors: Zhao Yu, Zhi-Nan Zhang

Abstract:

More and more interactive devices are used in transportation service systems. Mobile phones, on-board computers, and Head-Up Displays (HUDs) can all serve as tools of the in-car service system. People can access smart systems with different terminals such as mobile phones, computers, pads, and even their cars and watches. Different forms of terminals bring different qualities of interaction through the various human-computer interaction modes. These new interactive devices require good ergonomics design at each stage of the whole design process. Based on the theory of human factors and ergonomics, this paper compares three types of interactive devices across four driving tasks. Forty-eight drivers were chosen to experience the three interactive devices (mobile phones, on-board computers, and HUDs) in a simulated driving process. The subjects evaluated ergonomics performance and subjective workload after the process, and they were encouraged to offer suggestions for improving the interactive devices. The results show that different interactive devices have different advantages in driving tasks, especially in non-driving tasks such as information and entertainment. Compared with the mobile phone and on-board computer groups, the HUD group had shorter response times in most tasks. Performance in the slow-down and emergency-braking tasks was less accurate than that of the control group, which may be because the haptic feedback of these two tasks is harder to distinguish than visual information. Simulated driving is also helpful in improving the design of in-vehicle interactive devices. The paper summarizes the ergonomics characteristics of three in-vehicle interactive devices, and the research provides a reference for the future design of in-vehicle interactive devices through an ergonomic approach, to ensure a good interaction relationship between the driver and the in-vehicle service system.

Keywords: human factors, industrial ergonomics, transportation system, usability, vehicle user interface

Procedia PDF Downloads 139
557 Hardware-In-The-Loop Relative Motion Control: Theory, Simulation and Experimentation

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper presents a Guidance and Control (G&C) strategy to address the spacecraft maneuvering problem for future Rendezvous and Docking (RVD) missions. The proposed strategy allows safe and propellant-efficient trajectories for space servicing missions, including tasks such as approaching, inspecting, and capturing. This work provides the validation test results of the G&C laws using a Hardware-In-the-Loop (HIL) setup with two robotic mock-ups representing the chaser and the target spacecraft. The paper first summarizes the challenges of relative motion control in space, in particular the constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Second, the proposed algorithm is introduced by presenting the formulation of constrained Model Predictive Control (MPC), which optimizes the fuel consumption and explicitly handles the physical and geometric constraints in the system, e.g., thruster or Line-Of-Sight (LOS) constraints. Additionally, the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description and explained accordingly. The resulting convex optimization problem allows real-time implementation, based on a detailed discussion of the computational time requirements and the obtained results with respect to current onboard computers and future trends in space processor capabilities. Finally, the performance of the algorithm is presented in the scope of a potential future mission and of the available equipment. The results also cover a comparison of the proposed algorithm with a Linear-Quadratic Regulator (LQR)-based control law to highlight the clear advantages of the MPC formulation.
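
As a rough illustration of the constrained MPC formulation described above, the sketch below sets up a single fuel-aware MPC solve with the cvxpy modelling library, using a simplified discretised double-integrator model of relative translational motion; the dual-quaternion attitude coupling and LOS constraints of the paper are omitted, and all numbers are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

dt, N = 2.0, 30                                   # step [s], horizon length
A = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])     # discretised double integrator
B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])

x0 = np.array([50.0, 0.0, 0.0, 0.0, 0.0, 0.0])    # 50 m ahead of the target, at rest
u_max = 0.1                                       # thrust acceleration bound [m/s^2]

x = cp.Variable((6, N + 1))                       # relative position and velocity
u = cp.Variable((3, N))                           # commanded accelerations
cost = cp.sum(cp.abs(u))                          # L1 cost as a proxy for propellant
constraints = [x[:, 0] == x0, x[:, N] == 0]       # start state, docked terminal state
for k in range(N):
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.norm(u[:, k], "inf") <= u_max]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first commanded acceleration [m/s^2]:", u.value[:, 0])
```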

Keywords: autonomous vehicles, embedded optimization, real-time experiment, rendezvous and docking, space robotics

Procedia PDF Downloads 124
556 The Proton Flow Battery for Storing Renewable Energy: A Theoretical Model of Electrochemical Hydrogen Storage in an Activated Carbon Electrode

Authors: Sh. Heidari, A. J. Andrews, A. Oberoi

Abstract:

Electrochemical storage of hydrogen in activated carbon electrodes as part of a reversible fuel cell offers a potentially attractive option for storing surplus electrical energy from inherently variable solar and wind energy resources. Such a system – which we have called a proton flow battery – promises to have a round-trip energy efficiency comparable to lithium-ion batteries, while having higher gravimetric and volumetric energy densities. In this paper, a theoretical model is presented of the process of H+ ion (proton) conduction through an acid electrolyte into a highly porous activated carbon electrode, where it is neutralised and adsorbed on the inner surfaces of pores. A Butler-Volmer type equation relates the rate of adsorption to the potential difference between the activated carbon surface and the electrolyte. This model for the hydrogen storage electrode is then incorporated into a more general computer model, based on MATLAB software, of the entire electrochemical cell, including the oxygen electrode. Hence a theoretical voltage-current curve is generated for given input parameters for a particular activated carbon electrode. It is shown that the theoretical V-I curves produced by the model can be fitted accurately to experimental data from an actual electrochemical cell with the same characteristics. By obtaining the best-fit values of input parameters, such as the exchange current density and charge transfer coefficient for the hydrogen adsorption reaction, an improved understanding of the adsorption reaction is obtained. This new model will assist in designing improved proton flow batteries for storing solar and wind energy.
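
A minimal sketch of the Butler-Volmer type relation mentioned above, evaluated in Python; the exchange current density, charge transfer coefficients, and overpotentials are illustrative values, not the fitted parameters of the paper.

```python
import numpy as np

F = 96485.0   # Faraday constant [C/mol]
R = 8.314     # gas constant [J/(mol K)]
T = 298.15    # temperature [K]

def butler_volmer(eta, i0=1e-3, alpha_a=0.5, alpha_c=0.5):
    """Current density [A/cm^2] as a function of overpotential eta [V]."""
    return i0 * (np.exp(alpha_a * F * eta / (R * T))
                 - np.exp(-alpha_c * F * eta / (R * T)))

for eta in (0.01, 0.05, 0.10):
    print(f"eta = {eta:.2f} V  ->  i = {butler_volmer(eta):.4e} A/cm^2")
```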

Keywords: electrochemical hydrogen storage, proton flow battery, Butler-Volmer equation, activated carbon

Procedia PDF Downloads 500
555 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continued existence of human life, as humans depend on it directly for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by the use of computer vision technology and image processing to determine the soil nutrient composition and the right amount, right time, and right place for the application of farm inputs such as fertilizers, herbicides, and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in the areas of image processing and data processing, which has been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetation need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, the deep learning approach of convolutional neural networks, which has generated excellent results for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results on the developed model yielded an average accuracy of 99.58%.
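
As an illustration of the kind of convolutional classifier described above, the sketch below builds a small Keras model for image patches; the layer sizes, input resolution, and number of nutrient classes are assumptions for illustration, not the architecture used in the paper.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # assumed number of soil-nutrient classes

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
```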

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 316
554 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels

Authors: Foad Hassaninejadafarahani, Scott Ormiston

Abstract:

Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapor (or gas-vapor mixture) and a downward flow of the liquid film. Understanding this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapor-gas mixture (or pure vapor), which retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapor core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modeling is a significant step beyond current capabilities because it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation is based on a finite volume method and a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels. The results include velocity and pressure profiles, as well as axial variations of film thickness, Nusselt number, and interface gas mass fraction.

Keywords: reflux condensation, CFD, two-phase flow, Nusselt number

Procedia PDF Downloads 363
553 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems

Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer

Abstract:

This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed-loop control must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem such that the optimal trade-offs among these design goals are found. To the authors' best knowledge, such a problem has not been studied in multi-objective settings so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm), and the dominance concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (called the Pareto set) which represent the optimal trade-offs among the selected design criteria. The image of the Pareto set in objective space is called the Pareto front. The solution set is presented to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for the multi-objective optimal design of multi-loop control systems.
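
The dominance concept used to extract the Pareto set can be sketched independently of NSGA-II itself; the snippet below applies a standard Pareto-dominance test to a few toy objective vectors (settling time, noise sensitivity, and control energy are assumed, minimised objectives).

```python
import numpy as np

def dominates(a, b):
    """True if design a Pareto-dominates design b (all objectives minimised)."""
    a, b = np.asarray(a), np.asarray(b)
    return np.all(a <= b) and np.any(a < b)

def pareto_set(points):
    """Return the non-dominated points of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy objective vectors: (settling time, noise sensitivity, control energy)
designs = [(1.2, 0.30, 5.0), (0.9, 0.45, 6.5), (1.2, 0.35, 5.5), (2.0, 0.20, 4.0)]
print(pareto_set(designs))   # the third design is dominated by the first
```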

Keywords: cascade control, multi-loop control systems, multi-objective optimization, optimal control

Procedia PDF Downloads 153
552 A Novel Methodology for Browser Forensics to Retrieve Searched Keywords from Windows 10 Physical Memory Dump

Authors: Dija Sulekha

Abstract:

Nowadays, a good percentage of reported cybercrimes involve the use of the Internet, directly or indirectly, in committing the crime. Usually, web browsers leave traces of browsing activities on the host computer's hard disk, which can be used by investigators to identify the Internet-based activities of the suspect. But criminals involved in organized crime may disable the browser's file-generation features to hide the evidence while carrying out illegal activities through the Internet. In such cases, even though browser files are not generated in the storage media of the system, traces of recent and ongoing activities are generated in the physical memory of the system. As a result, the analysis of a physical memory dump collected from the suspect's machine retrieves a wealth of forensically crucial information related to the browsing history of the suspect. This information enables cyber forensic investigators to concentrate on a few highly relevant artefacts while doing the offline forensic analysis of storage media. This paper addresses the reconstruction of web browsing activities by conducting live forensics to identify searched terms, downloaded files, visited sites, email headers, email IDs, etc., from the physical memory dump collected from Windows 10 systems. Well-known entry points are available for retrieving all the above artefacts except searched terms. The paper describes a novel methodology to retrieve the searched terms from a Windows 10 physical memory dump. The searched terms retrieved in this way can be used for advanced file and keyword searches in the storage media files reconstructed from the file system recovery in offline forensics.
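
As a rough illustration of keyword carving from a raw memory image, the sketch below scans a dump for URL-encoded query parameters; the parameter names and chunked scanning strategy are illustrative assumptions and do not reproduce the entry points or methodology of the paper.

```python
import re

# Pattern for URL-encoded query parameters such as "...search?q=term+one&...".
# The parameter names are illustrative assumptions.
QUERY_PATTERN = re.compile(rb"[?&](?:q|query|search_query)=([^&\s\x00]{3,100})")

def extract_search_terms(dump_path, chunk_size=64 * 1024 * 1024):
    """Yield decoded candidate search terms found in a raw memory dump file."""
    with open(dump_path, "rb") as f:
        while chunk := f.read(chunk_size):
            for match in QUERY_PATTERN.finditer(chunk):
                term = match.group(1).replace(b"+", b" ")
                yield term.decode("utf-8", errors="replace")
            # Note: a match split across two chunks would be missed; a real
            # tool would keep a small overlap between consecutive chunks.

# Example usage (hypothetical dump file name):
# for term in extract_search_terms("memdump.raw"):
#     print(term)
```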

Keywords: browser forensics, digital forensics, live forensics, physical memory forensics

Procedia PDF Downloads 116
551 Software-Defined Networking: A New Approach to Fifth Generation Networks: Security Issues and Challenges Ahead

Authors: Behrooz Daneshmand

Abstract:

Software-Defined Networking (SDN) is designed to meet the future needs of 5G mobile networks. The SDN architecture offers a new solution that involves separating the control plane from the data plane, which are usually coupled together. Network functions traditionally performed on specific hardware can now be abstracted and virtualized on any device, and a centralized software-based administration approach based on a central controller facilitates the development of modern applications and services. These design principles pave the way for a more flexible, faster, and more dynamic network under software control compared with a conventional network. We believe SDN provides new research opportunities for security, and it can significantly affect network security research in many different ways. Consequently, the SDN architecture enables systems to effectively monitor traffic and analyze threats in order to facilitate security policy modification and security service insertion. The separation of the data and control planes, however, opens up security challenges, such as man-in-the-middle attacks (MITM), denial-of-service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to each layer of SDN: the application layer, the southbound and northbound interfaces, the controller layer, and the data layer. From a security point of view, the components that make up the SDN architecture have a number of vulnerabilities, which may be exploited by attackers to perform malicious activities and hence affect the network and its services. Software-defined network attacks are, unfortunately, a reality these days. In a nutshell, this paper highlights architectural weaknesses and develops attack vectors at each layer, leading to conclusions about further progress in identifying the consequences of attacks and proposing mitigation strategies.

Keywords: software-defined networking, security, SDN, 5G/IMT-2020

Procedia PDF Downloads 99
550 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning

Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho

Abstract:

Image super-resolution is a common computer vision problem with many important applications. Generative adversarial networks (GANs) have enabled remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly the generators) lead to performance degradation and increased energy consumption, making them difficult to deploy on resource-constrained devices. To address this problem, in this paper we introduce an optimized and highly efficient architecture for the SR-GAN (generator) model by utilizing model compression techniques, namely knowledge distillation and pruning, which work together to reduce the storage requirements of the model while improving its performance. Our method begins with distilling the knowledge from a large pre-trained model into a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks.
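
A minimal sketch of the two compression steps described above, written with PyTorch; the pixel-wise distillation loss, the blending weight, and the pruning ratio are illustrative assumptions rather than the paper's exact loss functions or schedule.

```python
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def distillation_loss(student_sr, teacher_sr, hr_target, alpha=0.5):
    """Blend pixel-wise imitation of the teacher with the ordinary SR loss."""
    imitation = F.l1_loss(student_sr, teacher_sr)   # follow the large generator
    task = F.l1_loss(student_sr, hr_target)         # match the ground-truth HR image
    return alpha * imitation + (1.0 - alpha) * task

def magnitude_prune(model, amount=0.18):
    """Remove the smallest-magnitude weights of every conv/linear layer."""
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")          # make the sparsity permanent
    return model
```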

Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning

Procedia PDF Downloads 96
549 TutorBot+: Automatic Programming Assistant with Positive Feedback based on LLMs

Authors: Claudia Martínez-Araneda, Mariella Gutiérrez, Pedro Gómez, Diego Maldonado, Alejandra Segura, Christian Vidal-Castro

Abstract:

The purpose of this document is to showcase preliminary work in developing an EduChatbot-type tool and measuring the effects of its use, aimed at providing effective feedback to students in programming courses. This bot, hereinafter referred to as tutorBot+, was built on top of ChatGPT and is tasked with assisting and delivering timely positive feedback to students in the field of computer science at the Universidad Católica de Concepción. The proposed working method consists of four stages: (1) immersion in the domain of Large Language Models (LLMs), (2) development of the tutorBot+ prototype and its integration, (3) experiment design, and (4) intervention. The first stage involves a literature review on the use of artificial intelligence in education and the evaluation of intelligent tutors, as well as research on types of feedback for learning and on the domain of ChatGPT. The second stage encompasses the development of tutorBot+, and the final stage involves a quasi-experimental study with students from the Programming and Database labs, where the learning outcome involves the development of computational thinking skills, enabling the use and measurement of the tool's effects. The preliminary results of this work are promising: a functional chatbot prototype has been developed, in both conversational and non-conversational versions, integrated into an open-source online judge and programming contest platform. We are also exploring the possibility of generating a custom model, based on a pre-trained one, tailored to the domain of programming. This includes the integration of the created tool and the design of the experiment to measure its utility.
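
A minimal sketch of how a tutoring bot might request timely, positive feedback from a chat model through the OpenAI Python client; the prompt wording, rubric, and model name are hypothetical and do not reproduce the tutorBot+ implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a programming tutor. Give concise, encouraging feedback: first "
    "point out what the student did well, then suggest one concrete "
    "improvement. Never give the full solution."
)

def positive_feedback(problem_statement: str, student_code: str) -> str:
    """Ask the chat model for timely, positive feedback on a submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Problem:\n{problem_statement}\n\nSubmission:\n{student_code}"},
        ],
    )
    return response.choices[0].message.content
```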

Keywords: assessment, chatGPT, learning strategies, LLMs, timely feedback

Procedia PDF Downloads 68
548 Greenhouse Controlled with Graphical Plotting in Matlab

Authors: Bruno R. A. Oliveira, Italo V. V. Braga, Jonas P. Reges, Luiz P. O. Santos, Sidney C. Duarte, Emilson R. R. Melo, Auzuir R. Alexandria

Abstract:

This project aims to build a controlled greenhouse, that is, a structure in which a given range of temperatures (°C), produced by the radiation emitted by an incandescent lamp, can be maintained as previously defined, characterizing a kind of on-off control; its differential is the plotting of temperature-versus-time graphs with MATLAB software via serial communication. In this way, it is possible to connect the greenhouse to a computer and monitor its parameters. The control was implemented with a PIC 16F877A microcontroller, which converts analog signals to digital, performs serial communication through the MAX232 IC, and drives signal transistors. The language used to program the PIC is Basic. There is also a cooling system consisting of two 12 V fans mounted on the lateral structure, one used for ventilation and the other for exhausting air. An LM35DZ sensor is used to measure the internal temperature. Another mechanism used in the greenhouse construction comprises a reed switch and a magnet, whose function is to detect the door position; a signal is sent to a buzzer when the door is open. There are also LEDs that help identify the operating state of the greenhouse. To facilitate human-machine communication, an LCD display shows the real-time temperature and other information. Taking into account the limitations of the construction materials and of the current-carrying structure, the design operates without major problems up to approximately 65 to 70 °C. The project is effective under these conditions, that is, when one wishes to obtain information about a given material tested at temperatures that are not very high. The automation of the greenhouse facilitates temperature control and provides a structure that ensures a suitable environment for the most diverse applications.
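
A minimal sketch of the PC-side monitoring described above, written in Python with pyserial and matplotlib rather than MATLAB; it reads the temperature streamed by the PIC, applies on-off logic with a small hysteresis band for illustration (in the project the on-off control runs on the PIC itself), and plots temperature versus time. The port name, baud rate, set-point, and command bytes are assumptions.

```python
import serial                     # pyserial
import matplotlib.pyplot as plt

SETPOINT, HYSTERESIS = 67.0, 1.5  # deg C, illustrative values

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)   # assumed port and baud rate
times, temps = [], []

for t in range(600):              # assumes one temperature line per second
    line = ser.readline().decode(errors="ignore").strip()
    try:
        temp = float(line)
    except ValueError:
        continue
    # On-off decision with hysteresis (shown here for illustration only).
    if temp >= SETPOINT + HYSTERESIS:
        ser.write(b"0")           # assumed 'lamp off' command byte
    elif temp <= SETPOINT - HYSTERESIS:
        ser.write(b"1")           # assumed 'lamp on' command byte
    times.append(t)
    temps.append(temp)

ser.close()
plt.plot(times, temps)
plt.xlabel("time (s)")
plt.ylabel("temperature (deg C)")
plt.title("Greenhouse temperature versus time")
plt.show()
```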

Keywords: greenhouse, microcontroller, temperature, control, MATLAB

Procedia PDF Downloads 402
547 Automatic Near-Infrared Image Colorization Using Synthetic Images

Authors: Yoganathan Karthik, Guhanathan Poravi

Abstract:

Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.

Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data

Procedia PDF Downloads 43
546 A Gamification Teaching Method for Software Measurement Process

Authors: Lennon Furtado, Sandro Oliveira

Abstract:

The importance of an effective measurement program lies in the ability to control and predict what can be measured. Thus, a measurement program gives an organization a basis for decision-making that supports its interests. Therefore, an effective measurement program can only be applied with a team of software engineers well trained in the measurement area. However, the literature indicates that there are few computer science courses that include the teaching of the software measurement process in their curricula, and even these generally present only basic theoretical concepts of the process and little or no measurement practice, which results in students' lack of motivation to learn the measurement process. In this context, according to some experts in software process improvement, one of the most widely used approaches to maintaining motivation and commitment to a software process improvement program is gamification. Therefore, this paper presents a proposal for teaching the measurement process through gamification, which seeks to improve student motivation and performance in assimilating tasks related to software measurement by incorporating game elements into the practice of the measurement process, making it more attractive for learning. To validate the proposal, a comparison will be made between two distinct groups of 20 students from a Software Quality class: a control group, consisting of students who will not use the gamification proposal to learn the software measurement process, and an experiment group, consisting of students who will. The paper will then analyze the objective and subjective results of each group: the objective result is the grade each student reaches at the end of the course, and the subjective result is a post-course questionnaire collecting each student's opinion of the teaching method. Finally, this paper aims to support or refute the following hypothesis: the gamification proposal for teaching the software measurement process appropriately motivates students, giving them the competence necessary for the practical application of the measurement process.

Keywords: education, gamification, software measurement process, software engineering

Procedia PDF Downloads 314
545 Advanced Analysis on Dissemination of Pollutant Caused by Flaring System Effect Using Computational Fluid Dynamics (CFD) Fluent Model with WRF Model Input in Transition Season

Authors: Benedictus Asriparusa

Abstract:

In the oil industry, oil production is accompanied by associated natural gas. A large amount of this energy is wasted, mostly in developing countries, while contributing to global warming. This research presents an overview of methods employed in the Minas area by researchers at PT. Chevron Pacific Indonesia to determine ways of measuring and drastically reducing gas flaring and its emissions. It provides an approach that includes analytical studies, numerical studies, modeling, computer simulations, etc. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations. This burning occurs at the end of a flare stack or boom. The combustion process releases emissions of greenhouse gases such as NO2, CO2, SO2, etc. This condition affects the air and environment around the industrial area; therefore, a simulation is needed to map the pattern of pollutant dissemination. This research paper has been written to examine trends in gas flaring models and current developments in order to identify the dominant variable that influences the dissemination of pollutants. The Fluent model is used to simulate the distribution of pollutant gas coming out of the stack, while WRF model output is used to overcome the limitations of the meteorological data and atmospheric conditions in the study area. The study focuses on the transition season of 2012 in the Minas area. The goal of the simulation is to find the exact time that most influences the dissemination of pollutants. The most influential factor is divided into two main cases: the quickest wind and the slowest wind. According to the simulation results, the quickest wind disperses pollutants horizontally, while the slowest wind disperses them vertically.

Keywords: flaring system, fluent model, dissemination of pollutant, transition season

Procedia PDF Downloads 380
544 A Compact Extended Laser Diode Cavity Centered at 780 nm for Use in High-Resolution Laser Spectroscopy

Authors: J. Alvarez, J. Pimienta, R. Sarmiento

Abstract:

Diode lasers operating in free-running mode exhibit frequency shifts and line broadening determined by external factors such as temperature, current, or mechanical vibrations, which makes them less useful in applications such as spectroscopy, metrology, and the cooling of atoms, among others. Different configurations can reduce the spectral width of a laser; one of the most effective is to extend the optical resonator of the laser diode and use optical feedback, either with the help of a partially reflective mirror or with a diffraction grating. The latter configuration not only reduces the spectral width of the laser line but also allows coarse adjustment of its working wavelength, within a wide range of typically ~10 nm, by slightly varying the angle of the diffraction grating. Two settings are commonly used for this purpose, the Littrow configuration and the Littman-Metcalf configuration. In this paper, we present the design, construction, and characterization of a compact extended laser cavity in the Littrow configuration. The designed cavity is compact and was machined from an aluminum block using computer numerical control (CNC); it has a mass of only 380 g. The design was tested on laser diodes with different wavelengths, 650 nm, 780 nm, and 795 nm, but can be equally efficient at other wavelengths. This report details the results obtained from the extended cavity working at a wavelength of 780 nm, with an output power of around 35 mW and a linewidth of less than 1 MHz. The cavity was used to observe the spectrum of the corresponding Rubidium D2 line. By modulating the current and with the help of phase detection techniques, a dispersion signal with an excellent signal-to-noise ratio was generated, which allowed the stabilization of the laser to a transition of the hyperfine structure of Rubidium with a proportional-integral (PI) controller circuit made with precision operational amplifiers.
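
For reference, the Littrow condition that sets the feedback wavelength can be worked through as follows; the grating line density is an assumption for illustration, since the abstract does not state it.

```latex
% Littrow condition: the first diffraction order is retro-reflected into the
% diode, which happens when  m\lambda = 2 d \sin\theta .
% For m = 1, \lambda = 780 nm and an assumed 1800 lines/mm grating
% (d \approx 555.6 nm, a common choice not stated in the abstract):
\theta = \arcsin\!\left(\frac{m\lambda}{2d}\right)
       = \arcsin\!\left(\frac{780\ \mathrm{nm}}{2 \times 555.6\ \mathrm{nm}}\right)
       \approx 44.6^{\circ}
```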

Keywords: Littrow, Littman-Metcalf, line width, laser stabilization, hyperfine structure

Procedia PDF Downloads 227
543 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for the structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and the evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of its structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it forms the basis of our application. The reliability assessment process included the following stages, which were reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into an orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the “weights” of the elements of the system. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.
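
Stage 5 of the procedure, obtaining a reliability polynomial once the success paths are known, can be illustrated with inclusion-exclusion over minimal path sets, which yields the same result as orthogonalizing the DNF; the element probabilities and paths below are a toy example, not data from the application.

```python
from itertools import combinations

p = {1: 0.95, 2: 0.90, 3: 0.85, 4: 0.80}          # element reliabilities (toy values)
min_paths = [{1, 2}, {3, 4}, {1, 4}]               # shortest paths of successful functioning

def path_prob(elements):
    prob = 1.0
    for e in elements:
        prob *= p[e]
    return prob

def system_reliability(paths):
    """P(at least one minimal path works), assuming independent elements."""
    total = 0.0
    for k in range(1, len(paths) + 1):
        for combo in combinations(paths, k):
            union = set().union(*combo)            # elements appearing in the chosen paths
            total += (-1) ** (k + 1) * path_prob(union)
    return total

print(f"system reliability = {system_reliability(min_paths):.4f}")
```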

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements

Procedia PDF Downloads 66
542 Challenges of Translation Knowledge for Pediatric Rehabilitation Technology

Authors: Patrice L. Weiss, Barbara Mazer, Tal Krasovsky, Naomi Gefen

Abstract:

Knowledge translation (KT) involves the process of applying the most promising research findings to practical settings, ensuring that new technological discoveries enhance healthcare accessibility, effectiveness, and accountability. This perspective paper aims to discuss and provide examples of how the KT process can be implemented during a time of rapid advancement in rehabilitation technologies, which have the potential to greatly influence pediatric healthcare. The analysis is grounded in a comprehensive systematic review of literature, where key studies from the past 34 years were carefully interpreted by four expert researchers in scientific and clinical fields. This review revealed both theoretical and practical insights into the factors that either facilitate or impede the successful implementation of new rehabilitation technologies. By utilizing the Knowledge-to-Action cycle, which encompasses the knowledge creation funnel and the action cycle, we demonstrated its application in integrating advanced technologies into clinical practice and guiding healthcare policy adjustments. We highlighted three successful technology applications: powered mobility, head support systems, and telerehabilitation. Moreover, we investigated emerging technologies, such as brain-computer interfaces and robotic assistive devices, which face challenges related to cost, durability, and usability. Recommendations include prioritizing early and ongoing design collaborations, transitioning from research to practical implementation, and determining the optimal timing for clinical adoption of new technologies. In conclusion, this paper informs, justifies, and strengthens the knowledge translation process, ensuring it remains relevant, rigorous, and significantly contributes to pediatric rehabilitation and other clinical fields.

Keywords: knowledge translation, rehabilitation technology, pediatrics, barriers, facilitators, stakeholders

Procedia PDF Downloads 21
541 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach

Authors: Yusuf Garba Baba

Abstract:

The construction industry and the housing subsector are fraught with risks that have the potential to negatively impact the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks have been managed. The subsector's recent paradigm shift to a formal risk management approach, in contrast to the hitherto-used rules of thumb, means that risks must not only be identified but also properly assessed and responded to in a systematic manner. The study focused on identifying risks associated with housing development projects and on a prioritization assessment of the identified risks in order to provide a basis for informed decisions. The study used a three-step identification framework: a review of the literature on similar projects, expert consultation, and a questionnaire-based survey to identify potential risk factors. A Delphi survey method was employed to carry out the relative prioritization assessment of the risk factors using computer-based Analytic Hierarchy Process (AHP) software. The results show that 19 out of the 50 risks significantly impact housing development projects. The study concludes that although a significant number of risk factors have been identified as relevant to and impacting housing construction projects, the economic risk group and, in particular, 'changes in demand for houses' is prioritized by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption and use of the combination of the multi-technique identification framework and the AHP prioritization assessment methodology as a suitable model for the assessment of risks in housing development projects.
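
The core AHP step, deriving a priority vector from a pairwise-comparison matrix and checking its consistency, can be sketched as follows; the 3x3 matrix is a toy example, not the study's survey data.

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],          # pairwise comparisons of three risk factors
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # priority vector, sums to 1

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
print("priorities:", np.round(weights, 3), " CR =", round(ci / ri, 3))
```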

Keywords: risk management, risk identification, risk analysis, analytic hierarchy process

Procedia PDF Downloads 118
540 Quality Analysis of Vegetables Through Image Processing

Authors: Abdul Khalique Baloch, Ali Okatan

Abstract:

The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify gaps, propose an improved approach, design the algorithm, and develop software to measure quality from images, where the image-based accuracy shows better results, and we compare the results with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. We focus on sorting food and vegetables from images: the application sorts and grades them after processing the images, producing fewer errors than human-based manual grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a major role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This document is about the quality detection of fruits and vegetables using images. Many customers suffer because of unhealthy food and vegetables from suppliers, and no proper quality measurement standard is followed by hotel management. We have developed software to measure the quality of fruits and vegetables from images; it indicates whether the fruits and vegetables are fresh or rotten. Algorithms reviewed in this work include digital image processing, ResNet, VGG16, CNN, and transfer learning for grading and feature extraction. The application uses an open-source image dataset, is implemented in Python, and defines a framework for the system.
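
As an illustration of the VGG16 transfer-learning setup mentioned above, the sketch below freezes a pre-trained backbone and adds a small fresh-versus-rotten head; the input size, head layers, and class count are assumptions, not the configuration used in the work.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep the pre-trained features frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),   # assumed classes: fresh vs. rotten
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```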

Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria

Procedia PDF Downloads 70
539 Linguistics and Islamic Studies in Historical Perspective: The Case of Interdisciplinary Communication

Authors: Olga Bernikova, Oleg Redkin

Abstract:

Islamic Studies and the Arabic language have been indivisible from each other since the advent of Islam and the formation of the Classical language. The present paper demonstrates the correlation between linguistics and religion in historical perspective with regard to the peculiarities of the Arabic language that distinguish it from the other prophetic languages. In historical perspective, the Arabic language has been and remains a tool for the expression of Islamic rhetoric, being a prophetic language. No other language in the world has preserved its stability for more than 14 centuries, and Islam is considered to be one of the most important factors securing this stability. The analysis and study of the text of the Qurʾān are of special importance for those who study Islamic civilization, its role in the destinies of mankind, and its values and virtues. Without understanding the polyphony of this sacred text and the indivisible unity of its form and content, it is impossible to understand social developments in both the present and the past. Since the first years of Islam, the Qurʾān has been at the center of attention of Muslim scholars: theologians, historians, philologists, jurists, and mathematicians. Only quite recently has it become an object of analysis for specialists in computer technologies. In Arabic and Islamic studies, mediaeval texts, i.e., textual documents, are considered the main source of information. Hence, the analysis of the multiplicity of various texts and the finding of interconnections between them help to assemble scattered fragments of the riddle into a common and eloquent picture of the past, which reflects the state of society at certain stages of its development. The text of the Qurʾān, like any other phenomenon, is a multifaceted object that should be studied from different points of view. As a result, this complex study will allow obtaining a three-dimensional image rather than a flat picture alone.

Keywords: Arabic, Islamic studies, linguistics, religion

Procedia PDF Downloads 223
538 Experiences of Homophobia, Machismo and Misogyny in Tourist Destinations: A Netnography in a Facebook Community of LGBT Backpackers

Authors: Renan De Caldas Honorato, Ana Augusta Ferreira De Freitas

Abstract:

Homosexuality is still criminalized in a large number of countries; in some of them, being gay or lesbian can even be punished by death. Added to this context, the experiences of social discrimination faced by the LGBT population, including homophobia, machismo, and misogyny, cause numerous restrictions throughout their lives. The possibility of confronting these challenges in moments that should be pleasant, such as on a trip or on vacation, is unpleasant, to say the least. In the current scenario of intensified use of social network sites (SNSs) to search for information, including in the tourism area, this work aims to analyze the sharing of tourist experiences involving confrontations with, and perceptions of, homophobia, machismo, and misogyny, as well as the restrictions suffered in tourist destinations. The fieldwork is a community of LGBT backpackers based on Facebook. Netnography was the core method adopted. A qualitative approach was conducted, and 463 publications posted from January to December 2020 were assessed through computer-mediated discourse analysis (CMDA). The results suggest that these publications exist to identify potential exposure to these offensive behaviors while traveling. Individuals affirm that laws, favorable or not, in relation to the LGBT public are not the only factors for a place to be considered safe or unsafe for gay travelers. The social situation of a country and its laws are quite different things, and this is the main target of these publications. The perception of others about the chosen destination is more important than knowing one's rights and the legal status of each country, and it also lessens uncertainty, even though travelers are never totally confident when choosing a travel destination. In certain circumstances, sexual orientation also needs to be protected from the judgment of hosts and residents. The systemic treatment of homophobic behavior and the construction of a more inclusive society are urgent.

Keywords: homophobia, hospitality, machismo, misogyny

Procedia PDF Downloads 188
537 Quantifying Stakeholders’ Values of Technical and Vocational Education and Training Provision in Nigeria

Authors: Lidimma Benjamin, Nimmyel Gwakzing, Wuyep Nanyi

Abstract:

Technical and Vocational Education and Training (TVET) has many stakeholders, each with their own values and interests. This study focuses on the diversity of values and interests within and across groups of stakeholders by quantifying the value that stakeholders attach to several quality attributes of TVET, and it also examines to what extent TVET stakeholders differ in their values. The quality of TVET, therefore, depends on how well it aligns with the values and interests of these stakeholders. The five stakeholder groups are parents, students, teachers, policy makers, and workplace training supervisors. The nine attributes are employer appreciation of students, graduation rate, students' acquired computer skills, mentoring hours in workplace learning/the Students Industrial Work Experience Scheme (SIWES), challenge, structure, students' appreciation of teachers, schooling hours, and attention to civic education. 346 respondents (comprising parents, students, teachers, policy makers, and workplace training supervisors) were repeatedly asked to rank a set of 4 programs, each with a specific value on the nine quality indicators. Conjoint analysis was used to obtain the values that the stakeholders assigned to the nine attributes when evaluating the quality of TVET programs, and rank-ordered logistic regression was the statistical tool used to estimate the values respondents assigned to the attributes. The similarities and diversity in the values and interests of the different stakeholders will be of use to both the Nigerian government and TVET colleges in improving the overall quality of education and the match between vocational programs and their stakeholders. Conjoint analysis allows the simultaneous evaluation and combination of information on product attributes; such an approach models the decision environment by confronting respondents with choices that are close to real-life choices and is therefore more realistic than traditional survey methods.

Keywords: TVET, vignette study, conjoint analysis, quality perception, educational stakeholders

Procedia PDF Downloads 82
536 Factors Influencing Fertility Preferences and Contraceptive Use among Reproductive Aged Married Women in Eastern Ethiopia

Authors: Heroda Gebru, Berhanu Seyoum, Melake Damena, Gezahegn Tesfaye

Abstract:

Background: In Ethiopia there is a population policy aimed at reducing fertility and increasing contraceptive prevalence. Objective: To assess the fertility preference and contraceptive use status of married women living in Dire Dawa administrative city. Methods: A cross-sectional study with a sample of 421 married women of reproductive age was performed. Data were collected using a structured questionnaire during a house-to-house survey and a semi-structured questionnaire during in-depth interviews. Data were processed and analyzed using SPSS version 16 software, and univariate, bivariate, and multivariate analyses were employed. Results: A total of 421 married women of reproductive age were interviewed, giving a response rate of 100 percent. More than half (58.2%) of the respondents desired more children, while 41.8% wanted no more children. Regarding contraceptive use, 52.5% of the respondents were using a contraceptive at the time of the survey. Fertility preference and contraceptive use were significantly associated with the age of the respondent, history of child death, number of living children, religion, and age at first birth. Conclusions: Women in the younger age group, those with no history of child death, and those with fewer surviving children were more likely to desire additional children. Women with an older age at first birth and Protestant women were more likely to practice contraception. Strong information and education on contraceptives should be provided for the younger age group; advocacy at the level of religious leaders is important; comprehensive family planning counselling and education should be available to the community, husbands, and religious leaders; and efforts to increase contraceptive use should focus on the practical aspect.

Keywords: fertility preference, contraceptive use, univariate analysis, family planning

Procedia PDF Downloads 379
535 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception

Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom

Abstract:

Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations, and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the orientation of the robot. The exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee robot convergence to the destination. The proposed method uses the obstacles' shape and location, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate an obstacle's field. In addition, a search mechanism is developed for the case where the robot or the goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods.
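
The exponential deviation idea can be sketched compactly: each obstacle contributes a deviation angle that decays exponentially with its distance from the robot and whose sign depends on which side of the robot-goal line the obstacle lies. The gains and coordinates below are illustrative assumptions; the paper's Lyapunov-based parameter selection is not reproduced.

```python
import numpy as np

def heading_with_deviation(robot, goal, obstacles, a=np.pi / 3, b=0.8):
    """Goal heading corrected by exponentially decaying obstacle deviations."""
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    to_goal = goal - robot
    heading = np.arctan2(to_goal[1], to_goal[0])         # angle toward the goal
    for obs in obstacles:
        rel = np.asarray(obs, float) - robot
        dist = np.linalg.norm(rel)
        # z-component of the cross product: obstacle on the left (+) or right (-)
        side = np.sign(to_goal[0] * rel[1] - to_goal[1] * rel[0])
        heading -= side * a * np.exp(-b * dist)          # steer away from it
    return heading

# Robot at the origin heading to (10, 0), with one obstacle on each side.
print(np.degrees(heading_with_deviation([0, 0], [10, 0], [[2, 0.5], [4, -1.0]])))
```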

Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots

Procedia PDF Downloads 194
534 Corpus-Based Model of Key Concepts Selection for the Master English Language Course "Government Relations"

Authors: Elena Pozdnyakova

Abstract:

“Government Relations” (GR) is a field of knowledge presently taught at the majority of universities around the globe. English, as the default language, can become the language of teaching, since the issues discussed are both global and national in character. However, for this field of knowledge, key concepts and their word representations in English often do not coincide with those in other languages. International master's degree students abroad, as well as students taught the course in English at their national universities, face difficulties connected with correctly conceptualizing GR terminology in the British and American academic traditions. The study was carried out during the elaboration of the GR English language course (pilot research: 2013-2015) at Moscow State Institute of Foreign Relations (University), Russian Federation. Within this period, English language instructors designed and elaborated the three-semester GR course. Methodologically, the course design was based on an elaboration model with a special focus on the conceptual elaboration sequence and the theoretical elaboration sequence. The course designers faced difficulties in concept selection and in the theoretical elaboration sequence. To improve the results and eliminate the problems with concept selection, a new, corpus-based approach was worked out. The computer-based tool WordSmith 6.0 was used to build a model of key concept selection. The corpus of GR English texts consisted of 1 million words (the study corpus). The approach was based on measuring effect size, i.e., the percent difference of the frequency of a word in the study corpus when compared to that in the reference corpus. The results obtained showed significant improvement in the process of concept selection. The corpus-based model also facilitated the theoretical elaboration of teaching materials.
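
The effect-size measure described above can be computed directly from raw frequencies and corpus sizes; the sketch below follows the percent-difference definition given in the abstract (the counts are invented for illustration, and WordSmith's own implementation may differ in detail).

```python
def effect_size(freq_study, size_study, freq_ref, size_ref):
    """Percent difference of normalised frequencies (per million words)."""
    norm_study = freq_study / size_study * 1_000_000
    norm_ref = freq_ref / size_ref * 1_000_000
    return (norm_study - norm_ref) / norm_ref * 100.0

# Hypothetical counts: 420 hits in the 1M-word GR study corpus versus
# 2,300 hits in a 100M-word reference corpus.
print(round(effect_size(420, 1_000_000, 2_300, 100_000_000), 1), "%")
```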

Keywords: corpus-based study, English as the default language, key concepts, measuring effect size, model of key concept selection

Procedia PDF Downloads 306
533 Development of Interactional Competence: Listener Responses of Long-Term Stay Abroad Chinese L1 Speakers in Australian Universities

Authors: Wei Gao

Abstract:

The current study investigates the change in listener responses in the social conversations of Chinese L1 speakers of English as a second language (L2) who have had long-term stays abroad at Australian universities, and how their long-term stay abroad impacted their design of L2 recipient actions. There is a limited amount of empirical work on the acquisition of L2 English listener responses, particularly regarding the influence of long-term stays abroad in English-speaking countries. Little is known about whether the development of L2 listener responses and the improvement of interactional competence are affected by prolonged residency in the target L2 country. Forty-eight participants were recruited, and they participated in the designed speaking task through computer-mediated communication. Results showed that long-term stay-abroad Chinese L1 speakers demonstrated an English-like pattern of listener responses in communication. Long-term stay-abroad experience had a significant impact on the production and organization of L2 English listener responses in social conversation. Long-term stay-abroad Chinese L1 speakers showed more active and productive listenership than their non-stay-abroad counterparts in terms of the frequency and placement of listener responses. However, L2 English listener response production was only partial, appearing in response tokens such as backchannels and reactive expressions, as well as in the employment of resumptive openers. This study shows that L2 English listener responses can be acquired during a long-term stay abroad in English-speaking countries, but acquisition was only partial in the production of collaborative finishes. In addition, the most prominent finding was that Chinese L1 speakers changed their overall listener response pattern from L1 Chinese to L2 English. The study reveals specific interactional changes in the acquisition of L2 English listener responses and generates pedagogical implications for cross-cultural communication and L2 pragmatics acquisition during long-term stays abroad.

Keywords: listener responses, stay abroad, interactional competence, L2 pragmatics acquisition

Procedia PDF Downloads 84
532 The Influence of Steel Connection on Fire Resistance of Composite Steel-Framed Buildings

Authors: Mohammed Kadhim, Zhaohui Huang

Abstract:

Steel connections can play an important role in enhancing the robustness of structures under fire conditions; therefore, it is important to examine the influence of steel connections on the fire resistance of composite steel-framed buildings. In this paper, both the behavior of steel connections and their influence on the composite steel frame are analyzed using the non-linear finite element software VULCAN at ambient and elevated temperatures. The chosen frame is subjected to the ISO 834 fire. A comparison between end-plate connections, pinned connections, and rigid connections has been carried out. By applying different compartment fires, several cases are studied to show the behavior of the steel connections when fire is applied to certain beams. In addition, different plate thicknesses and different applied loads have been analyzed to examine the behavior of the chosen steel connections under the ISO 834 fire. It was found from the analytical results that beams with extended end-plate connections are stronger and perform better in terms of axial forces than beams with flush end-plate connections. It was also found that the extended end-plate connection has the highest limiting temperatures compared to the flush end-plate connection. In addition, it was found that the performance of end-plate connections is very close to that of rigid connections and very far from that of pinned connections. Furthermore, plate thickness has a smaller effect on the influence of the steel connection on fire resistance. In conclusion, the behavior of composite steel-framed buildings depends largely on the steel connections, due to their strong influence under fire conditions. It is recommended that the extended end-plate be considered in design proposals because of its superior properties compared to the flush end-plate connection. Finally, this paper shows that steel connections have an important effect on the fire resistance of composite steel-framed buildings.

Keywords: composite steel-framed buildings, connection behavior, end-plate connections, finite element modeling, fire resistance

Procedia PDF Downloads 160