Search results for: graphics processing units
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5129

239 Upgrade of Value Chains and the Effect on Resilience of Russia’s Coal Industry and Receiving Regions on the Path of Energy Transition

Authors: Sergey Nikitenko, Vladimir Klishin, Yury Malakhov, Elena Goosen

Abstract:

Transition to renewable energy sources (solar, wind, bioenergy, etc.) and the launch of alternative energy generation have weakened the role of coal as a source of energy. The Paris Agreement and the assumption of obligations by many nations to reduce CO₂ emissions in an orderly manner through technological modernization and climate change adaptation have reduced coal demand even further. This paper aims to assess the current resilience of the coal industry to stress and to define prospects for coal production optimization using high technologies in line with global challenges and the requirements of the energy transition. Our research is based on the resilience concept adapted to the coal industry. It is proposed to divide the coal sector into segments depending on the prevailing value chains (VC). Four representative models of VC are identified in the coal sector. The most promising lines of upgrading VC in the coal industry include: elongation of VC through the introduction of clean technologies of coal conversion and utilization; creation of parallel VC by means of waste management; and branching of VC (conversion of a company’s VC into a production network). The effectiveness of the upgrade is governed in many ways by the applicability of advanced coal processing technologies, the usability of waste, the expandability of production, entry into non-rival markets, and the localization of new VC segments in receiving regions. It is also important that upgrading VC through the formation of agile high-tech inter-industry production networks within the framework of operating surface and underground mines can reduce the social, economic and ecological risks associated with the closure of coal mines. One such promising route of VC upgrade is the application of methanotrophic bacteria to produce protein for use as feedstuff in fish, poultry and cattle breeding, or in the production of enzymes, lipids, sterols, antioxidants, pigments and polysaccharides. Closed mines can use recovered methane as a clean energy source. There exist methods of methane utilization from uncontrollable sources, including preliminary treatment and recovery of methane from air-methane mixtures, or decomposition of methane into hydrogen and acetylene. The separated hydrogen is used in hydrogen fuel cells to generate power to feed the process of methane utilization and to supply external consumers. Despite the recent paradigm of carbon-free energy generation, it is possible to preserve the coal mining industry using a differentiated approach to upgrading value chains, based on flexible technologies and taking into account the specificity of individual mining companies.

Keywords: resilience, resilience concept, resilience indicator, resilience in the Russian coal industry, value chains

Procedia PDF Downloads 107
238 Predicting Susceptibility to Coronary Artery Disease using Single Nucleotide Polymorphisms with a Large-Scale Data Extraction from PubMed and Validation in an Asian Population Subset

Authors: K. H. Reeta, Bhavana Prasher, Mitali Mukerji, Dhwani Dholakia, Sangeeta Khanna, Archana Vats, Shivam Pandey, Sandeep Seth, Subir Kumar Maulik

Abstract:

Introduction: Research has demonstrated a connection between coronary artery disease (CAD) and genetics. We performed deep literature mining, using both bioinformatics and manual curation, to identify polymorphisms conferring susceptibility to coronary artery disease. Further, the study sought to validate these findings in an Asian population. Methodology: In the first phase, we used an automated pipeline that organizes and presents structured information on SNPs, populations and diseases. The information was obtained by applying Natural Language Processing (NLP) techniques to approximately 28 million PubMed abstracts. To accomplish this, we utilized Python scripts to extract and curate disease-related data, filter out false positives, and categorize them into 24 hierarchical groups using Named Entity Recognition (NER) algorithms. From this extensive search, a total of 466 unique PubMed Identifiers (PMIDs) and 694 Single Nucleotide Polymorphisms (SNPs) related to coronary artery disease (CAD) were identified. To refine the selection, a thorough manual examination of all the studies was carried out. Specifically, SNPs that demonstrated susceptibility to CAD and exhibited a positive Odds Ratio (OR) were selected, and a final pool of 324 SNPs was compiled. The next phase involved validating the identified SNPs in DNA samples of 96 CAD patients and 37 healthy controls from an Indian population using a Global Screening Array. Results: Of the 324 SNPs, only 108 were detected in the array data, and 4 of these showed a significant difference in minor allele frequency between cases and controls. These were rs187238 of the IL-18 gene, rs731236 of the VDR gene, rs11556218 of the IL16 gene and rs5882 of the CETP gene. Prior research has reported associations of these SNPs with pathways such as endothelial damage, vitamin D receptor (VDR) polymorphism susceptibility, and reduction of HDL-cholesterol levels, ultimately leading to the development of CAD. Among these, only rs731236 had previously been studied in an Indian population, and only in the context of diabetes and vitamin D deficiency. For the first time, these SNPs are reported to be associated with CAD in an Indian population. Conclusion: This pool of 324 SNPs is a unique resource that can help uncover risk associations in CAD; here, it was validated in an Indian population. Further validation in different populations may offer valuable insights, contribute to the development of a screening tool, and help enable the implementation of primary prevention strategies targeted at the vulnerable population.
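As an illustration of the rsID-extraction step described above, the following Python sketch shows how SNP identifiers could be pulled from disease-filtered abstracts. The example abstracts, regular expressions and filtering terms are assumptions for illustration and do not reproduce the authors' NER pipeline.

```python
import re
from collections import defaultdict

# Illustrative abstracts keyed by PMID; the real pipeline parsed ~28 million
# PubMed abstracts with NLP/NER, which is not reproduced here.
abstracts = {
    "12345678": "The rs187238 polymorphism of IL-18 was associated with CAD (OR 1.6).",
    "23456789": "No association was found between rs731236 (VDR) and diabetes.",
}

RSID = re.compile(r"\brs\d{3,}\b")          # dbSNP reference SNP identifiers
CAD_TERMS = re.compile(r"coronary artery disease|\bCAD\b", re.IGNORECASE)

snp_to_pmids = defaultdict(set)
for pmid, text in abstracts.items():
    if CAD_TERMS.search(text):              # keep only CAD-related abstracts
        for rsid in RSID.findall(text):
            snp_to_pmids[rsid].add(pmid)

for rsid, pmids in snp_to_pmids.items():
    print(rsid, sorted(pmids))
```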

Keywords: coronary artery disease, single nucleotide polymorphism, susceptible SNP, bioinformatics

Procedia PDF Downloads 76
237 A Data Quality Model for IoT-Based Real-Time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network and generate an enormous volume of real-time, high-speed data to help organizations and companies take intelligent decisions. Integrating this enormous multi-source data and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained, and to provide energy-efficient communication they go to sleep or wake up periodically and aperiodically depending on the traffic load, so as to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than required, the connection is lost. After such a disconnection, other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces bad-quality data. Due to this dynamic nature of IoT devices, the actual reason for abnormal data is often unknown. If data are of poor quality, decisions are likely to be unsound. It is highly important to process data and estimate data quality before bringing it into use in IoT applications. In the past, many researchers tried to estimate data quality and provided several Machine Learning (ML), stochastic and statistical methods to perform analysis on stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how they impact data quality. In this research, a comprehensive review of the impact of the dynamic nature of IoT devices on data quality is carried out, and a data quality model that can deal with this challenge and produce good-quality data is presented. The model targets sensors monitoring water quality. DBSCAN clustering and weather sensors are used to build the data quality model for the sensors monitoring water quality. An extensive study has been done on the relationship between the data of weather sensors and the data of sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. This model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness and patterns of missing values, and checks the accuracy of the data with the help of cluster positions. Finally, statistical analysis is performed on the clusters formed as a result of DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
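A minimal sketch of how DBSCAN could flag outliers and evaluate cluster consistency (CoV) on paired weather and water-quality readings is given below; the synthetic data and the eps/min_samples parameters are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Synthetic paired readings: air temperature (weather sensor) and water
# temperature (water-quality sensor); the real study used correlated streams
# from lake and beach monitoring sites.
rng = np.random.default_rng(0)
air = rng.normal(20, 5, 500)
water = 0.8 * air + rng.normal(0, 1, 500)
water[::50] += 15                        # inject occasional faulty readings

X = StandardScaler().fit_transform(np.column_stack([air, water]))
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)

outliers = labels == -1                  # DBSCAN marks noise points with label -1
print(f"flagged {outliers.sum()} of {len(labels)} readings as outliers")

# Consistency of each cluster via the coefficient of variation (CoV)
for k in set(labels) - {-1}:
    vals = water[labels == k]
    print(f"cluster {k}: CoV = {vals.std() / vals.mean():.3f}")
```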

Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)

Procedia PDF Downloads 139
236 Valorization of Banana Peels for Mercury Removal under Environmentally Realistic Conditions

Authors: E. Fabre, C. Vale, E. Pereira, C. M. Silva

Abstract:

Introduction: Mercury is one of the most troublesome toxic metals responsible for the contamination of aquatic systems due to its accumulation and bioamplification along the food chain. The United Nations 2030 Agenda for Sustainable Development promotes the improvement of water quality by reducing water pollution and fosters enhanced wastewater treatment, encouraging recycling and safe water reuse globally. Sorption processes are widely used in wastewater treatment due to their many advantages, such as high efficiency and low operational costs. In these processes the target contaminant is removed from solution by a solid sorbent. The more selective and low-cost the biosorbent, the more attractive the process becomes. Agricultural wastes are especially attractive sorbents: they are largely available, have no commercial value and require little or no processing. In this work, banana peels were tested for mercury removal from low-concentration solutions. In order to investigate the applicability of this solid, six water matrices were used, increasing in complexity from natural waters to a real wastewater. Studies of kinetics and equilibrium were also performed using the best-known models to evaluate the viability of the process. In line with the concept of circular economy, this study adds value to this by-product as well as contributing to liquid waste management. Experimental: The solutions were prepared with an initial Hg(II) concentration of 50 µg L⁻¹ in natural waters, at 22 ± 1 ºC and pH 6, under magnetic stirring at 650 rpm and with a biosorbent mass of 0.5 g L⁻¹. NaCl was added to obtain the salt solutions, seawater was collected from the Portuguese coast, and the real wastewater was kindly provided by ISQ - Instituto de Soldadura e Qualidade (Welding and Quality Institute) and diluted to the same concentration of 50 µg L⁻¹. Banana peels were previously freeze-dried, milled and sieved, and the particles < 1 mm were used. Results: Banana peels removed more than 90% of Hg(II) from all the synthetic solutions studied. In these cases, the increase in the complexity of the water type promoted higher mercury removal. In salt waters, the biosorbent showed removals of 96%, 95% and 98% for 3, 15 and 30 g L⁻¹ of NaCl, respectively. The residual concentration of Hg(II) in solution reached the level of the drinking water regulation (1 µg L⁻¹). For real matrices, the lower Hg(II) elimination (93% for seawater and 81% for the real wastewater) can be explained by competition between the Hg(II) ions and the other elements present in these solutions for the sorption sites. Regarding the equilibrium study, the experimental data are better described by the Freundlich isotherm (R² = 0.991). The Elovich equation provided the best fit to the kinetic data. Conclusions: The results demonstrated the great ability of banana peels to remove mercury. The environmentally realistic conditions studied in this work highlight their potential use as biosorbents in water remediation processes.
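For readers interested in the equilibrium modelling step, a small sketch of fitting the Freundlich isotherm by nonlinear least squares is given below; the data points are purely illustrative, not the measured values reported in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative equilibrium data: residual Hg(II) concentration Ce (µg/L) and
# sorbed amount qe (µg/g); the paper's own data points are not reproduced here.
Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
qe = np.array([18.0, 30.0, 48.0, 85.0, 130.0])

def freundlich(Ce, KF, n):
    # Freundlich isotherm: qe = KF * Ce^(1/n)
    return KF * Ce ** (1.0 / n)

(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=(20.0, 1.5))
residuals = qe - freundlich(Ce, KF, n)
r2 = 1 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"KF = {KF:.2f}, n = {n:.2f}, R² = {r2:.3f}")
```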

Keywords: banana peels, mercury removal, sorption, water treatment

Procedia PDF Downloads 155
235 Development of 3D Printed Natural Fiber Reinforced Composite Scaffolds for Maxillofacial Reconstruction

Authors: Sri Sai Ramya Bojedla, Falguni Pati

Abstract:

Nature provides the best solutions to humans. One such incredible gift to regenerative medicine is silk. The literature shows a long-standing appreciation for silk owing to its remarkable physical and biological properties. Its bioactive nature, unique mechanical strength, and processing flexibility invite further exploration of its clinical application for the welfare of mankind. In this study, Antheraea mylitta and Bombyx mori silk fibroin microfibers are developed for the first time by two economical and straightforward steps, degumming and hydrolysis, and a bioactive composite is manufactured by mixing silk fibroin microfibers at various concentrations with polycaprolactone (PCL), a biocompatible, aliphatic, semi-crystalline synthetic polymer. Reconstructive surgery in most parts of the body deals mainly with restoring function, but in facial reconstruction both aesthetics and function are of utmost importance, as they play a critical role in the psychological and social well-being of the patient. The main concern in developing adequate bone graft substitutes or scaffolds is the notable variation in each patient's bone anatomy. Additionally, the anatomical shape and size will vary based on the type of defect. The advent of additive manufacturing (AM), or 3D printing, techniques in bone tissue engineering has helped overcome many of the restraints of conventional fabrication techniques. The acquired patient CT data are converted into a stereolithography (STL) file, which is then used by the 3D printer to create a 3D scaffold structure in an interconnected, layer-by-layer fashion. This study aims to address the limitations of currently available materials and fabrication technologies and to develop a customized biomaterial implant via 3D printing technology to reconstruct the complex form, function, and aesthetics of the facial anatomy. The composite scaffolds underwent structural and mechanical characterization. Atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM) images showed uniform dispersion of the silk fibroin microfibers in the PCL matrix. With the addition of silk, the compressive strength of the hybrid scaffolds improves. The scaffolds with Antheraea mylitta silk revealed a higher compressive modulus than those with Bombyx mori silk. These results strongly support the use of PCL-silk scaffolds in bone regeneration applications. Successful completion of this research will provide a great weapon in the maxillofacial reconstructive armamentarium.

Keywords: compressive modulus, 3D printing, maxillofacial reconstruction, natural fiber reinforced composites, silk fibroin microfibers

Procedia PDF Downloads 197
234 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogeneous Muslim Consumers in the UK

Authors: Aisha Ijaz

Abstract:

The paper focuses on the impact of Muslim religiosity on convenience food purchases and on the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where there is an increasing inclination toward industrialized food items alongside a renewed interest in the concept of natural foods. The United Kingdom serves as an appropriate setting for this study due to the country's growing Muslim population, paralleled by the expanding halal food market. A multi-dimensional framework is proposed, testing for five forms of involvement: Purchase Decision Involvement, Product Involvement, Behavioural Involvement, Intrinsic Risk and Extrinsic Risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey contact method with 141 Muslims during the summer of 2020 in Liverpool, in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment through local mosques and community centres. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into four clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced differences were observed in the type of involvement experienced by religiosity group, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics. Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing to make the optimal food choice, relying heavily on product label sources. Highly religious unaware Muslims are less acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third generation, is engaged with the purchase process because its members are worried about making unsuitable food choices. Research implications are outlined and potential avenues for further exploration are identified.
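A hedged sketch of the segmentation step follows; since SPSS Two-Step Cluster Analysis is proprietary, a Gaussian mixture with BIC-based selection of the number of clusters is used here only as a rough stand-in, and the survey data are simulated rather than the study's responses.

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Hypothetical scores on the six input variables used for clustering.
rng = np.random.default_rng(1)
df = pd.DataFrame(
    rng.normal(3, 1, size=(141, 6)),
    columns=["intrinsic_religiosity", "purchase_decision_inv", "product_inv",
             "behavioural_inv", "intrinsic_risk", "extrinsic_risk"],
)
X = StandardScaler().fit_transform(df)

# Choose the number of clusters by BIC, mimicking Two-Step's automatic selection.
models = {k: GaussianMixture(k, random_state=0).fit(X) for k in range(2, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))
df["segment"] = models[best_k].predict(X)

print("clusters:", best_k)
print(df.groupby("segment").size())   # segment sizes, analogous to the 4 segments
```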

Keywords: consumer behaviour, consumption, convenience food, religion, Muslims, UK

Procedia PDF Downloads 56
233 A Fermatean Fuzzy MAIRCA Approach for Maintenance Strategy Selection of Process Plant Gearbox Using Sustainability Criteria

Authors: Soumava Boral, Sanjay K. Chaturvedi, Ian Howard, Kristoffer McKee, V. N. A. Naikan

Abstract:

Due to strict government regulations aimed at enhancing sustainability practices in industry, and noting the advances in sustainable manufacturing practices, it is necessary that the associated processes are also sustainable. Maintenance of large-scale and complex machines is a pivotal task for maintaining the uninterrupted flow of manufacturing processes. Appropriate maintenance practices can prolong the lifetime of machines and prevent breakdowns, which subsequently reduces different cost heads. Selecting the best maintenance strategy for such machines is considered a burdensome task, as it requires the consideration of multiple technical criteria, complex mathematical calculations, previous fault data, maintenance records, etc. In the era of the fourth industrial revolution, organizations are rapidly changing their way of doing business and are giving utmost importance to sensor technologies, artificial intelligence, data analytics, automation, etc. In this work, the effectiveness of several maintenance strategies (e.g., preventive, failure-based, reliability-centered, condition-based, total productive maintenance) for a large-scale and complex gearbox operating in a steel processing plant is evaluated in terms of economic, social, environmental and technical criteria. As it is not possible to obtain or describe some criteria with exact numerical values, these criteria are evaluated linguistically by cross-functional experts. Fuzzy sets are a powerful soft-computing technique that has proved useful for dealing with linguistic data and providing inferences in many complex situations. To prioritize different maintenance practices based on the identified sustainability criteria, multi-criteria decision making (MCDM) approaches can be considered as potential tools. Multi-Attributive Ideal Real Comparative Analysis (MAIRCA) is a recent addition to the MCDM family and has proven its superiority over some well-known MCDM approaches, like TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and ELECTRE (ELimination Et Choix Traduisant la REalité). It has a simple but robust mathematical approach, which is easy to comprehend. On the other hand, due to some inherent drawbacks of Intuitionistic Fuzzy Sets (IFS) and Pythagorean Fuzzy Sets (PFS), the use of Fermatean Fuzzy Sets (FFSs) has recently been proposed. In this work, we propose the novel concept of FF-MAIRCA. We obtain the weights of the criteria from the experts' evaluation and use them to prioritize the different maintenance practices according to their suitability with the FF-MAIRCA approach. Finally, a sensitivity analysis is carried out to highlight the robustness of the approach.
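To make the ranking mechanics concrete, the sketch below applies classical MAIRCA to crisp scores obtained from Fermatean fuzzy ratings via the score function μ³ − ν³; the ratings, weights and defuzzification choice are assumptions for illustration and do not reproduce the authors' exact FF-MAIRCA operators.

```python
import numpy as np

# Hypothetical expert ratings of 4 maintenance strategies on 3 benefit criteria,
# given as Fermatean fuzzy pairs (membership mu, non-membership nu), mu^3 + nu^3 <= 1.
ratings = np.array([                      # shape: (alternatives, criteria, 2)
    [[0.9, 0.3], [0.7, 0.5], [0.6, 0.6]],  # e.g. condition-based maintenance
    [[0.6, 0.6], [0.8, 0.4], [0.5, 0.7]],  # preventive
    [[0.4, 0.8], [0.5, 0.7], [0.9, 0.2]],  # failure-based
    [[0.8, 0.4], [0.6, 0.5], [0.7, 0.5]],  # reliability-centred
])
weights = np.array([0.5, 0.3, 0.2])        # assumed criteria weights from experts

score = ratings[..., 0] ** 3 - ratings[..., 1] ** 3      # Fermatean score function
m = score.shape[0]
p_ai = 1.0 / m                                           # equal a-priori preference

t_p = p_ai * weights                                     # theoretical ratings
norm = (score - score.min(0)) / (score.max(0) - score.min(0))  # benefit criteria
t_r = t_p * norm                                         # real ratings
gap = t_p - t_r                                          # total-gap matrix
q = gap.sum(axis=1)                                      # criteria functions

ranking = np.argsort(q)                                  # smallest total gap ranks best
print("ranking (best to worst):", ranking)
```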

Keywords: Fermatean fuzzy sets, Fermatean fuzzy MAIRCA, maintenance strategy selection, sustainable manufacturing, MCDM

Procedia PDF Downloads 138
232 Dynamic-Cognition of Strategic Mineral Commodities: An Empirical Assessment

Authors: Carlos Tapia Cortez, Serkan Saydam, Jeff Coulton, Claude Sammut

Abstract:

Strategic mineral commodities (SMC), both energy commodities and metals, have long been fundamental for human beings. There is a strong, long-run relation between the mineral resources industry and society's evolution, with the provision of primary raw materials becoming one of the most significant drivers of economic growth. Due to the relevance of mineral resources for the entire economy and society, an understanding of SMC market behaviour sufficient to simulate price fluctuations has become crucial for governments and firms. As with any human activity, SMC price fluctuations are affected by economic, geopolitical, environmental, technological and psychological issues, in which cognition plays a major role. Cognition is defined as the capacity to store information in memory and to process it for decision making, problem-solving or human adaptation. Thus, it has a significant role in systems that exhibit dynamic equilibrium through time, such as economic growth. Cognition not only allows understanding of past behaviours and trends in SMC markets but also supports future expectations of demand and supply levels and prices, although speculation is unavoidable. Technological development may also be regarded as a cognitive system. Since the Industrial Revolution, technological developments have had a significant influence on SMC production costs and prices, likewise allowing co-integration between commodities and market locations. This suggests a close relation between structural breaks, technology and price evolution. SMC price forecasting has commonly been addressed by econometric and Gaussian-probabilistic models. Econometric models may incorporate the relationships between variables; however, they are static, which leads to an incomplete picture of price evolution through time. Gaussian-probabilistic models may evolve through time; however, price fluctuations are addressed by assuming random behaviour and normal distributions, which seem far from the real behaviour of both markets and prices. Random fluctuation ignores the evolution of market events and the technical and temporal relations between variables, giving the illusion of controlled future events. The normal distribution underestimates price fluctuations by using restricted ranges, confining decision making to a pre-established space. A proper understanding of SMC price dynamics, taking into account the historical-cognitive relation between economic, technological and psychological factors over time, is fundamental in attempting to simulate prices. The aim of this paper is to discuss the SMC market cognition hypothesis and to empirically demonstrate its dynamic-cognitive capacity. Three of the largest and most traded SMCs, oil, copper and gold, are assessed to examine economic, technological and psychological cognition, respectively.

Keywords: commodity price simulation, commodity price uncertainties, dynamic-cognition, dynamic systems

Procedia PDF Downloads 460
231 Environmental and Economic Assessment of Yerba Mate as a Feed Additive for Feedlot Lambs

Authors: Danny Alexander R. Moreno, Gustavo L. Sartorello, Yuli Andrea P. Bermudez, Richard R. Lobo, Ives Claudio S. Bueno, Augusto H. Gameiro

Abstract:

Meat production is a significant sector of Brazil's economy; however, the agricultural segment has been criticized for its negative impacts on the environment, which contribute to climate change. Therefore, it is essential to implement nutritional strategies that can improve the environmental performance of livestock. This research aimed to estimate the environmental impact and profitability of the use of yerba mate extract (Ilex paraguariensis) as an additive in the feeding of feedlot lambs. Thirty-six castrated male lambs (average weight of 23.90 ± 3.67 kg and average age of 75 days) were randomly assigned to four experimental diets with different levels of inclusion of yerba mate extract (0, 1, 2, and 4%) on a dry matter basis. The animals were confined for fifty-three days and fed at a 60:40 corn silage to concentrate ratio. As an indicator of environmental impact, the carbon footprint (CF) was measured as kg of CO₂ equivalent (CO₂-eq) per kg of body weight produced (BWP). Greenhouse gas (GHG) emissions of methane (CH₄) from enteric fermentation were measured using the sulfur hexafluoride (SF₆) tracer technique, while CH₄ and nitrous oxide (N₂O) emissions from feces and urine, and carbon dioxide (CO₂) emissions from concentrate and silage processing, were estimated using the Intergovernmental Panel on Climate Change (IPCC) methodology. To estimate profitability, the gross margin was used, i.e. total revenue minus total cost, the latter comprising the purchase of animals and feed. The boundaries of this study considered only the lamb fattening system. Enteric CH₄ emission from the lambs was the largest source of on-farm GHG emissions (47%-50%), followed by CH₄ and N₂O emissions from manure (10%-20%) and CO₂ emissions from concentrate, silage, and fossil energy (5%-17%). The treatment that generated the least environmental impact was the group with 4% of yerba mate extract (YME), which showed a 3% reduction in total GHG emissions relative to the control (1462.5 and 1505.5 kg CO₂-eq, respectively). However, the scenario with 1% YME showed a 7% increase in emissions compared to the control group. In relation to CF, the treatment with 4% YME had the lowest value (4.1 kg CO₂-eq/kg LW) compared with the other groups. Nevertheless, although the 4% YME inclusion scenario showed the lowest CF, the gross margin decreased by 36% compared to the control group (0% YME) due to the cost of YME as a feed additive. The results showed that the extract has potential for use in reducing GHG emissions. However, the cost of implementing this input as a mitigation strategy increased the production cost. Therefore, it is important to develop policy strategies that help reduce the acquisition cost of inputs that contribute to the environmental and economic benefit of the livestock sector.
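The carbon-footprint aggregation can be illustrated with a short calculation; the GWP factors and the per-lamb emission quantities below are assumptions chosen only to show how kg CO₂-eq per kg of body weight is obtained, not the study's measurements.

```python
# Illustrative CO2-equivalent aggregation; GWP factors (AR5, 100-year) and the
# emission quantities below are assumptions, not the study's measured values.
GWP_CH4, GWP_N2O = 28, 265

emissions = {                      # kg of each gas per lamb over the feedlot period
    "enteric_CH4": 1.2,
    "manure_CH4": 0.25,
    "manure_N2O": 0.03,
    "feed_and_energy_CO2": 8.0,
}

co2eq = (emissions["enteric_CH4"] * GWP_CH4
         + emissions["manure_CH4"] * GWP_CH4
         + emissions["manure_N2O"] * GWP_N2O
         + emissions["feed_and_energy_CO2"])

body_weight_produced = 12.0        # kg of body weight gained per lamb (assumed)
print(f"total: {co2eq:.1f} kg CO2-eq")
print(f"carbon footprint: {co2eq / body_weight_produced:.2f} kg CO2-eq per kg BW")
```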

Keywords: meat production, natural additives, profitability, sheep

Procedia PDF Downloads 139
230 Fractional, Component and Morphological Composition of Ambient Air Dust in the Areas of Mining Industry

Authors: S.V. Kleyn, S.Yu. Zagorodnov, A.A. Kokoulina

Abstract:

Technogenic emissions from mining and processing complexes are characterized by a high content of chemical components and solid dust particles. However, each industrial enterprise and its surrounding area have features that require refinement and parameterization. Numerous studies have shown the negative impact of fine dust PM10 and PM2.5 on health, as well as the possibility of absorption of toxic components, including heavy metals, by dust particles. The aim of the study was the quantitative assessment of the fractional and particle size composition of ambient air dust in the area affected by a primary magnesium production complex. We also describe the morphological features of the dust particles. Study methods: To identify the dust emission sources, an analysis of the production process was carried out. The particulate composition of the emissions was measured using a Microtrac S3500 laser particle analyzer (covering a particle size range of 20 nm to 2000 µm). Particle morphology and component composition were established by high-resolution scanning electron microscopy (magnification 5 to 300,000 times) with an X-ray fluorescence device (HITACHI S3400N). The chemical composition was identified by X-ray analysis of the samples using a Shimadzu XRD-700 X-ray diffractometer. The dust pollution level was determined using model calculations of emission dispersion in the atmosphere. The calculations were verified by instrumental studies. Results of the study: The results demonstrated that the dust emissions of different technical processes are heterogeneous and have a complicated fractional structure. The percentage of particles up to 2.5 micrometres inclusive ranged from 0.00 to 56.70%; particles up to 10 microns inclusive, from 0.00 to 85.60%; and particles greater than 10 microns, from 14.40% to 100.00%. Microscopy also detected the presence of nanoscale particles. The studied dust particles have round, irregular, cubic and integral shapes. The composition of the dust includes magnesium, sodium, potassium, calcium, iron and chlorine. On the basis of the obtained results, model calculations of dust emission dispersion were performed and the areas of fine dust PM10 and PM2.5 distribution were established. It was found that the emissions of the fine fractions PM10 and PM2.5 are dispersed over large distances and beyond the border of the industrial site of the enterprise. The population living near the enterprise is exposed to the risk of diseases associated with dust exposure. The data were transferred to the operating company to support decisions on measures to minimize the risks. Exposure and health risk indicators are used to provide individual medical and preventive care to citizens living in the area of negative impact of the facility.
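A short sketch of how the PM2.5 and PM10 fractions quoted above can be read off a cumulative particle-size distribution; the bin edges and volume fractions are hypothetical and stand in for the analyzer output.

```python
import numpy as np

# Hypothetical volume-based size distribution from a laser particle analyzer:
# bin upper edges in micrometres and the volume fraction (%) in each bin.
bin_edges_um = np.array([1.0, 2.5, 10.0, 50.0, 200.0, 2000.0])
volume_pct   = np.array([5.0, 15.0, 40.0, 25.0, 10.0, 5.0])

cumulative = np.cumsum(volume_pct)
pm25 = cumulative[bin_edges_um.searchsorted(2.5)]    # fraction of particles <= 2.5 µm
pm10 = cumulative[bin_edges_um.searchsorted(10.0)]   # fraction of particles <= 10 µm

print(f"PM2.5 fraction: {pm25:.1f} %")
print(f"PM10 fraction:  {pm10:.1f} %")
print(f"> 10 µm:        {100 - pm10:.1f} %")
```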

Keywords: dust emissions, exposure assessment, PM10, PM2.5

Procedia PDF Downloads 261
229 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as in other parts of the world, is one of the most highly stressed communities due to reasons such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of the crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant using analytics and big data, which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet-style classification using convolutional neural networks (CNN) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropout (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt the weights of parameters learnt on the ImageNet dataset and apply them to crop diseases, which reduces the number of epochs needed for learning. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring are used to improve accuracy on images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases, and our model achieves an accuracy of more than 95% in correctly classifying them. The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can easily be extended to other crops. The advancement of computing technology and the availability of large data have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementing these models is high, resulting in timely advice to farmers and thus increasing farmers' income and reducing input costs.
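A compact sketch of the kind of transfer-learning classifier with augmentation and dropout described above is given below; the MobileNetV2 backbone, image size and hyperparameters are assumptions for illustration, not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10          # e.g. ten tomato diseases; adjust to the dataset
IMG_SIZE = (224, 224)

# Data augmentation: rotation, zoom and shift, as described in the abstract.
augment = models.Sequential([
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
])

# Transfer learning: reuse ImageNet weights and train only a small head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    augment,
    layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                        # dropout to limit overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with labelled image datasets
```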

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 119
228 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Authors: Jaeyoung Lee

Abstract:

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary with time of day and season. Image segmentation using deep learning, which has recently evolved rapidly, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are typically optimized for GPU environments, which degrades their performance in embedded processor environments equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSPs), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA. The lack of information caused by fixing the number of channels is resolved by increasing the number of parallel branches. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA size is fixed, normal convolution may be more efficient than depthwise separable convolution depending on memory access overhead; thus, the convolution type is decided according to the output stride in order to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using extended atrous spatial pyramid pooling (ASPP). The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape without global average pooling paths, since the layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), which is the highest recognition rate among embedded networks on the Cityscapes validation set.
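The idea of choosing a convolution type by output stride and summing fixed-channel parallel atrous branches can be sketched as follows; the stride threshold, dilation rates and PyTorch implementation are illustrative assumptions and do not correspond to the MMA-optimized TI DSP code.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, output_stride, dilation=1):
    """Pick a convolution type by output stride: at small strides (large feature
    maps) a depthwise-separable conv saves compute, while at larger strides a
    normal conv can be cheaper on a fixed-size matrix-multiplication accelerator.
    The threshold of 8 is illustrative, not the paper's exact rule."""
    if output_stride < 8:
        return nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=dilation, dilation=dilation,
                      groups=in_ch, bias=False),          # depthwise
            nn.Conv2d(in_ch, out_ch, 1, bias=False),      # pointwise
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class SimpleASPP(nn.Module):
    """Parallel atrous branches with a fixed channel count, combined by a simple
    adder, loosely mirroring the extended ASPP described in the abstract."""
    def __init__(self, channels, output_stride, rates=(1, 6, 12)):
        super().__init__()
        self.branches = nn.ModuleList(
            [conv_block(channels, channels, output_stride, r) for r in rates])

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)

x = torch.randn(1, 64, 60, 80)   # e.g. features of a 480 x 640 input at stride 8
print(SimpleASPP(64, output_stride=8)(x).shape)
```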

Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network

Procedia PDF Downloads 129
227 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis

Authors: Mohamed Ali Abdennadher

Abstract:

Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA to high core counts, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.
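A minimal sketch of the noise-reduction, thresholding and watershed segmentation pipeline using scikit-image is shown below; the synthetic slice stands in for real CT data, and the parameter choices are assumptions rather than the study's settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Synthetic CT-like slice standing in for a real reconstructed grey-value slice.
rng = np.random.default_rng(0)
image = gaussian(rng.random((256, 256)), sigma=8)

# Noise reduction + thresholding, then marker-based watershed to split particles.
binary = image > threshold_otsu(image)
distance = ndi.distance_transform_edt(binary)
regions, _ = ndi.label(binary)
peaks = peak_local_max(distance, labels=regions, min_distance=10)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
particles = watershed(-distance, markers, mask=binary)

# Equivalent particle diameters (in pixels) for the size-distribution statistics.
sizes = np.bincount(particles.ravel())[1:]     # pixel count per labelled particle
diameters = np.sqrt(4 * sizes / np.pi)
print(f"{particles.max()} particles, mean equivalent diameter {diameters.mean():.1f} px")
```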

Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology

Procedia PDF Downloads 29
226 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data

Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour

Abstract:

Background: Accelerometers are widely used to measure physical activity behavior, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study without relying on parameters derived from external populations offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model - HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child’s sedentary (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children’s physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active long duration states (LAS, comparable to LPA), and two short-duration high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R²HIS =0.5, R²MVPA= 0.28), cognitive ability (R²HIS =0.31, R²MVPA= 0.15), and age (R²HIS =0.15, R²MVPA= 0.09), indicating increased sensitivity to key attributes associated with a child’s mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behavior in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.
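A rough sketch of unsupervised state segmentation of an acceleration signal follows; hmmlearn's GaussianHMM is used here only as a simpler stand-in for the hidden semi-Markov model, and the synthetic signal and number of states are assumptions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumes the hmmlearn package is installed

# Synthetic vector-magnitude acceleration signal standing in for real recordings.
rng = np.random.default_rng(0)
signal = np.concatenate([
    rng.normal(0.05, 0.02, 600),    # inactive
    rng.normal(0.40, 0.10, 400),    # light activity
    rng.normal(1.20, 0.30, 200),    # high-intensity bursts
]).reshape(-1, 1)

model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100,
                    random_state=0).fit(signal)
states = model.predict(signal)

# Time spent in each emergent intensity state (in samples).
for s in range(model.n_components):
    mean = model.means_[s, 0]
    print(f"state {s}: mean acceleration {mean:.2f}, {np.sum(states == s)} samples")
```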

Keywords: physical activity, machine learning, under 5s, disability, accelerometer

Procedia PDF Downloads 210
225 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group

Authors: Antonio Martín-Pardo

Abstract:

Regarding the context of this paper, it is noteworthy that the current criminal policy model in which we find ourselves immersed, referred to by part of the doctrine as the citizen security model, is characterized by a marked tendency to discredit expert knowledge. At the time of legislative drafting, this type of technical knowledge has been displaced by common sense and the daily experience of the people, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal policy scene, we still find valuable efforts on the side of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain developed by the Cannabis Policy Studies Group (hereinafter referred to as 'GEPCA'). The GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative response must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines through which this technical proposal is developed, with the purpose of its dissemination and discussion at the Congress. The basic methodology of the proposal is inductive-expository. Firstly, we offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization touches on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Secondly, we get down to business by detailing the three main proposed cannabis access channels: the regulated market, associations of cannabis users, and personal self-cultivation. For each of these options, especially the first two, special attention is paid both to the production and processing of the substance and to the necessary administrative control of the activity. Finally, in a third block, some notes are given on a series of subjects that surround the different access options mentioned above and that give fullness and coherence to the proposal. These related issues include consumption and possession of the substance; advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; and the need to articulate evaluation instruments for the entire process. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, which has clearly been unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it.

Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation

Procedia PDF Downloads 119
224 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines

Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky

Abstract:

Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy is lost in the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) in parallel with the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work from the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network extends over 9,409 km, and the system has 16 national and 3 international natural gas processing plants and more than 143 delivery points to final consumers. Thus, the potential of installing PRTs in Brazil is 66 MW of power, which could avoid the emission of 235,800 tons of CO₂ per year and generate 333 GWh/year of electricity. On the other hand, the economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, obtaining a deterministic set of financial indicators associated with the project. However, in most cases these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return. This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained, providing a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the financial feasibility of potential PRT projects in Brazil. The methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
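A minimal sketch of the Monte Carlo cash-flow analysis described above; all inputs (generation, price and cost distributions, discount rate, capital cost) are hypothetical and serve only to show how NPV risk indicators and their empirical distribution are obtained.

```python
import numpy as np

# Hypothetical inputs: annual energy generated by the PRT, electricity price and
# capital/operating costs are assumptions for illustration, not plant data.
rng = np.random.default_rng(42)
n_sims, years, discount_rate = 10_000, 10, 0.10
capex = 2.0e6                                              # USD, installed PRT cost

energy_mwh = rng.normal(5_000, 800, (n_sims, years))       # generation per year
price = rng.lognormal(np.log(60), 0.25, (n_sims, years))   # USD/MWh
opex = rng.normal(80_000, 10_000, (n_sims, years))         # USD/year

cash_flow = energy_mwh * price - opex
discount = (1 + discount_rate) ** np.arange(1, years + 1)
npv = (cash_flow / discount).sum(axis=1) - capex

print(f"mean NPV: {npv.mean() / 1e6:.2f} MUSD")
print(f"P(NPV < 0): {(npv < 0).mean():.1%}")
print(f"5th-95th percentile (MUSD): {np.percentile(npv, [5, 95]) / 1e6}")
```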

Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, Monte Carlo methods

Procedia PDF Downloads 113
223 The Effect of the Performance Evaluation System on the Productivity of Administration and a Case Study

Authors: Ertuğrul Ferhat Yilmaz, Ali Riza Perçin

Abstract:

In business enterprises that have implemented modern management principles, the most important issues are increasing the performance of workers and maximizing income. Through the twentieth century, the rapid development of data processing and communication, together with the multilateral business enterprises arising from free trade policies, removed economic borders and turned local rivalry into global rivalry. Under these competitive conditions, business enterprises have to work actively and productively in order to survive. The employees of business enterprises are the most important factor of production. Therefore, business enterprises, recognizing the importance of the human factor for increasing profit, have used the performance evaluation system to increase the success and development of their employees. Performance evaluation aims to increase manpower productivity by using employees in an active way. Furthermore, this system supports the wage policies implemented in the enterprise, the determination of short- and long-term strategic plans, promotion decisions, the determination of employees' training needs, and decisions such as dismissal and job rotation. It requires a great deal of effort to keep pace with change in the working realm and to keep ourselves up to date. Getting quality from people and having an effect in the workplace depend largely on the knowledge and competence of managers and prospective managers. Therefore, managers need to use performance evaluation systems in order to base their managerial decisions on sound data. This study aims at finding whether organizations effectively use performance evaluation systems, how much importance is placed on this issue, and how much the results of the evaluations affect employees. Whether organizations gain a competitive advantage and can continue their activities depends to a large extent on how effectively and efficiently they use their employees. Therefore, it is of vital importance to evaluate employees' performance and to improve them according to the results of that evaluation. The performance evaluation system, which evaluates employees according to criteria related to the organization, has become one of the most important topics for management. In view of the important ends mentioned above, the performance evaluation system appears to be a tool that can be used to improve the efficiency and effectiveness of an organization. Because of its contribution to organizational success, considering performance evaluation on the axis of efficiency shows the importance of this study from a different angle. In this study, we explain the performance evaluation system, efficiency, and the relation between these two concepts. We also analyze the results of questionnaires conducted with textile workers in the city of Edirne. We obtained positive answers to the questions about the effects of performance evaluation on efficiency. After factor analysis, efficiency and motivation, which were determined as factors of the performance evaluation system, had the largest variance (19.703%) in our sample. Thus, this study shows that objective performance evaluation increases the efficiency and motivation of employees.
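As an illustration of the factor-analysis step, the sketch below extracts two factors from simulated questionnaire items and reports the share of variance carried by each; the data and the use of scikit-learn's FactorAnalysis are assumptions and do not reproduce the study's actual survey analysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical responses to 10 questionnaire items; the real survey data from
# the Edirne textile workers are not reproduced here.
rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 2))                     # e.g. efficiency, motivation
loadings_true = rng.uniform(0.4, 0.9, size=(2, 10))
items = latent @ loadings_true + rng.normal(0, 0.5, size=(200, 10))

X = StandardScaler().fit_transform(items)
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)

# Variance explained by each factor ~ sum of squared loadings / number of items.
loadings = fa.components_
explained = (loadings ** 2).sum(axis=1) / X.shape[1] * 100
for i, pct in enumerate(explained, 1):
    print(f"factor {i}: {pct:.1f}% of total variance")
```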

Keywords: performance, performance evaluation system, productivity, Edirne region

Procedia PDF Downloads 303
222 Improving Working Memory in School Children through Chess Training

Authors: Veena Easvaradoss, Ebenezer Joseph, Sumathi Chandrasekaran, Sweta Jain, Aparna Anna Mathai, Senta Christy

Abstract:

Working memory refers to a cognitive processing space where information is received, managed, transformed, and briefly stored. It is an operational process of transforming information for the execution of cognitive tasks in different and new ways. Many classroom activities require children to remember information and mentally manipulate it. While the impact of chess training on intelligence and academic performance has been unequivocally established, its impact on working memory needs to be studied. This study, funded by the Cognitive Science Research Initiative, Department of Science & Technology, Government of India, analyzed the effect of one year of chess training on the working memory of children. A pretest–posttest with control group design was used, with 52 children in the experimental group and 50 children in the control group. The sample was selected from children studying in school (grades 3 to 9) and included both genders. The experimental group underwent weekly chess training for one year, while the control group was involved in extracurricular activities. Working memory was measured by two subtests of WISC-IV INDIA. The Digit Span Subtest involves recalling a list of numbers of increasing length presented orally in forward and in reverse order, and the Letter–Number Sequencing Subtest involves rearranging jumbled letters and numbers presented orally following a given rule. Both tasks require the child to receive and briefly store information, manipulate it, and present it in a changed format. The children were trained using the Winning Moves curriculum, audio-visual learning methods, hands-on chess training, and recording of their games using score sheets in order to analyze their mistakes, thereby increasing their meta-analytical abilities. They were also trained in opening theory, checkmating techniques, end-game theory and tactical principles. Pre-test equivalence of means was established. Analysis revealed that the experimental group had significant gains in working memory compared to the control group. The present study clearly establishes a link between chess training and working memory. The transfer of chess training to the improvement of working memory could be attributed to the fact that while playing chess, children evaluate positions, visualize new positions in their mind, analyze the pros and cons of each move, and choose moves based on the information stored in their mind. If working memory capacity could be expanded or made to function more efficiently, it could result in the improvement of executive functions as well as the scholastic performance of the child.

Keywords: chess training, cognitive development, executive functions, school children, working memory

Procedia PDF Downloads 263
221 Application of Multidimensional Model of Evaluating Organisational Performance in Moroccan Sport Clubs

Authors: Zineb Jibraili, Said Ouhadi, Jorge Arana

Abstract:

Introduction: Organizational performance is recognized by some theorists as a one-dimensional concept and by others as multidimensional. This concept, which is already difficult to apply in traditional companies, is even harder to identify, measure, and manage in voluntary organizations, essentially because of the complexity of that form of organization; sport clubs in particular are characterized by multiple goals and multiple constituencies. Indeed, the new culture of professionalization and modernization around organizational performance has created new pressures from the state, sponsors, members, and other stakeholders, which have required these sport organizations to become more performance oriented, or to build their capacity in order to better manage their organizational performance. The evaluation of performance can be made by evaluating the input (e.g., available resources), throughput (e.g., processing of the input), and output (e.g., goals achieved) of the organization. In non-profit organizations (NPOs), questions of performance have become increasingly important in the world of practice. To our knowledge, most studies have used the same methods to evaluate performance in non-profit sport organizations (NPSOs), but no recent study has proposed a club-specific model. Based on a review of the studies that specifically addressed the organizational performance (and effectiveness) of NPSOs at the operational level, the present paper aims to provide a multidimensional framework for understanding, analysing, and measuring the organizational performance of sport clubs. This paper combines all the dimensions found in the literature and selects those best suited to the model we develop for the case of Moroccan sport clubs. Method: We apply our unified model of evaluating organizational performance, which takes into account the limitations found in the literature, to a sample of Moroccan sport clubs (football, basketball, handball, and volleyball) using a qualitative study. The sample comprises data from clubs participating in the first division of the professional league over the period from 2011 to 2016. Each club had to meet specific criteria in order to be included in the sample: 1. Each club must have full financial data published in its annual financial statements, audited by an independent chartered accountant. 2. Each club must have sufficient data regarding its sport and financial performance. 3. Each club must have participated at least once in the first division of the professional league. Result: The study showed that the dimensions that constitute the model exist in the field, with some small modifications. The correlations between the different dimensions are positive. Discussion: The aim of this study is to test, for the Moroccan case, the unified model that emerged from earlier and narrower approaches. Using the input-throughput-output model as a framework for efficiency, it was possible to identify and define five dimensions of organizational effectiveness applied to this field of study.

Keywords: organisational performance, multidimensional model, organizational performance evaluation, sport clubs

Procedia PDF Downloads 323
220 Effects of Soil Neutron Irradiation in Soil Carbon Neutron Gamma Analysis

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

The carbon sequestration question of modern times requires the development of an in-situ method for measuring soil carbon over large landmasses. Traditional chemical analytical methods used to evaluate large land areas require extensive soil sampling prior to processing for laboratory analysis; collectively, this is labor-intensive and time-consuming. An alternative is to apply nuclear physics analysis, primarily in the form of pulsed fast-thermal neutron-gamma soil carbon analysis. This method is based on measuring the gamma-ray response that appears upon neutron irradiation of soil. Specific gamma lines with energies of 4.438 MeV appearing upon neutron irradiation can be attributed to soil carbon nuclei. Based on the measured gamma line intensity, assessments of soil carbon concentration can be made. This analysis can be done directly in the field using a specially developed pulsed fast-thermal neutron-gamma system (PFTNA system). This system conducts in-situ analysis in a scanning mode coupled with GPS, which provides soil carbon concentration and distribution over large fields. The system has radiation shielding to minimize the dose rate (within radiation safety guidelines) for safe operator usage. Questions concerning the effect of neutron irradiation on soil health are addressed in this study. Information regarding the absorbed neutron and gamma dose received by the soil and its distribution with depth is discussed. This information was generated from Monte-Carlo simulations (MCNP6.2 code) of neutron and gamma propagation in soil. The resulting data were used to analyse possible induced irradiation effects. The physical, chemical, and biological effects of neutron irradiation of soil were considered. From a physical perspective, we considered the induction of new isotopes by the neutrons produced by the PFTNA system and estimated the possibility of an increased post-irradiation gamma background by comparison with the natural background. An insignificant increase in gamma background appeared immediately after irradiation but returned to original values after several minutes due to the decay of short-lived new isotopes. From a chemical perspective, possible radiolysis of water present in the soil was considered. Based on simulations of water radiolysis, we concluded that the gamma dose delivered cannot drive radiolysis at notable rates. Possible effects of neutron irradiation (by the PFTNA system) on soil biota were also assessed experimentally. No notable changes were observed at the taxonomic level, nor was functional soil diversity affected. Our assessment suggests that the use of a PFTNA system with a neutron flux of 1e7 n/s for soil carbon analysis does not notably affect soil properties or soil health.

Keywords: carbon sequestration, neutron gamma analysis, radiation effect on soil, Monte-Carlo simulation

Procedia PDF Downloads 142
219 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

Artificial vision is the process of giving computers the ability to see. It is a branch of artificial intelligence that allows the acquisition, processing, and analysis of any type of information, especially information obtained from digital images. Currently, artificial vision is used in manufacturing for quality control and production, as these processes can be implemented through algorithms for counting, positioning, and recognizing objects observed by one or more cameras. In parallel, companies use assembly lines built from conveyor systems with actuators that move pieces from one location to another during production; these devices must be programmed beforehand with a logic routine to perform well. Nowadays, production is the main target of every industry, together with quality and the rapid execution of the different stages and processes in the production chain of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that sort the figures into cubicles, which are also distinguished by different colors. Since this project is based on artificial vision, the methodology must be strict; it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator linked with OpenCV libraries; together, these tools are used to implement the program that identifies colors and shapes directly from the camera. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or by a specialized camera. 1.3 The recognition of RGB colors is implemented in code by traversing the matrices of the captured images and comparing pixels to identify the primary colors red, green, and blue. 1.4 To detect shapes it is necessary to segment the images: the first step is converting the image from RGB to grayscale in order to work with the dark tones of the image; the image is then binarized, leaving the figure in white on a black background; finally, the contours of the figure are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program drives the conveyor systems, which, through the actuators, sort the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics that can then be processed. With the program developed in this project, any type of assembly line can be optimized, because images of the environment can be obtained and the process becomes more accurate.
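A minimal OpenCV sketch of the pipeline described in steps 1.2–1.4 (grayscale, binarization, contour extraction, edge counting), assuming a single well-lit figure per frame; the thresholds, camera index, and dominant-colour heuristic are illustrative and not the authors' actual implementation.

```python
# Sketch: classify the dominant colour and the shape (triangle/square/circle)
# of the largest figure seen by a web camera, using OpenCV.
import cv2

def classify_frame(frame):
    # Dominant-colour check on the mean BGR values (OpenCV stores images as BGR).
    b, g, r = cv2.mean(frame)[:3]
    colour = max((("red", r), ("green", g), ("blue", b)), key=lambda c: c[1])[0]

    # Segmentation: grayscale, then binarize so the figure is white on black.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Contours: approximate the largest contour and count its vertices (edges).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return colour, "none"
    largest = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(largest, 0.04 * cv2.arcLength(largest, True), True)
    shape = {3: "triangle", 4: "square"}.get(len(approx), "circle")
    return colour, shape

cap = cv2.VideoCapture(0)          # web camera, as in step 1.2
ok, frame = cap.read()
if ok:
    print(classify_frame(frame))   # e.g. ("red", "triangle") -> drive the matching conveyor
cap.release()
```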

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 378
218 Boredom in the Classroom: Sentiment Analysis on Teaching Practices and Related Outcomes

Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís

Abstract:

Students’ emotional experiences have been a widely discussed theme among researchers, proving to play a central role in students’ outcomes. Yet, up to now, far too little attention has been paid to the teaching practices that relate to students’ negative emotions in higher education. The present work aims to examine the relationship between teachers’ teaching practices (i.e., students’ evaluations of teaching and autonomy support), students’ feelings of boredom, and agentic engagement and motivation in the higher education context. To do so, the present study incorporates one of the most popular tools in natural language processing to address students’ evaluations of teaching: sentiment analysis. Whereas most research has focused on the creation of SA models and on assessing students’ satisfaction with teachers and courses, to the authors’ best knowledge no previous research has included results from SA in an explanatory model. A total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) participated in the study. Students were enrolled in degree and master’s studies at the Faculty of Education of a public university in Spain. Data were collected using an online questionnaire that students accessed through a QR code and completed during a teaching period in which the assessed teacher was not present. To assess students’ sentiments towards their teachers’ teaching, we asked them the following open-ended question: “If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?”. Sentiment analysis was performed with Microsoft's pre-trained model. For this study, we relied on the probability of the student's answer belonging to the negative category. To assess the reliability of the measure, inter-rater agreement between this NLP tool and one of the researchers, who independently coded all answers, was examined. The average pairwise percent agreement and Cohen’s kappa were calculated with ReCal2. The agreement reached was 90.8% and Cohen’s kappa was .68, both considered satisfactory. To test the hypothesized relations, a structural equation model (SEM) was estimated. The model fit indices displayed a good fit to the data: χ² (134) = 351.129, p < .001, RMSEA = .07, SRMR = .09, TLI = .91, CFI = .92. Specifically, results show that boredom was negatively predicted by autonomy-support practices (β = -.47 [-.61, -.33]), whereas for the negative sentiment extracted from SET this relation was positive (β = .23 [.16, .30]). In other words, when students’ opinions of their instructors’ teaching practices were negative, they were more likely to feel bored. Regarding the relations between boredom and student outcomes, boredom negatively predicted students’ motivation to study (β = -.46 [-.63, -.29]) and agentic engagement (β = -.24 [-.33, -.15]). Altogether, the results show a promising future for sentiment analysis techniques in the field of education, as they prove the usefulness of this tool when evaluating relations between teaching practices and student outcomes.
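A minimal sketch of the inter-rater agreement check described above, run on made-up label vectors; the study itself used ReCal2, whereas here sklearn's Cohen's kappa and a plain percent-agreement computation stand in for the same statistics.

```python
# Sketch: percent agreement and Cohen's kappa between the automatic sentiment
# labels and a human coder, on illustrative (made-up) label vectors.
from sklearn.metrics import cohen_kappa_score

machine = ["neg", "neg", "pos", "neu", "neg", "pos", "neu", "neg", "pos", "neg"]
human   = ["neg", "neg", "pos", "neu", "pos", "pos", "neu", "neg", "pos", "neg"]

percent_agreement = sum(m == h for m, h in zip(machine, human)) / len(human) * 100
kappa = cohen_kappa_score(machine, human)
print(f"Percent agreement: {percent_agreement:.1f}%  Cohen's kappa: {kappa:.2f}")
```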

Keywords: sentiment analysis, boredom, motivation, agentic engagement

Procedia PDF Downloads 97
217 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance

Authors: Mina Naeini, Thomas A. Adams II

Abstract:

Solid Oxide Fuel Cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat, which make them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot water pipelines, SOFCs can meet the thermal demand of communities. Therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages over competing power generation units, this technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. In order to find the optimal sizing and operating conditions of SOFCs in a community, proper knowledge of degradation mechanisms and of the effects of operating conditions on long-term SOFC performance is required. The simplified SOFC models that exist in the current literature usually do not provide realistic results, since they tend to underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods. Although such models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and therefore have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over a long period of time at different operating conditions. Conventional SOFCs are composed of yttria-stabilized zirconia (YSZ) as electrolyte, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes. The following degradation processes are considered in this model: oxidation and coarsening of nickel particles in the Ni-cermet anodes, changes in the anode pore radius, degradation of the electrolyte and anode electrical conductivities, and sulfur poisoning of the anode compartment. This model helps decision makers discover the optimal sizing and operation of the cells for stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources. H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, the existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs, as they only account for the reduction in electrochemical activity. A regression model is developed in the current work for sulfur contamination of SOFCs fed with hydrocarbon fuel sources; the model is a function of current density and H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model has wide validity over a range of parameters and is consistent across multiple studies by different independent groups. Simulations using the degradation-based model illustrated that SOFC voltage drops significantly in the first 1500 hours of operation; after that, cells exhibit a slower degradation rate. The present analysis allowed us to discover the reason for the various degradation rate values reported in the literature for conventional SOFCs: the literature is inconsistent in how the degradation rate is defined and calculated. The degradation rate is commonly calculated as the slope of the voltage-versus-time plot, expressed as the percentage voltage drop per 1000 hours of operation. Due to the nonlinear profile of voltage over time, the magnitude of this rate depends on the time interval selected to calculate the curve's slope. To avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, current density has the highest impact on the degradation rate compared to other operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrated that a cell running at a lower current density performs better in the long term in terms of total average energy delivered per year, even though it initially generates less power than it would at a higher current density. This is because of the dominant and devastating impact of large current densities on the long-term performance of SOFCs, as explained by the model.
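A small sketch of the two degradation-rate definitions discussed above, evaluated on a synthetic voltage profile (illustrative only, not the model's output): the interval-based slope in %/1000 h depends on the chosen time window, whereas the instantaneous rate -dV/dt does not.

```python
# Sketch: interval-averaged degradation rate (%/1000 h) vs. instantaneous rate,
# on a synthetic V(t) with a fast early drop followed by slower degradation.
import numpy as np

t = np.linspace(0, 10000, 1001)                 # operating time, hours
v = 0.80 - 0.04 * (1 - np.exp(-t / 1500.0))     # cell voltage, V (illustrative)

def interval_rate(t0, t1):
    """Average degradation rate between t0 and t1, in % of initial voltage per 1000 h."""
    v0, v1 = np.interp([t0, t1], t, v)
    return (v0 - v1) / v[0] * 100 / ((t1 - t0) / 1000.0)

instantaneous = -np.gradient(v, t)              # V per hour, at every point

print(f"0-1500 h window : {interval_rate(0, 1500):.3f} %/1000 h")
print(f"0-10000 h window: {interval_rate(0, 10000):.3f} %/1000 h")   # smaller: window matters
print(f"instantaneous rate at 1500 h: {instantaneous[np.searchsorted(t, 1500)] * 1000:.4f} V/1000 h")
```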

Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs

Procedia PDF Downloads 132
216 ENDO-β-1,4-Xylanase from Thermophilic Geobacillus stearothermophilus: Immobilization Using Matrix Entrapment Technique to Increase the Stability and Recycling Efficiency

Authors: Afsheen Aman, Zainab Bibi, Shah Ali Ul Qader

Abstract:

Introduction: Xylan is a heteropolysaccharide composed of xylose monomers linked through β-1,4 linkages within a complex xylan network. Owing to the wide applications of xylan hydrolytic products (xylose, xylobiose, and xylooligosaccharides), researchers are focusing on the development of various strategies for efficient xylan degradation. One of the most important strategies is the use of heat-tolerant biocatalysts, which act as strong and specific cleaving agents. Therefore, the exploration of the microbial pool from extremely diversified ecosystems is considerably important, and microbial populations from extreme habitats are keenly explored for the isolation of thermophilic entities. These thermozymes usually demonstrate fast hydrolytic rates, can produce high yields of product, and are less prone to microbial contamination. Another possibility for degrading xylan continuously is the use of immobilization techniques. The current work is an effort to merge the positive aspects of both thermozymes and immobilization. Methodology: Geobacillus stearothermophilus was isolated from a soil sample collected near a blast furnace site. This thermophile is capable of producing a thermostable endo-β-1,4-xylanase which cleaves xylan effectively. In the current study, this thermozyme was immobilized within a synthetic and a non-synthetic matrix for continuous production of metabolites using the entrapment technique. The kinetic parameters of the free and immobilized enzymes were studied; for this purpose, calcium alginate and polyacrylamide beads were prepared. Results: For the synthesis of the immobilized beads, sodium alginate (40.0 g L⁻¹) and calcium chloride (0.4 M) were used. The temperature (50°C) and pH (7.0) optima of the immobilized enzyme remained the same for xylan hydrolysis; however, the enzyme-substrate catalytic reaction time rose from 5.0 to 30.0 minutes compared to the free counterpart. The diffusion limit of high-molecular-weight xylan (corncob) caused a decline in the Vmax of the immobilized enzyme from 4773 to 203.7 U min⁻¹, whereas the Km value increased from 0.5074 to 0.5722 mg mL⁻¹ with reference to the free enzyme. Immobilized endo-β-1,4-xylanase showed stability at high temperatures compared to the free enzyme: it retained 18% and 9% residual activity at 70°C and 80°C, respectively, whereas the free enzyme completely lost its activity at both temperatures. The immobilized thermozyme displayed sufficient recycling efficiency and could be reused for up to five reaction cycles, indicating that this enzyme can be a plausible candidate for the paper processing industry. Conclusion: This thermozyme showed good immobilization yield and operational stability for hydrolyzing high-molecular-weight xylan. However, its immobilization properties can be improved further by immobilizing it on different supports for industrial purposes.
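A minimal sketch of how Vmax and Km of the kind reported above can be estimated by a Michaelis-Menten fit; the "measured" rate points are synthesised around the immobilized-enzyme values quoted in the abstract purely for illustration and are not the study's data.

```python
# Sketch: nonlinear least-squares fit of the Michaelis-Menten equation
# v = Vmax * [S] / (Km + [S]) to rate-vs-substrate data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

substrate = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])     # xylan, mg/mL (illustrative)
rng = np.random.default_rng(1)
rate = michaelis_menten(substrate, 203.7, 0.5722) * (1 + 0.03 * rng.standard_normal(7))

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, substrate, rate, p0=[200.0, 0.5])
print(f"Vmax = {vmax_fit:.1f} U/min, Km = {km_fit:.3f} mg/mL")
```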

Keywords: immobilization, reusability, thermozymes, xylanase

Procedia PDF Downloads 374
215 Unscrupulous Intermediaries in International Labour Migration of Nepal

Authors: Anurag Devkota

Abstract:

Foreign employment serves as the strongest pillar in generating employment options for a large number of young Nepalis. Nepali workers are forced to leave the comfort of their homes and are exposed to precarious conditions on a journey to earn enough money to better their lives. The exponential rise in foreign labour migration has produced a snowball effect on the economy of the nation. The dramatic variation in the economic development of the state has established that migration is increasingly significant for livelihood, economic development, political stability, academic discourse, and policy planning in Nepal. Foreign employment practice in Nepal relies heavily on individual agents throughout the migration process. With the fraudulent acts and false promises of these agents, the problems of every Nepali migrant worker start at home. Workers encounter tremendous pre-departure malpractice and exploitation at home by different individual agents during different stages of processing. Although these repeated, widespread wrongdoings of intermediaries are dominant and deeply rooted, the agents have been allowed to walk free in the absence of proper laws to curb their misconduct. It has been found that the existing regulatory mechanisms have not been utilised to their full efficacy and often fall short in addressing the actual concerns of the workers because of complex legal and judicial procedures. Structural changes in the judicial setting will help bring perpetrators under the law and give victims access to justice. Thus, a qualitative improvement of the overall situation of Nepali migrant workers calls for a proper 'regulatory' arrangement vis-à-vis these brokers. Hence, the author aims to carry out a doctrinal study using reports and scholarly articles as the major sources of data: various reports published by different non-governmental and governmental organizations working in the field of labour migration will be examined, and the research will focus on inductive and deductive data analysis. The real challenge of establishing a pro-migrant-worker regime in recent times is to bring the agents under the jurisdiction of the courts in Nepal. The Gulf Visit Study Report, 2017, prepared and launched by the International Relations and Labour Committee of the Legislature-Parliament of Nepal, finds that solving the problems at home solves 80 percent of the problems concerning migrant workers in Nepal. Against this backdrop, this research study is intended to determine ways and measures to curb the role of agents in the foreign employment and labour migration process of Nepal. It will further dig into the regulatory mechanisms of Nepal and map out the essential determinants behind the impunity of agents.

Keywords: foreign employment, labour migration, human rights, migrant workers

Procedia PDF Downloads 116
214 Impact of Chess Intervention on Cognitive Functioning of Children

Authors: Ebenezer Joseph

Abstract:

Chess is a useful tool to enhance general and specific cognitive functioning in children. The present study aims to assess the impact of chess on cognitive functioning in children and to measure the differential impact of socio-demographic factors such as the age and gender of the child on the effectiveness of the chess intervention. This research used an experimental design to study the impact of chess training on the intelligence of children. A pre-test post-test control group design was utilized. The design involved two groups of children: an experimental group and a control group. The experimental group consisted of children who participated in the one-year chess training intervention, while the control group participated in extracurricular activities in school. The main independent variable was training in chess; other independent variables were the gender and age of the child. The dependent variable was the cognitive functioning of the child (as measured by IQ, working memory index, processing speed index, perceptual reasoning index, verbal comprehension index, numerical reasoning, verbal reasoning, non-verbal reasoning, social intelligence, language, conceptual thinking, memory, visual-motor ability, and creativity). The sample consisted of 200 children studying in government and private schools, selected by random sampling, and included both boys and girls in the age range of 6 to 16 years. The experimental group consisted of 100 children (50 from government schools and 50 from private schools) with an equal representation of boys and girls; the control group similarly consisted of 100 children. The dependent variables were assessed using the Binet-Kamat Test of Intelligence, the Wechsler Intelligence Scale for Children - IV (India), and the Wallach-Kogan Creativity Test. The training methodology comprised the Winning Moves Chess Learning Program - Episodes 1–22, lectures with the demonstration board, on-the-board playing and training, chess exercises through workbooks (Chess School 1A, Chess School 2, and tactics), and working with chess software. Further, students' games were mapped using chess software and the playing patterns of each child were studied. They were taught the ideas behind chess openings and were also exposed to classical games. The children participated in mock as well as regular tournaments. Preliminary analysis carried out using independent t-tests with 50 children indicates that chess training has led to significant increases in the intelligence quotient. Children in the experimental group have shown significant increases in composite scores such as working memory and perceptual reasoning. Chess training has significantly enhanced total creativity scores, line drawing, and pattern meaning subscale scores. Systematically learning chess as part of school activities appears to have a broad spectrum of positive outcomes.
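A minimal sketch of the independent-samples t-test mentioned in the preliminary analysis above, run on synthetic pre-to-post gain scores; the group means, spreads, and sizes are illustrative and not the study's data.

```python
# Sketch: compare IQ gain scores of the chess-trained group vs. the control
# group with an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
gains_chess   = rng.normal(loc=6.0, scale=4.0, size=50)   # experimental-group gains (illustrative)
gains_control = rng.normal(loc=1.5, scale=4.0, size=50)   # control-group gains (illustrative)

t_stat, p_value = stats.ttest_ind(gains_chess, gains_control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```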

Keywords: chess, intelligence, creativity, children

Procedia PDF Downloads 257
213 Sustainable Solid Waste Management Solutions for Asian Countries Using the Potential in Municipal Solid Waste of Indian Cities

Authors: S. H. Babu Gurucharan, Priyanka Kaushal

Abstract:

The majority of the world's population is expected to live in the Asia-Pacific region by 2050, and its cities will therefore generate the most waste. India, being the second most populous country in the world, is an ideal case study for identifying a solution for Asian countries. Waste minimisation and utilisation have always been part of Indian culture, but during rapid urbanisation, society lost the habit of minimising and utilising waste. Presently, waste is not considered a resource, thus wasting an opportunity to tap resources. The technologies in vogue are not suited for effective treatment of large quantities of generated solid waste without impacting the environment and the population. If not treated efficiently, waste can become a silent killer. This article highlights the Indian municipal solid waste scenario as a key indicator of Asian waste management and recommends sustainable waste management along with effective solutions for treating solid waste. The methods followed during the research were to analyse data on the characteristics of solid waste generated in Indian cities, to evaluate current technologies in order to identify the most suitable technology for Indian conditions with minimal environmental impact, to interact with the technology providers' technical teams, to generate a technical process specific to Indian conditions, and then to examine the environmental impact and the advantages and disadvantages of the suggested process. The most important finding from the study was the recognition that most of the municipal waste treatment technologies currently employed operate sub-optimally in Indian conditions. Therefore, using the available data, the study generated heat and mass balances of the processes to arrive at a final technical process, broadly divided into waste processing, waste treatment, and power generation, through various permutations and combinations at each stage, to ensure that the process is techno-commercially viable in Indian conditions. The environmental impact was then assessed through secondary sources, and a comparison of the environmental impact of the different technologies was tabulated. The major advantages of the suggested process are the effective use of waste for resource generation, both in terms of maximised power output and conversion to eco-friendly products such as biofuels or chemicals using advanced technologies, minimal environmental impact, and the smallest landfill requirement. The major drawbacks are the capital, operation, and maintenance costs. The technologies currently used by Indian municipalities have their own limitations, and the shortlisted technology is far superior to the other technologies in vogue. Treatment of municipal solid waste with efficient green power generation is possible through a combination of suitable environment-friendly technologies; a combination of bio-reactors and plasma-based gasification technology is the most suitable for Indian waste and, in turn, for Asian waste conditions.

Keywords: calorific value, gas fermentation, landfill, municipal solid waste, plasma gasification, syngas

Procedia PDF Downloads 184
212 Assessing Mycotoxin Exposure from Processed Cereal-Based Foods for Children

Authors: Soraia V. M. de Sá, Miguel A. Faria, José O. Fernandes, Sara C. Cunha

Abstract:

Cereals play a vital role in fulfilling the nutritional needs of children, supplying essential nutrients crucial for their growth and development. However, concerns arise from children's heightened vulnerability, owing to their unique physiology, specific dietary requirements, and relatively higher intake in relation to body weight. This vulnerability exposes them to harmful food contaminants, particularly mycotoxins, which are prevalent in cereals. Because of the thermal stability of mycotoxins, conventional industrial food processing often falls short of eliminating them. Children, especially those aged 4 months to 12 years, frequently encounter mycotoxins through the consumption of specialized food products such as instant foods, breakfast cereals, bars, cookie snacks, fruit purees, and various dairy items. Close monitoring of this demographic group's exposure to mycotoxins is essential, as the ingestion of these toxins may weaken children's immune systems, reduce their resistance to infectious diseases, and potentially lead to cognitive impairments. The severe toxicity of mycotoxins, some of which are classified as carcinogenic, has spurred the establishment and ongoing revision of legislative limits on mycotoxin levels in food and feed globally. While EU Commission Regulation 1881/2006 addresses well-known mycotoxins in processed cereal-based foods and infant foods, the absence of regulations specifically addressing emerging mycotoxins underscores a glaring gap in the regulatory framework that necessitates immediate attention. Emerging mycotoxins have come under mounting scrutiny in recent years due to their pervasive presence in various foodstuffs, notably cereals and cereal-based products. Alarmingly, exposure to multiple mycotoxins is hypothesized to exhibit higher toxicity than the isolated effects of each, raising particular concerns for products primarily aimed at children. This study scrutinizes the presence of 22 mycotoxins, covering a diverse range of chemical classes, in 148 processed cereal-based foods, including 39 breakfast cereals, 25 infant formulas, 27 snacks, 25 cereal bars, and 32 cookies commercially available in Portugal. The analytical approach employed a modified QuEChERS procedure followed by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) analysis. Given the paucity of information on children's risk of exposure to multiple mycotoxins in cereals and cereal-based products, this study pioneers this evaluation for children in Portugal. Overall, aflatoxin B1 (AFB1) and aflatoxin G2 (AFG2) emerged as the most prevalent regulated mycotoxins, while enniatin B (ENNB) and sterigmatocystin (STG) were the most frequently detected emerging mycotoxins.

Keywords: cereal-based products, children's nutrition, food safety, UPLC-MS/MS analysis

Procedia PDF Downloads 71
211 Regeneration of Cesium-Exhausted Activated Carbons by Microwave Irradiation

Authors: Pietro P. Falciglia, Erica Gagliano, Vincenza Brancato, Alfio Catalfo, Guglielmo Finocchiaro, Guido De Guidi, Stefano Romano, Paolo Roccaro, Federico G. A. Vagliasindi

Abstract:

Cesium-137 (¹³⁷Cs) is a major radionuclide in spent nuclear fuel processing, and it represents the most important source of contamination related to nuclear accidents. Cesium-137 has long-term radiological effects that are a major concern for human health. Several physico-chemical methods have been proposed for ¹³⁷Cs removal from impacted water: ion exchange, adsorption, chemical precipitation, membrane processes, coagulation, and electrochemical treatment. However, these methods can be limited by ionic selectivity and efficiency, or they have very restricted full-scale application due to high equipment and chemical costs. On the other hand, adsorption is considered a more cost-effective solution, and activated carbons (ACs) are known as low-cost and effective adsorbents for a wide range of pollutants, among which are radionuclides. However, the adsorption of Cs onto ACs has been investigated in only a few, non-exhaustive studies. In addition, exhausted activated carbons are generally discarded in landfills, which is neither an eco-friendly nor an economical solution. Consequently, the regeneration of exhausted ACs must be considered the preferable choice. Several alternatives, including conventional thermal, solvent, biological, and electrochemical regeneration, are available but are affected by various economic or environmental concerns. Microwave (MW) irradiation has been widely used in industrial and environmental applications and has attracted much attention for regenerating activated carbons. The growing interest in MW irradiation is based on the ability of the irradiated medium to convert low-power irradiation energy into a rapid and large temperature increase, provided the medium has good dielectric features. ACs are excellent MW absorbers, with high mechanical strength and good resistance to heating. This work investigates the feasibility of MW irradiation for the regeneration of Cs-exhausted ACs. Batch adsorption experiments were carried out using commercially available granular activated carbon (GAC); Cs-saturated AC samples were then treated using a controllable bench-scale 2.45-GHz MW oven, and different adsorption-regeneration cycles were investigated. The regeneration efficiency (RE), weight loss percentage, and textural properties of the AC samples over the adsorption-regeneration cycles were also assessed. The main results demonstrated a relatively low adsorption capacity for Cs, although the suitability of ACs was strictly linked to their dielectric nature, which allows very efficient thermal regeneration by MW irradiation. The weight loss percentage was found to be less than 2%, and an increase in RE after three cycles was also observed. Furthermore, MW regeneration preserved the pore structure of the regenerated ACs. For a deeper exploration of the full-scale applicability of MW regeneration, further investigations with more adsorption-regeneration cycles or with fixed-bed columns are required.
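A small sketch of the cycle book-keeping described above, assuming the common convention that RE(%) is the ratio of the regenerated to the virgin adsorption capacity and that weight loss is taken relative to the initial carbon mass; the abstract does not state its exact definitions, and all numbers below are illustrative.

```python
# Sketch: regeneration efficiency (RE) and cumulative weight loss across
# adsorption-regeneration cycles, using illustrative capacities and masses.
capacities_mg_g = [2.10, 1.95, 2.02, 2.15]   # Cs capacity: virgin, then after cycles 1-3
masses_g        = [10.00, 9.95, 9.90, 9.84]  # carbon mass before each adsorption step

virgin_capacity, virgin_mass = capacities_mg_g[0], masses_g[0]
for cycle, (q, m) in enumerate(zip(capacities_mg_g[1:], masses_g[1:]), start=1):
    re_pct = q / virgin_capacity * 100
    weight_loss_pct = (virgin_mass - m) / virgin_mass * 100
    print(f"cycle {cycle}: RE = {re_pct:5.1f}%, cumulative weight loss = {weight_loss_pct:4.2f}%")
```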

Keywords: adsorption mechanisms, cesium, granular activated carbons, microwave regeneration

Procedia PDF Downloads 141
210 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality

Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti

Abstract:

One of the most common pieces of equipment still used in forestry today for pruning, felling, and processing trees is the chainsaw. However, chainsaw use involves serious hazards and one of the highest accident rates in both professional and non-professional work. Felling is proportionally the most dangerous phase, both in severity and frequency, because of the risk of being hit by the tree the operator wants to cut down. To avoid this, a correct sequence of chainsaw cuts must be taught for the different conditions of the tree. Virtual reality (VR) makes it possible to simulate chainsaw use virtually, without danger of injury. The limitations of the existing applications are as follows. First, existing platforms are not symmetrically collaborative, because only the trainee is in virtual reality while the trainer can only watch the virtual environment on a laptop or PC, which results in an inefficient teacher-learner relationship. Second, most applications only involve the use of a virtual chainsaw, so the trainee cannot feel the weight and inertia of a real chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrical collaborative training application based on VR and mixed reality (MR), with a virtual chainsaw overlaid on a real one in MR. The research and training platform was developed for the Meta Quest 2 head-mounted display; the application is based on the Unity 3D engine and the Present Platform Interaction SDK (PPI-SDK) developed by Meta. PPI-SDK avoids the use of controllers and enables hand tracking and MR. With the combination of these two technologies, it was possible to overlay a virtual chainsaw on a real chainsaw in MR and synchronize their movements in VR. This ensures that the user feels the weight of the actual chainsaw, tenses the appropriate muscles, and performs the appropriate movements during the test, allowing the user to learn correct body posture. The chainsaw works only if the right sequence of cuts is made to fell the tree. Contact detection is handled by Unity's physics system, which allows the interaction of objects that simulate real-world behavior. Each cut of the chainsaw is defined by a so-called collider, and the felling of the tree can only occur if the colliders are activated in the right order, simulating a safe felling technique. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: cutting a tree leaning forward, cutting a medium-sized tree leaning backward, cutting a large tree leaning backward, sectioning the trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a dedicated questionnaire. The results are expected to assess both the increase in learning compared to a theoretical lecture and the immersion and telepresence of the platform.
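A language-agnostic sketch, written here in Python, of the ordered-collider rule described above: the felling event is released only when the cut colliders are triggered in the prescribed sequence. In the actual platform this logic lives in Unity (C#) and the cut names below are purely illustrative, chosen for a tree leaning forward.

```python
# Sketch: a wrong cut resets the sequence; only the full correct sequence
# allows the tree to fall, mirroring the collider-order check in the simulator.
REQUIRED_SEQUENCE = ["notch_top_cut", "notch_bottom_cut", "back_cut"]   # hypothetical names

class FellingValidator:
    def __init__(self, required):
        self.required = required
        self.progress = 0

    def register_cut(self, collider_name):
        """Called whenever the chainsaw triggers a cut collider."""
        if self.progress < len(self.required) and collider_name == self.required[self.progress]:
            self.progress += 1
        else:
            self.progress = 0                         # wrong cut: restart the sequence
        return self.progress == len(self.required)    # True -> the tree may fall

validator = FellingValidator(REQUIRED_SEQUENCE)
for cut in ["notch_top_cut", "notch_bottom_cut", "back_cut"]:
    tree_falls = validator.register_cut(cut)
print("tree falls:", tree_falls)
```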

Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training

Procedia PDF Downloads 107