Search results for: blockchain technologies
531 Enhancing Tower Crane Safety: A UAV-based Intelligent Inspection Approach
Authors: Xin Jiao, Xin Zhang, Jian Fan, Zhenwei Cai, Yiming Xu
Abstract:
Tower cranes play a crucial role in the construction industry, facilitating the vertical and horizontal movement of materials and aiding in building construction, especially for high-rise structures. However, tower crane accidents can lead to severe consequences, highlighting the importance of effective safety management and inspection. This paper presents an innovative approach to tower crane inspection utilizing Unmanned Aerial Vehicles (UAVs) and an Intelligent Inspection APP System. The system leverages UAVs equipped with high-definition cameras to conduct efficient and comprehensive inspections, reducing manual labor, inspection time, and risk. By integrating advanced technologies such as Real-Time Kinematic (RTK) positioning and digital image processing, the system enables precise route planning and the collection of safety hazard images. A case study conducted on a construction site demonstrates the practicality and effectiveness of the proposed method, showcasing its potential to enhance tower crane safety. On-site testing of UAV intelligent inspections reveals key findings: efficient tower crane hazard inspection within 30 minutes, with full-identification coverage rates of 76.3%, 64.8%, and 76.2% for major, significant, and general hazards, respectively, and preliminary-identification coverage rates of 18.5%, 27.2%, and 19%, respectively. Notably, UAVs effectively identify various tower crane hazards, except for those requiring auditory detection. The limitations of this study involve two aspects. Firstly, during the initial inspection, manual drone piloting is required to mark tower crane points; subsequent inspections are flown automatically by reusing the marked route. Secondly, images captured by the drone require manual identification and review, which can be time-consuming for equipment management personnel, particularly when dealing with a large volume of images.
Subsequent research efforts will focus on AI training and recognition of safety hazard images, as well as the automatic generation of inspection reports and corrective management based on recognition results. This work is currently in progress, and outcomes will be released in due course. Keywords: tower crane, inspection, unmanned aerial vehicle (UAV), intelligent inspection app system, safety management
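The coverage rates quoted above are simple percentages of hazards identified out of hazards present per severity class. The sketch below reproduces the arithmetic; the raw hazard counts are hypothetical, chosen only so the resulting percentages match the reported figures (the abstract gives only the percentages).

```python
# Hypothetical hazard counts per severity class; only the percentage
# formula reflects the abstract -- the raw counts are illustrative.
hazards_found = {"major": 74, "significant": 81, "general": 160}   # fully identified by UAV
hazards_total = {"major": 97, "significant": 125, "general": 210}  # present on site

def coverage(found, total):
    """Full-identification coverage rate as a percentage, one decimal."""
    return round(100 * found / total, 1)

for severity in hazards_total:
    rate = coverage(hazards_found[severity], hazards_total[severity])
    print(f"{severity}: {rate}%")
```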
Procedia PDF Downloads 435
530 Optimized Deep Learning-Based Facial Emotion Recognition System
Authors: Erick C. Valverde, Wansu Lim
Abstract:
Facial emotion recognition (FER) systems have recently been developed for advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) and support better human interaction with smart technologies. An FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies human expressions into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such systems require intensive research to address issues arising from human diversity, unique individual expressions, and the variation of facial features with age. These issues generally limit the ability of an FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms such as K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional systems suffer from low accuracy because they cannot efficiently extract the significant features of many human emotions. To increase accuracy, deep learning (DL)-based methods, such as convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face through the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational cost and reduce memory usage, respectively.
To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the benefits of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system. Keywords: deep learning, face detection, facial emotion recognition, network optimization methods
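Quantization, one of the optimization methods named above, maps float weights to low-precision integers to cut memory usage. The following is a minimal sketch of symmetric int8 post-training quantization in plain Python; a real Xception pipeline would use a framework's quantization toolkit rather than this toy scheme.

```python
# Symmetric per-tensor int8 quantization: one shared scale, signed 8-bit range.
def quantize_int8(weights):
    """Map float weights to int8 values with a shared symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero tensors
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.45, -1.27, 0.03, 0.89]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight lies within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 for a, b in zip(weights, restored))
```

Storing `q` as 8-bit integers instead of 32-bit floats is where the 4x memory reduction comes from; the accuracy cost is bounded by the quantization step shown in the final assertion.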
Procedia PDF Downloads 120
529 Increased Efficiency during Oxygen Carrier Aided Combustion of Municipal Solid Waste in an Industrial Scaled Circulating Fluidized Bed-Boiler
Authors: Angelica Corcoran, Fredrik Lind, Pavleta Knutsson, Henrik Thunman
Abstract:
At present, solid waste is predominantly deposited in landfills. Furthermore, the impending climate change requires new solutions for a sustainable future energy mix. Currently, solid waste is utilized globally only to a small extent as fuel for heat and power production. Due to its variable composition and size, solid waste is considered difficult to combust and requires a technology with high fuel flexibility. One of the commercial technologies used for combustion of such difficult fuels is the circulating fluidized bed (CFB). In a CFB boiler, fine particles of a solid material are used as 'bed material', which is accelerated by the incoming combustion air that causes the bed material to fluidize. The bed material has conventionally been silica sand, whose main purpose is to act as a heat carrier, transferring heat released by the combustion to the heat-transfer surfaces. However, the release of volatile compounds occurs rapidly in comparison with the lateral mixing in the combustion chamber. To ensure complete combustion, a surplus of air is introduced, which decreases the total efficiency of the boiler. In recent years, the concept of partly or entirely replacing the silica sand with an oxygen carrier as bed material has been developed. By introducing an oxygen carrier to the combustion chamber, combustion can be spread out both temporally and spatially in the boiler. Specifically, the oxygen carrier can take up oxygen from the combustion air where it is in abundance and release it to combustible gases where oxygen is in deficit. The concept is referred to as oxygen carrier aided combustion (OCAC), with the natural ore ilmenite (FeTiO3) as the oxygen carrier. The authors have validated the oxygen-buffering ability of ilmenite during combustion of biomass in Chalmers' 12-MWth CFB boiler in previous publications.
Furthermore, the concept has been demonstrated at full industrial scale during combustion of municipal solid waste (MSW) in E.ON's 75-MWth CFB boiler. The experimental campaigns have shown increased mass transfer of oxygen inside the boiler when combusting both biomass and MSW. As a result, a higher degree of burnout is achieved inside the combustion chamber and the plant can be operated at a lower surplus of air. Moreover, the buffer of oxygen provided by the oxygen carrier makes the system less sensitive to disruptions in operation. In conclusion, combusting difficult fuels with OCAC results in higher operating stability and an increase in boiler efficiency. Keywords: OCAC, ilmenite, combustion, CFB
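The efficiency argument above rests on the air surplus: excess combustion air leaves the stack as hot flue gas, carrying heat away. A back-of-the-envelope sketch of the relationship, using the standard approximation that dry flue-gas O2 relates to the excess-air ratio lambda as O2% ≈ 21(λ−1)/λ; the O2 levels below are illustrative, not values from the campaigns.

```python
# Standard excess-air approximation for dry flue gas.
def o2_percent(lmbda):
    """Approximate dry flue-gas O2 (%) for a given excess-air ratio."""
    return 21 * (lmbda - 1) / lmbda

def excess_air_ratio(o2):
    """Invert the relation: excess-air ratio from measured flue-gas O2 (%)."""
    return 21 / (21 - o2)

# A conventional CFB might run at roughly 3.5% O2; the oxygen buffer of
# OCAC allows operation closer to stoichiometric, e.g. 2.5% O2.
for o2 in (3.5, 2.5):
    print(f"O2 {o2}% -> lambda {excess_air_ratio(o2):.2f}")
```

Lowering lambda from 1.20 toward 1.14 in this example means proportionally less inert air heated and exhausted per unit of fuel, which is the efficiency gain the abstract describes.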
Procedia PDF Downloads 240
528 Ecosystem, Environment Being Threatened by the Activities of Major Industries
Authors: Charles Akinola Imolehin
Abstract:
With a world population of over 6.6 billion people and almost a quarter of a million added each day, the scale of human activity and environmental impact is unprecedented. Soaring population growth over the past century has created a visible challenge to earth's life-support systems. Critical natural resources such as clean groundwater, fertile topsoil, and biodiversity are diminishing at an exponential rate, orders of magnitude above that at which they can be regenerated. In addition, the world faces an onslaught of other environmental threats, including global climate change, global warming, intensified acid rain, stratospheric ozone depletion, and health-threatening pollution. Overpopulation and the use of deleterious technologies combine to raise the scale of human activities to a level that underlies all of these problems. These intensifying trends cannot continue indefinitely; hopefully, through increased understanding and valuation of ecosystems and their services, earth's basic life-support system will be protected for the future. In fact, human civilization is now the dominant cause of change in the global environment. Now that the human relationship to the earth has changed so utterly, there is a need to recognize that change and understand its implications. There are two aspects to this challenge. The first is to realize that human activity has the power to harm the earth and can indeed have global and even permanent effects. The second is to realize that the only way to understand humanity's new role as a co-architect of nature is to see human activities as part of a complex system that does not operate according to the simple rules of cause and effect we are used to. Understanding the physical and biological dimensions of the earth system is therefore an important precondition for making sensible policy to protect our environment.
Believing in sustainable development is a matter of reconciling respect for the environment, social equity, and economic profitability. There is also a strong belief that environmental protection is primarily about reducing air and water pollution, but it equally includes improving the environmental performance of existing processes. That is why it is important to keep at the heart of business policy that the environmental problem is not only our effect on the environment but the relationship of production activities to the environment. Operations should be planned with this positive thinking so that they remain environmentally friendly, with sustainability awareness maintained at all sites of operation. Keywords: earth's ocean, marine animal life under threat, flooding, critical natural resources pollution
Procedia PDF Downloads 20
527 Integrating Reactive Chlorine Species Generation with H2 Evolution in a Multifunctional Photoelectrochemical System for Low Operational Carbon Emissions Saline Sewage Treatment
Authors: Zexiao Zheng, Irene M. C. Lo
Abstract:
Organic pollutants, ammonia, and bacteria are major contaminants in sewage, which may adversely impact ecosystems without proper treatment. Conventional wastewater treatment plants (WWTPs) remove these contaminants from sewage but suffer from high carbon emissions and are unable to remove emerging organic pollutants (EOPs). Herein, we have developed a low operational carbon emissions multifunctional photoelectrochemical (PEC) system for saline sewage treatment that simultaneously removes organic compounds, ammonia, and bacteria, coupled with H2 evolution. A reduced BiVO4 (r-BiVO4) with improved PEC properties, owing to the construction of oxygen vacancies and V4+ species, was developed for the multifunctional PEC system. The PEC/r-BiVO4 process could treat saline sewage to meet the local WWTPs' discharge standard in 40 minutes at 2.0 V vs. Ag/AgCl and completely degrade carbamazepine (one of the EOPs), coupled with significant evolution of H2. A remarkable reduction in operational carbon emissions was achieved by the PEC/r-BiVO4 process compared with large-scale WWTPs, attributed to the restrained direct carbon emissions from the generation of greenhouse gases. Mechanistic investigation revealed that the PEC system could activate chloride ions in sewage to generate reactive chlorine species and facilitate •OH production, promoting contaminant removal. The PEC system exhibited operational feasibility at different pH values and total suspended solids concentrations and has outstanding reusability and stability, confirming its promising practical potential. The study combined the simultaneous removal of three major contaminants from saline sewage with H2 evolution in a single PEC process, demonstrating a viable approach to supplementing and extending existing wastewater treatment technologies.
The study generated profound insights into the in-situ activation of chloride ions already present in sewage for contaminant removal and offered fundamental theories for applying the PEC system in sewage remediation with low operational carbon emissions. The developed PEC system can fit well with the future needs of wastewater treatment because of the following features: (i) low operational carbon emissions, benefiting the carbon neutrality process; (ii) higher effluent quality due to the elimination of EOPs; (iii) chemical-free operation of sewage treatment; (iv) easy reuse and recycling without secondary pollution. Keywords: contaminants removal, H2 evolution, multifunctional PEC system, operational carbon emissions, saline sewage treatment, r-BiVO4 photoanodes
Procedia PDF Downloads 161
526 Plasma Arc Burner for Pulverized Coal Combustion
Authors: Gela Gelashvili, David Gelenidze, Sulkhan Nanobashvili, Irakli Nanobashvili, George Tavkhelidze, Tsiuri Sitchinava
Abstract:
The development of a new, highly efficient plasma arc combustion system for pulverized coal is presented. As is well known, coal is one of the main energy carriers from which electric and heat energy is produced in thermal power stations. The quality of extracted coal is decreasing rapidly. Therefore, difficulties associated with its firing and complete combustion arise, and thermo-chemical preparation of the pulverized coal becomes necessary. Usually, other organic fuels (mazut fuel oil or natural gas) are added to low-quality coal for this purpose. The fraction of additional organic fuels varies within the 35-40% range. This dramatically decreases the economic efficiency of such systems, while emissions of noxious substances into the environment increase. Because of all this, plasma combustion systems for pulverized coal are being intensively developed worldwide. These systems are equipped with non-transferred plasma arc torches. They allow practically complete combustion of pulverized coal (without organic additives) in boilers and increase energetic and financial efficiency, while emissions of noxious substances into the environment decrease dramatically. However, non-transferred plasma torches have numerous drawbacks, e.g. complicated construction, low service life (especially at high power), instability of the plasma arc and, most importantly, up to 30% energy loss due to anode cooling. For these reasons, new plasma technologies free from these shortcomings are under intense development. In our proposed system, the pulverized coal-air mixture passes through the plasma arc area, which burns between two carbon electrodes directly in the pulverized coal muffle burner.
Consumption of the carbon electrodes is low, and they do not need a cooling system; but the main advantage of this method is that the radiation of the plasma arc directly impacts the coal-air mixture, which accelerates the thermo-chemical preparation of the coal for burning. To ensure the stability of the plasma arc in such difficult conditions, we have developed a power source that maintains a fixed current: fluctuations in the arc resistance are automatically compensated by a change in voltage, and the plasma arc length can be regulated over a wide range. Our combustion system, in which the plasma arc acts directly on the pulverized coal-air mixture, is simple. This should allow a significant improvement in pulverized coal combustion (especially for low-quality coal) and its economic efficiency. Preliminary experiments demonstrated the successful functioning of the system. Keywords: coal combustion, plasma arc, plasma torches, pulverized coal
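The constant-current behavior described above follows directly from Ohm's law: to hold the set arc current as the arc resistance fluctuates, the supply must output V = I·R. The toy calculation below illustrates this; the 200 A setpoint and the resistance values are assumptions for illustration, and the real supply is a power-electronic circuit, not software.

```python
# Toy illustration of constant-current regulation: the supply adjusts
# voltage as arc resistance wanders so that current stays at the setpoint.
SET_CURRENT = 200.0  # amperes (assumed setpoint, not from the paper)

def required_voltage(set_current, arc_resistance):
    """Voltage the supply must apply to hold the current setpoint (V = I * R)."""
    return set_current * arc_resistance

# Arc resistance fluctuating as the coal-air flow disturbs the arc.
for r in (0.8, 1.1, 0.95):
    v = required_voltage(SET_CURRENT, r)
    print(f"R = {r:.2f} ohm -> V = {v:.0f} V, I = {v / r:.0f} A")
```

Whatever the resistance does, dividing the applied voltage back by the resistance recovers the setpoint current, which is exactly the compensation behavior the abstract attributes to the power source.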
Procedia PDF Downloads 161
525 Sustainable Treatment of Vegetable Oil Industry Wastewaters by Xanthomonas campestris
Authors: Bojana Ž. Bajić, Siniša N. Dodić, Vladimir S. Puškaš, Jelena M. Dodić
Abstract:
Increasing industrialization as a response to the demands of the consumer society greatly exploits resources and generates large amounts of waste effluents in addition to the desired product. It is therefore a priority to implement technologies with maximum utilization of raw materials and energy and minimum generation of waste effluents and/or their recycling (secondary use). Considering the process conditions and the nature of the raw materials used by the vegetable oil industry, its wastewaters can be used as substrates for biotechnological production processes that require large amounts of water. This way, the waste effluents of one branch of industry become raw materials for another branch, which produces a new product while reducing wastewater pollution and thereby reducing negative environmental impacts. Vegetable oil production generates wastewaters during the rinsing of oils and fats, which contain mainly fatty acid pollutants. The vegetable oil industry generates large amounts of waste effluents, especially in the processes of degumming, deacidification, deodorization, and neutralization. Wastewaters from the vegetable oil industry are generated throughout the year in significant amounts, depending on the capacity of the vegetable oil production. There are no known alternative applications for these wastewaters as raw materials for the production of marketable products. Since the literature contains no data on a potential negative impact of fatty acids on the metabolism of the bacterium Xanthomonas campestris, these wastewaters were considered as potential raw materials for the biotechnological production of xanthan. In this research, vegetable oil industry wastewaters were used as the basis for the cultivation media for xanthan production with Xanthomonas campestris ATCC 13951.
The biosynthesis of xanthan on cultivation media based on vegetable oil industry wastewaters was examined to gain insight into the possibility of their use in this biotechnological process. Additionally, it was important to experimentally confirm the absence of substances that have an inhibitory effect on the metabolism of the production microorganism. Xanthan content, rheological parameters of the cultivation media, conversion of carbon into xanthan, and conversions of the most significant nutrients for biosynthesis (carbon, nitrogen, and phosphorus sources) were determined as indicators of the success of biosynthesis. The obtained results show that biotechnological production of the biopolymer xanthan by the bacterium Xanthomonas campestris on cultivation media based on vegetable oil industry wastewaters simultaneously provides environmental preservation and economic benefits, offering a sustainable solution to the problem of wastewater treatment. Keywords: biotechnology, sustainable bioprocess, vegetable oil industry wastewaters, Xanthomonas campestris
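The carbon-conversion indicator mentioned above is a simple yield ratio: grams of xanthan produced per gram of carbon source consumed. A minimal sketch of the calculation follows; the concentrations used are illustrative, not measurements from the study.

```python
# Yield-style conversion indicator: product formed per substrate consumed.
def carbon_conversion(xanthan_g_per_l, carbon_consumed_g_per_l):
    """Grams of xanthan produced per gram of carbon source consumed."""
    if carbon_consumed_g_per_l <= 0:
        raise ValueError("no carbon source consumed")
    return xanthan_g_per_l / carbon_consumed_g_per_l

# e.g. 12 g/L xanthan produced from 20 g/L of consumed carbon source
# (hypothetical values for illustration only):
print(f"Carbon conversion: {carbon_conversion(12.0, 20.0):.0%}")
```

The nitrogen- and phosphorus-source conversions named in the abstract follow the same consumed-versus-initial bookkeeping, just with the respective nutrient in the denominator.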
Procedia PDF Downloads 151
524 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 of them originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, with strong supporting policy instruments utilized in Germany for its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: an unresolved divide between data science experts and manufacturing process experts in industry. Data analytics expertise is of little use unless the manufacturing process information is utilized.
A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap, the researchers initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory. However, only three papers contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research. Keywords: analytics, digitization, industry 4.0, manufacturing
Procedia PDF Downloads 113
523 Comparative Study of Various Treatment Positioning Technique: A Site Specific Study-CA. Breast
Authors: Kamal Kaushik, Dandpani Epili, Ajay G. V., Ashutosh, S. Pradhaan
Abstract:
Introduction: Radiation therapy has come a long way over the decades, from 2-dimensional radiotherapy to intensity-modulated radiation therapy (IMRT) and VMAT. Advanced radiation therapy requires better patient position reproducibility to deliver precise, high-quality treatment, which raises the need for better image guidance technologies for precise patient positioning. This study presents a two-tattoo simulation with roll correction, a technique comparable to other advanced patient positioning techniques. Objective: This site-specific study aims to compare various treatment positioning techniques used for patients with carcinoma of the breast undergoing radiotherapy. In this study, we compare five positioning methods: i) Vacloc with three tattoos, ii) breast board with three tattoos, iii) thermoplastic cast with three fiducials, iv) breast board with a thermoplastic mask and three tattoos, v) breast board with two tattoos (the roll correction method). Methods and material: All-in-one (AIO) solution immobilization was used in all patient positioning techniques. The two-tattoo simulation involves positioning the patient with the help of a thoracic-abdomen wedge, armrest, and knee rest. After proper patient positioning, two tattoos are marked on the treatment side of the patient. Fiducials are then placed as per the clinical border markers: (1) sternal notch (lower border of the clavicular head), (2) 2 cm below the contralateral breast, (3) midline between markers 1 and 2, (4) mid-axillary on the same axis as marker 3 (markers 3 and 4 should be on the same axis). During plan implementation, a roll depth correction is applied as per the anterior and lateral positioning tattoos, followed by the shifts required for the isocentre position.
The shifts are then verified by SSD on the patient surface, followed by radiographic verification using cone beam computed tomography (CBCT). Results: The five positioning techniques were compared in terms of the shifts produced in the vertical, longitudinal, and lateral directions. The observations clearly suggest that the average longitudinal shift with the two-tattoo roll correction technique is smaller than with every other positioning technique, while the vertical and lateral shifts are comparable to those of other modern positioning techniques. Conclusion: The two-tattoo simulation with roll correction provides a better patient setup with a technique that can be implemented easily in most radiotherapy centers across developing nations where 3D verification techniques are not available with the delivery units, as the observed shifts are minimal and comparable to those obtained with Vacloc and modern amenities. Keywords: Ca. breast, breast board, roll correction technique, CBCT
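The roll depth correction described above can be pictured geometrically. Under the assumption that patient roll is estimated from the height difference between the anterior and lateral tattoo positions over their known lateral separation, the implied roll angle is a simple arctangent; this formulation and the numbers below are illustrative, not taken from the paper.

```python
import math

def roll_degrees(height_diff_cm, separation_cm):
    """Roll angle (degrees) implied by unequal tattoo heights
    measured across a known lateral separation."""
    return math.degrees(math.atan2(height_diff_cm, separation_cm))

# e.g. a 0.5 cm height difference across 15 cm of tattoo separation
# (hypothetical values) corresponds to a roll of about 1.9 degrees:
print(f"roll = {roll_degrees(0.5, 15.0):.2f} deg")
```

A zero height difference gives zero roll, which is the condition the correction drives the setup toward before the isocentre shifts are applied.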
Procedia PDF Downloads 135
522 Delving into the Concept of Social Capital in the Smart City Research
Authors: Atefe Malekkhani, Lee Beattie, Mohsen Mohammadzadeh
Abstract:
The unprecedented growth of megacities and urban areas around the world has resulted in numerous risks, concerns, and problems across various aspects of urban life, including the environmental, social, and economic domains, such as climate change and spatial and social inequalities. In this situation, the ever-increasing progress of technology has given urban authorities hope that the negative effects of various socio-economic and environmental crises can be mitigated with the use of information and communication technologies (ICTs). The concept of the 'smart city' represents an emerging solution to the urban challenges arising from increased urbanization. However, smart cities are often perceived primarily as technological initiatives and are implemented without considering the social and cultural contexts of cities and the needs of their residents. The implementation of smart city projects and initiatives has the potential to (un)intentionally exacerbate pre-existing social, spatial, and cultural segregation. The impact of the smart city on the social capital of the people who use smart city systems, and on governance and policymakers, is therefore worth exploring; the importance of inhabitants to the existence and development of smart cities cannot be overlooked. Social capital has been treated from different perspectives in smart city studies. A review of the literature on social capital and the smart city shows that social capital plays three different roles in smart city development. Some research treats social capital as a component of the smart city, embedded in its dimensions, definitions, or strategies, while other work sees it as a social outcome of smart city development and argues that the move to smart cities improves social capital; however, in most cases, this remains an unproven hypothesis.
Other studies show that social capital can enhance the functions of smart cities and that the consideration of social capital in planning smart cities should be promoted. Despite the existing theoretical and practical knowledge, there is a significant research gap in reviewing the knowledge domain of smart city studies through the lens of social capital. To shed light on this issue, this study aims to explore the domain of existing smart city research through the lens of social capital. The research will use the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses' (PRISMA) method to review the relevant literature, focusing on the key concepts of 'Smart City' and 'Social Capital'. Studies will be selected from the Web of Science Core Collection, using a selection process that involves identifying literature sources and screening and filtering studies based on titles, abstracts, and full-text reading. Keywords: smart city, urban digitalisation, ICT, social capital
Procedia PDF Downloads 15
521 Detailed Analysis of Mechanism of Crude Oil and Surfactant Emulsion
Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel
Abstract:
A number of surfactants that exhibit ultra-low interfacial tension and excellent microemulsion phase behavior with crude oils of low to medium gravity are not sufficiently soluble at optimum salinity to produce stable aqueous solutions. Such solutions often show phase separation after a few days at reservoir temperature, which defeats the purpose, as this time is short compared to the residence time in a reservoir for a surfactant flood. The addition of polymer often exacerbates the problem, although the poor stability of the surfactant at high salinity remains the pivotal issue. Surfactants such as SDS and CTAB with large hydrophobes produce the lowest IFT but are often not sufficiently water-soluble at the desired salinity. Hydrophilic co-solvents and/or co-surfactants are needed to make the surfactant-polymer solution stable at the desired salinity. This study contrasts the effect of adding a co-solvent on the stability of a surfactant-oil emulsion. The idea is to use a co-surfactant to increase the stability of the emulsion. Stability is enhanced by the creation of a microemulsion, which is verified both visually and with a particle size analyzer at varying concentrations of salinity, surfactant, and co-surfactant. A lab-experimental method description is provided in detail to permit readers to replicate all results. The stability of the oil-water emulsion is monitored with respect to time, temperature, salinity of the brine, and concentration of the surfactant. The nonionic surfactant TX-100, when used as a co-surfactant, increases the stability of the oil-water emulsion. The stability of the prepared emulsion is checked by observing the particle size distribution: for a stable emulsion, the peak of the volume% versus particle size curve lies at a particle size of 5-50 nm, while for an unstable emulsion larger particles are observed.
UV-visible spectroscopy is also used to visualize the fraction of oil that plays an important role in the formation of micelles in a stable emulsion. This is important, as the study helps decide the applicability of surfactant-based EOR methods for a reservoir containing a specific type of crude. The use of a nonionic surfactant as a co-surfactant would also increase the efficiency of surfactant EOR. With the decline in oil discoveries over the last decades, it is believed that EOR technologies will play a key role in meeting energy demand in the years to come. Taking this into consideration, the work focuses on optimizing secondary recovery (water flooding) with the help of surfactants and/or co-surfactants by creating the desired conditions in the reservoir. Keywords: co-surfactant, enhanced oil recovery, micro-emulsion, surfactant flooding
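The stability criterion described above (a dominant particle-size peak in the 5-50 nm range indicates a stable microemulsion) can be expressed as a simple check over a measured distribution. The distributions below are hypothetical examples, not data from the study.

```python
# Stability heuristic from a volume%-vs-particle-size distribution:
# a microemulsion is judged stable when the dominant peak lies at 5-50 nm.
def dominant_peak_nm(distribution):
    """Particle size (nm) carrying the largest volume fraction."""
    return max(distribution, key=distribution.get)

def is_stable(distribution, lo=5, hi=50):
    """Apply the 5-50 nm dominant-peak criterion."""
    return lo <= dominant_peak_nm(distribution) <= hi

# Hypothetical distributions: size (nm) -> volume fraction.
stable_emulsion = {10: 0.05, 25: 0.60, 40: 0.25, 300: 0.10}    # peak at 25 nm
unstable_emulsion = {25: 0.10, 300: 0.55, 900: 0.35}           # peak at 300 nm

print(is_stable(stable_emulsion))
print(is_stable(unstable_emulsion))
```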
Procedia PDF Downloads 252
520 The Effective Use of the Network in the Distributed Storage
Authors: Mamouni Mohammed Dhiya Eddine
Abstract:
This work studies the exploitation of the high-speed networks of clusters for distributed storage. Parallel applications running on clusters require both high-performance communications between nodes and efficient access to the storage system. Many studies of network technologies have led to the design of dedicated cluster architectures with very fast communications between computing nodes. Efficient distributed storage in clusters has essentially been developed by adding parallelization mechanisms so that the server(s) may sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we address is: do the high-speed networks of clusters fit the requirements of transparent, efficient and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed cluster networks were designed to optimize communications between the nodes of a parallel application; we study their use in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance NFS, PVFS or LUSTRE). Our experimental study, based on the GM programming interface of MYRINET high-speed networks for distributed storage, raised several interesting problems. Firstly, the specific memory usage in the storage access system layers does not easily fit the traditional memory model of high-speed networks. Secondly, the client-server models used for distributed storage have specific requirements for message control and event processing, which are not handled by existing interfaces. We propose different solutions to solve communication control problems at the filesystem level. We show that a modification of the network programming interface is required. 
Data transfer issues require an adaptation of the operating system. We detail several proposals for network programming interfaces that make them easier to use in the context of distributed storage. The integration of flexible data-transfer processing in the new programming interface MYRINET/MX is finally presented. Performance evaluations show that its use in the context of both storage and other types of applications is easy and efficient. Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface
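One of the memory-model mismatches alluded to above is that zero-copy transfer on high-speed networks requires buffers to be registered (pinned) in advance, an expensive operation that storage layers trigger on many short-lived buffers. A common mitigation is an LRU registration cache. The sketch below is a conceptual illustration in Python under assumed semantics, not the GM or MX API; the handle format and capacity are invented.

```python
# Illustrative sketch (assumptions, not the GM/MX API): high-speed networks
# require buffers to be "registered" (pinned) before zero-copy transfer.
# Registration is costly, and storage systems reuse buffers, so an LRU
# cache of registrations amortizes the cost.
from collections import OrderedDict

class RegistrationCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.cache = OrderedDict()   # addr -> hypothetical registration handle
        self.registrations = 0       # counts costly pin operations performed

    def get(self, addr):
        if addr in self.cache:
            self.cache.move_to_end(addr)      # LRU hit: no re-registration
            return self.cache[addr]
        if len(self.cache) >= self.capacity:  # evict (deregister) the oldest
            self.cache.popitem(last=False)
        self.registrations += 1               # simulate the costly pin
        handle = f"reg-{addr:#x}"
        self.cache[addr] = handle
        return handle
```

Repeated transfers from the same buffer then pay the registration cost only once, which is the behavior storage access layers need but, per the abstract, traditional interfaces did not directly support.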
Procedia PDF Downloads 222
519 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions
Authors: Erva Akin
Abstract:
The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, using copyrighted material without permission from the authors may infringe their intellectual property rights. To overcome the copyright hurdle to data sharing, access and re-use, the use of copyrighted material for machine learning may be considered permissible under certain circumstances. For example, if the copyright holder has granted permission to use the data through a licensing agreement, use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, machine learning with copyrighted material remains difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data. 
The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to give AI developers more room to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised. Keywords: artificial intelligence, copyright, data governance, machine learning
Procedia PDF Downloads 85
518 Long-Term Exposure, Health Risk, and Loss of Quality-Adjusted Life Expectancy Assessments for Vinyl Chloride Monomer Workers
Authors: Tzu-Ting Hu, Jung-Der Wang, Ming-Yeng Lin, Jin-Luh Chen, Perng-Jy Tsai
Abstract:
Vinyl chloride monomer (VCM) has been classified as a Group 1 (human) carcinogen by the IARC. Exposure to VCM is known to be associated with the development of liver cancer and hence can cause economic and health losses; workers in the petrochemical industry are of particular concern in the environmental and occupational health field. Because assessing workers' health risks and the resultant economic and health losses requires long-term VCM exposure data for each similar exposure group (SEG) of interest, developing suitable technologies has become an urgent and important issue. In the present study, VCM exposures for petrochemical industry workers were first determined from the database of the ‘Workplace Environmental Monitoring Information Systems (WEMIS)’ provided by Taiwan OSHA. To account for missing data, historical exposure reconstruction techniques were then used to complete the long-term exposure data for SEGs with routine operations. For SEGs with non-routine operations, exposure modeling techniques, together with time/activity records, were adopted to determine long-term exposure concentrations. Bayesian decision analysis (BDA) was adopted for conducting exposure and health risk assessments for any given SEG in the petrochemical industry. The resultant excess cancer risk was then used to determine the corresponding loss of quality-adjusted life expectancy (QALE). Results show low average concentrations for SEGs with routine operations (e.g., VCM rectification 0.0973 ppm, polymerization 0.306 ppm, reaction tank 0.33 ppm, VCM recovery 1.4 ppm, control room 0.14 ppm, VCM storage tanks 0.095 ppm and wastewater treatment 0.390 ppm), all much lower than the permissible exposure limit (PEL; 3 ppm) of VCM promulgated in Taiwan. 
Although non-routine workers had high exposure concentrations, their short exposure times and low frequencies resulted in correspondingly low health risks. Considering the exposure assessment, health risk assessment and QALE results together, it is concluded that the proposed method is useful for prioritizing SEGs for exposure abatement measures. In particular, the QALE results further indicate the importance of reducing workers' VCM exposures even where exposures are low in comparison with the PEL and the acceptable health risk. Keywords: exposure assessment, health risk assessment, petrochemical industry, quality-adjusted life years, vinyl chloride monomer
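The SEG averages reported in the abstract can be checked directly against Taiwan's 3 ppm PEL; a short sketch of that comparison, using only the values quoted above:

```python
# Compare the SEG average VCM concentrations reported in the abstract (ppm)
# against Taiwan's permissible exposure limit (PEL) of 3 ppm.
PEL = 3.0
seg_means = {
    "VCM rectification": 0.0973, "polymerization": 0.306, "reaction tank": 0.33,
    "VCM recovery": 1.4, "control room": 0.14, "VCM storage tanks": 0.095,
    "wastewater treatment": 0.390,
}
for seg, c in sorted(seg_means.items(), key=lambda kv: -kv[1]):
    print(f"{seg:20s} {c:6.3f} ppm  ({100 * c / PEL:5.1f}% of PEL)")
```

Every routine SEG falls below the PEL, with VCM recovery the highest at slightly under half the limit — consistent with the abstract's conclusion that QALE, not PEL exceedance alone, motivates further abatement.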
Procedia PDF Downloads 195
517 Further Development of Offshore Floating Solar and Its Design Requirements
Authors: Madjid Karimirad
Abstract:
Floating solar was not well known in the renewable energy field a decade ago; however, there has been tremendous growth internationally, with a compound annual growth rate (CAGR) of nearly 30% in recent years. To reach the goal of global net-zero emissions by 2050, all renewable energy sources, including solar, should be used. Considering that 40% of the world's population lives within 100 kilometres of the coast, floating solar in coastal waters is an obvious energy solution; however, this requires more robust floating solar designs. This paper sets out the fundamental requirements in the design of floating solar for offshore installations from the hydrodynamic and offshore engineering points of view. In this regard, a closer look at the dynamic characteristics, stochastic behaviour and nonlinear phenomena appearing in this kind of structure is a major focus of the current article. Floating solar structures are attractive green energy installations offering (a) less strain on land use in densely populated areas; (b) a natural cooling effect with an efficiency gain; and (c) increased irradiance from the reflectivity of water. Floating solar in conjunction with hydroelectric plants can also optimise energy efficiency and improve system reliability, and co-locating floating solar units with offshore wind, wave energy, tidal turbines or aquaculture (fish farming) can make better use of ocean space and increase synergies. Floating solar technology has seen considerable growth in installed capacity in the past decade. Design standards and codes of practice for floating solar technologies deployed on both inland water bodies and offshore are required to ensure robust and reliable systems that do not have detrimental impacts on the hosting water body. Floating solar is projected to account for 17% of all PV energy produced worldwide by 2030. 
To support this development, further research in the area is needed. This paper discusses the main critical design aspects in light of the loads and load effects to which floating solar platforms are subjected. The key considerations in hydrodynamics, aerodynamics and the simultaneous effects of wind and wave load actions are discussed. Linking dynamic nonlinear loading, limit states and the design space under given environmental conditions enables a better understanding of the design requirements of this fast-evolving floating solar technology. Keywords: floating solar, offshore renewable energy, wind and wave loading, design space
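The ~30% CAGR quoted in the abstract compounds quickly; a back-of-envelope sketch (the 1 GW starting capacity is an illustrative assumption, not a figure from the paper):

```python
# Compound annual growth: capacity after n years = capacity_0 * (1 + r)**n.
def grow(capacity0, rate, years):
    return capacity0 * (1 + rate) ** years

# A hypothetical 1 GW installed base growing at 30%/year for a decade
print(round(grow(1.0, 0.30, 10), 2))  # 13.79 -> roughly 14x in ten years
```

The same arithmetic explains why a technology that was marginal a decade ago can now be projected to carry a double-digit share of PV output.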
Procedia PDF Downloads 80
516 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR
Authors: Ivana Scidà, Francesco Alotto, Anna Osello
Abstract:
Given the importance placed on innovation today, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) is a potential teaching tool offering many advantages in training and education, as it allows students to acquire theoretical knowledge and practical skills through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment that is engaging and stimulating for students. Starting from the objective of strengthening innovative teaching and learning processes, the case study of this research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a centre specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR, an innovative training tool, has been developed with the aim of teaching students, in total safety, the techniques for using the machinery, thus reducing the dangers arising from potentially hazardous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering Courses of the Polytechnic of Turin, has been designed to simulate in a realistic way the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at ambient conditions or at characterizing temperatures. The research work proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model. 
The process includes the creation of a standalone application which, with Oculus Rift technology, allows the user to explore the environment and interact with objects using joypads. The prototype application was tested on volunteers, whose acquisition of the educational content presented in the experience was assessed through a multiple-choice virtual quiz, yielding an overall evaluation report. The results show that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments. Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality
Procedia PDF Downloads 131
515 Development of a Stable RNAi-Based Biological Control for Sheep Blowfly Using Bentonite Polymer Technology
Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody
Abstract:
Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M/annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is therefore critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls, and this research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi and insects; however, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three were found capable of maintaining dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes showing knockdown. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, intended as an insect cell model for preliminary assessment and screening of RNAi efficacy. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. 
The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, showing promise for future use in enhancing animal protection. Keywords: flystrike, RNA interference, bentonite polymer technology, Lucilia cuprina
Procedia PDF Downloads 92
514 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure
Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano
Abstract:
Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructure requires a deep knowledge of vulnerabilities, threats and potential attacks, as well as defence, prevention and mitigation strategies. The possibility to remotely monitor and control almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk especially to those devices connected to the Internet. Modern medical devices used in hospitals are no exception and are increasingly being connected to enhance their functionalities and ease their management. Moreover, hospitals are environments with high flows of people who are difficult to monitor and can fairly easily gain access to the same areas used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed and treated together as cyber-physical threats, which means that an integrated approach is required. SAFECARE, an integrated cyber-physical security solution, aims to respond to these issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine. 
Indeed, the SAFECARE architecture foresees i) a macroblock related to the cyber security field, where innovative tools are deployed to monitor network traffic, systems and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems and innovative AI algorithms to detect behavioural anomalies; and iii) an integration system that collects all incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability. Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security
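The cascading-effect idea behind the rule-based impact propagation can be illustrated with a minimal fixed-point sketch. This is a conceptual toy, not SAFECARE's engine or ontology; the asset names and dependency graph are invented for illustration.

```python
# Illustrative sketch (hypothetical assets, not SAFECARE's model): propagate
# an incident across a dependency graph until no new asset is impacted.
DEPENDS_ON = {  # asset -> assets it depends on
    "patient-monitoring": ["hospital-network", "power-supply"],
    "hospital-network": ["power-supply"],
    "access-control": ["hospital-network"],
}

def impacted_assets(incident_asset):
    """Return every asset whose dependency chain reaches the incident."""
    impacted = {incident_asset}
    changed = True
    while changed:                       # fixed-point rule evaluation
        changed = False
        for asset, deps in DEPENDS_ON.items():
            if asset not in impacted and impacted.intersection(deps):
                impacted.add(asset)
                changed = True
    return impacted

print(sorted(impacted_assets("power-supply")))
```

A power-supply incident here cascades to the network, access control and patient monitoring, which is the kind of combined cyber-physical picture the integration system is meant to surface.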
Procedia PDF Downloads 166
513 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations rely on intensive specialized labor, resulting in high cycle times and production costs. This work explores the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. The aim is to investigate the applicability to crystal processing of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. It can nevertheless be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation focuses on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness based on the cracks that appear in the indentation, while the mechanical impulse excitation test estimates the Young’s modulus, shear modulus and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. 
The tests were designed using the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and optimize the process, i.e., achieve the best roughness with cutting forces that compromise neither the material structure nor the tool life, using ANOVA. The study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding. Keywords: CNC machining, crystal glass, cutting forces, hardness
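Taguchi optimization of responses to be minimized, such as surface roughness or cutting force, commonly ranks parameter settings by the "smaller-is-better" signal-to-noise ratio, S/N = -10·log10(mean(y²)). The sketch below illustrates that standard formula with hypothetical roughness readings; it is not data or code from the study.

```python
# Taguchi "smaller-is-better" signal-to-noise ratio, commonly used to rank
# parameter settings when minimizing surface roughness or cutting forces:
#   S/N = -10 * log10(mean(y^2));  a higher S/N indicates a better setting.
import math

def sn_smaller_is_better(values):
    return -10 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical roughness readings (um) for two parameter settings
print(round(sn_smaller_is_better([0.8, 0.9, 1.0]), 2))  # 0.88
print(round(sn_smaller_is_better([0.4, 0.5, 0.6]), 2))  # 5.91 -> better
```

ANOVA on the S/N values across the orthogonal-array runs then attributes the response variation to feed rate, tool rotation speed and depth of cut.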
Procedia PDF Downloads 155
512 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures
Authors: Sunaitan Al Mutairi
Abstract:
A perfect start is a key factor for project completion on time. This study examined the effects of delayed mobilization of resources during the initial phases of a project. The paper mainly covers the identification and categorization of all delays during the initial construction phase and their root cause analysis, with corrective/control measures, for Kuwait Oil Company oil and gas projects. A sizeable percentage of the delays identified during project execution (contract award to end of defects liability period) were attributable to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared to the previous period. Contractors experienced delays/issues during the initial phase which caused slippages that progressively increased, resulting in time and cost overruns. Delays/issues not mitigated in time during the initial phase had a very high impact on project completion. Data analysis of the delays over the past five years was carried out using trend charts, scatter plots, process maps, box plots, the relative importance index and Pareto charts. Construction of any project inside the Gathering Centers involves complex management of work force, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule and budget of project deliverables. Works executed as per plan during the initial phase and start-up of construction activities resulted in only minor slippages/delays in project completion. In addition, a good working environment between client and contractor resulted in better project execution and management; in particular, contractors that were proactive in project execution experienced minimal or no delays during the initial and construction periods. 
Hence, a perfect start during the initial construction phase has a positive influence on project success. This research paper studies each type of delay with real examples supported by statistical results and suggests mitigation measures. Detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to arrive at a practical and effective outcome for mitigating the delays. The key to improvement is to have proper control measures and periodic evaluations/audits to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle. Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays
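One of the tools named above, the relative importance index (RII), is conventionally computed from survey ratings as RII = ΣW / (A·N), where W are respondent ratings, A the highest possible rating and N the number of respondents. A sketch with hypothetical ratings (not data from the study):

```python
# Relative Importance Index as commonly defined for survey-based delay
# ranking: RII = sum(W) / (A * N); values closer to 1 indicate delay causes
# respondents rate as more important.
def rii(ratings, highest=5):
    return sum(ratings) / (highest * len(ratings))

# Hypothetical 1-5 ratings for "delayed mobilization of resources"
print(round(rii([5, 4, 5, 3, 4]), 2))  # 0.84
```

Ranking the RII of each delay cause, then Pareto-charting them, identifies the vital few causes on which the mitigation measures should concentrate.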
Procedia PDF Downloads 318
511 Performance Assessment of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser during ‘Hot Standby’ Operation
Authors: M. J. Baum, B. Gibbes, A. Grinham, S. Albert, D. Gale, P. Fisher
Abstract:
Alongside the rapid expansion of seawater reverse osmosis technologies there is a concurrent increase in the production of hypersaline brine by-products. To minimize environmental impact, these by-products are commonly disposed of into open-coastal environments via submerged diffuser systems as inclined dense jet outfalls. Despite the widespread implementation of this process, diffuser designs are typically based on small-scale laboratory experiments under idealized quiescent conditions, and studies concerning diffuser performance in the field are limited. A set of experiments was conducted to assess the near-field characteristics of brine disposal at the Gold Coast Desalination Plant offshore multiport diffuser. The aim of the field experiments was to determine the trajectory and dilution characteristics of the plume under various discharge configurations, with production ranging from 66–100% of plant operative capacity. The field monitoring system employed an unprecedented static array of temperature and electrical conductivity sensors in a three-dimensional grid surrounding a single diffuser port. Complementing these measurements, acoustic Doppler current profilers were also deployed to record current variability over the depth of the water column and wave characteristics. The recorded data suggested the open-coastal environment was highly active over the experimental duration, with ambient velocities ranging from 0.0–0.5 m∙s⁻¹ and considerable variability over the depth of the water column. Variations in background electrical conductivity corresponding to salinity fluctuations of ±1.7 g∙kg⁻¹ were also observed. Increases in salinity were detected during plant operation and appeared most pronounced 10–30 m from the diffuser, consistent with the trajectory predictions of the existing literature. Plume trajectories and the respective dilutions extrapolated from the salinity data are compared with empirical scaling arguments. 
Discharge properties were found to correlate adequately with modelling projections. Temporal and spatial variations of background processes, and their influence upon discharge outcomes, are discussed with a view to incorporating the effects of waves and ambient currents in the design of future brine outfalls. Keywords: brine disposal, desalination, field study, negatively buoyant discharge
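The empirical scaling arguments for inclined dense jets are customarily expressed in terms of the densimetric Froude number, Fr = u_j / √(g′·d), with reduced gravity g′ = g·(ρ_brine − ρ_ambient)/ρ_ambient; trajectory lengths and dilutions are then correlated with Fr·d. The sketch below computes Fr with illustrative values that are assumptions, not measurements from this field study.

```python
# Densimetric Froude number for an inclined dense jet:
#   Fr = u_jet / sqrt(g' * d),  g' = g * (rho_brine - rho_ambient) / rho_ambient
import math

def densimetric_froude(u_jet, d_port, rho_brine, rho_ambient, g=9.81):
    g_reduced = g * (rho_brine - rho_ambient) / rho_ambient  # reduced gravity
    return u_jet / math.sqrt(g_reduced * d_port)

# Illustrative values: 4 m/s port velocity, 0.2 m port diameter,
# brine at 1045 kg/m3 discharged into seawater at 1025 kg/m3
print(round(densimetric_froude(4.0, 0.2, 1045.0, 1025.0), 1))  # 20.4
```

Comparing field-derived trajectories and dilutions against Fr-based correlations is exactly the kind of check the abstract describes.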
Procedia PDF Downloads 239
510 Demographic Shrinkage and Reshaping Regional Policy of Lithuania in Economic Geographic Context
Authors: Eduardas Spiriajevas
Abstract:
Since the end of the 20th century, when Lithuania regained its independence, a process of demographic shrinkage has been underway. It now affects the efficiency of implementing regional development policy and the geographic scope of the value added created in the regions. The demographic structure of human resources is reflected in the regions and their economic geographic environment. Reshaped economies and state reforms restructuring economic branches such as agriculture and industry have raised the economic significance of the services sector, and these processes influence the competitiveness of the labor market and its demographic characteristics. The consequences are clearly visible in the structure of human migration, which has driven the demographic ageing of human resources in the regions, especially peripheral ones. These modern-day phenomena induce the demographic shrinkage of society and shape its economic geographic characteristics in regional development actions and regional policy. Internal and external migration has produced numerous regional economic disparities, affected the territorial density and concentration of the country's population, and created economies of spatial unevenness in such a small, geographically compact country as Lithuania. The territorial reshaping of population distribution creates new regions and economic environments that do not correspond to the main principles of regional policy or its power to create well-being and promote attractiveness for economic development. These are the new challenges of national regional policy, and they should be researched systematically, taking into consideration the analytical approaches of regional economy in the context of economic geographic research methods. 
A comparative territorial analysis according to the administrative division of Lithuania, combining a retrospective approach with the method of location quotients, yields results of an economic geographic character, with cartographic representations produced using the spatial analysis tools of Geographic Information Systems. Together, these research methods provide new, spatially evidenced results that must be taken into consideration in reshaping national regional policy in an economic geographic context. Given demographic shrinkage and the increasing differentiation of economic development within the regions, an economic geographic dimension is indispensable. To sustain territorially balanced economic development, the roles of regional centers (towns) need to be strengthened and the centers empowered with new economic functions to revitalize peripheral regions and increase their economic competitiveness and social capacities on a national scale. Keywords: demographic shrinkage, economic geography, Lithuania, regions
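The location quotient mentioned above is standardly defined as the regional share of activity in a sector divided by the national share, LQ = (e_i/e) / (E_i/E), with LQ > 1 indicating regional specialization. A sketch with hypothetical employment figures (not data from the study):

```python
# Location quotient: regional sector share over national sector share.
#   LQ = (region_sector / region_total) / (nation_sector / nation_total)
# LQ > 1 means the region is more specialized in the sector than the nation.
def location_quotient(region_sector, region_total, nation_sector, nation_total):
    return (region_sector / region_total) / (nation_sector / nation_total)

# Hypothetical employment figures for a peripheral region's agriculture sector
print(round(location_quotient(4_000, 20_000, 60_000, 1_200_000), 1))  # 4.0
```

Mapping LQ values per administrative unit in a GIS is what produces the cartographic evidence of spatial unevenness the abstract refers to.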
Procedia PDF Downloads 161
509 Bioleaching of Metals Contained in Spent Catalysts by Acidithiobacillus thiooxidans DSM 26636
Authors: Andrea M. Rivas-Castillo, Marlenne Gómez-Ramirez, Isela Rodríguez-Pozos, Norma G. Rojas-Avelizapa
Abstract:
Spent catalysts are considered hazardous residues of major concern, mainly due to the simultaneous presence of several metals in elevated concentrations. Although hydrometallurgical, pyrometallurgical and chelating-agent methods are available to remove and recover some metals contained in spent catalysts, these procedures generate potentially hazardous wastes and emit harmful gases. Thus, biotechnological treatments are currently gaining importance to avoid the negative impacts of chemical technologies. To this end, diverse microorganisms have been used to assess the removal of metals from spent catalysts, comprising bacteria, archaea and fungi, whose resistance and metal uptake capabilities differ depending on the microorganism tested. Acidophilic sulfur-oxidizing bacteria, namely Acidithiobacillus thiooxidans and Acidithiobacillus ferrooxidans, have been used to investigate the biotreatment and extraction of valuable metals from spent catalysts, as they are able to produce leaching agents such as sulfuric acid and sulfur oxidation intermediates. In the present work, the ability of A. thiooxidans DSM 26636 to bioleach the metals contained in five different spent catalysts was assessed by growing the culture in modified Starkey mineral medium (with elemental sulfur at 1%, w/v) and 1% (w/v) pulp density of each residue for up to 21 days at 30 °C and 150 rpm. Sulfur-oxidizing activity was periodically evaluated by determining the sulfate concentration in the supernatants according to the NMX-k-436-1977 method. The production of sulfuric acid was also assessed in the supernatants, by titration with 0.5 M NaOH using bromothymol blue as acid-base indicator and by measuring pH with a digital potentiometer. Inductively coupled plasma optical emission spectrometry was then used to analyze metal removal from the five spent catalysts by A. thiooxidans DSM 26636. 
Results show that, as expected, sulfuric acid production is directly related to the decrease in pH and to the highest metal removal efficiencies. It was observed that Al and Fe are recurrently removed from refinery spent catalysts regardless of their origin and previous usage, although these removals may vary from 9.5 ± 2.2 to 439 ± 3.9 mg/kg for Al, and from 7.13 ± 0.31 to 368.4 ± 47.8 mg/kg for Fe, depending on the spent catalyst tested. In addition, bioleaching of metals such as Mg, Ni, and Si was also obtained from automotive spent catalysts, with removals of up to 66 ± 2.2, 6.2 ± 0.07, and 100 ± 2.4 mg/kg, respectively. Hence, the data presented here exhibit the potential of A. thiooxidans DSM 26636 for the simultaneous bioleaching of metals contained in spent catalysts of diverse provenance.
Keywords: bioleaching, metal removal, spent catalysts, Acidithiobacillus thiooxidans
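The acid determination described in this abstract reduces to simple stoichiometry: one mole of H2SO4 neutralizes two moles of NaOH. A minimal sketch of the back-calculation, assuming a hypothetical 10 mL supernatant aliquot (the aliquot volume is not stated in the abstract):

```python
def h2so4_concentration(v_naoh_ml, c_naoh=0.5, v_sample_ml=10.0):
    """Molar H2SO4 concentration of a supernatant sample, from the
    volume of 0.5 M NaOH consumed at the bromothymol blue endpoint.

    H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O, so moles of acid are half
    the moles of base consumed.
    """
    mol_naoh = c_naoh * v_naoh_ml / 1000.0   # mL -> L
    mol_h2so4 = mol_naoh / 2.0
    return mol_h2so4 / (v_sample_ml / 1000.0)

# e.g. 4 mL of 0.5 M NaOH to neutralize a 10 mL aliquot -> ~0.1 M H2SO4
print(h2so4_concentration(4.0))
```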
Procedia PDF Downloads 141
508 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications
Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque
Abstract:
Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR), despite related advances in architecture in other areas such as BIM and real-time visualization programs. This research studies to what extent VR software can help stakeholders better understand energy efficiency parameters in order to obtain reliable ratings assigned to the parts of a building. To evaluate this hypothesis, the methodology included the construction of a software prototype. Current energy certification systems do not follow an intuitive data entry system, nor do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. This software, by means of real-time visualization and a graphical user interface, proposes several improvements to current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, current interfaces are neither friendly nor intuitive, so untrained users usually get a poor idea of the grounds for certification and of how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building, through a new visual mode. The software also helps the user evaluate whether an investment to improve the materials of an installation is worth its cost, based on the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing the energy rating of the different elements of the building in a more intuitive and simple manner. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters. 
Working in real-time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that must be entered to calculate the final certification. The prototype also allows for visualizing the building in efficiency mode, which lets the user move through the building to analyze thermal bridges or other energy efficiency data. This research also finds that the visual representation of energy efficiency certifications makes it easy for stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.
Keywords: energy certification, virtual reality, augmented reality, sustainability
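As an illustration of how a consumption figure might map to a letter rating and a savings estimate in an interface like the VEE-IS prototype, here is a minimal sketch; the band thresholds and energy price are hypothetical placeholders, not values from the study:

```python
# Hypothetical rating bands (kWh/m2/year), for illustration only; real
# certification scales vary by country and climate zone.
RATING_BANDS = [(50, "A"), (90, "B"), (150, "C"), (230, "D"),
                (330, "E"), (450, "F")]

def rating_class(annual_kwh_per_m2: float) -> str:
    """Map a primary-energy consumption figure to a letter rating."""
    for limit, letter in RATING_BANDS:
        if annual_kwh_per_m2 <= limit:
            return letter
    return "G"

def annual_savings(kwh_before, kwh_after, area_m2, price_per_kwh=0.25):
    """Financial savings figure an interface of this kind could display."""
    return (kwh_before - kwh_after) * area_m2 * price_per_kwh

print(rating_class(120))               # C
print(annual_savings(180, 120, 100))   # 1500.0
```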
Procedia PDF Downloads 188
507 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters, such as grain type classification, damage type classification, and nutrient regression. Currently, assessing these parameters requires human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the 900 nm - 1700 nm wavelength range have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset is used for protein regression analysis, while the second is used for variety classification. Deep convolutional neural networks (CNNs) have the potential to exploit spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. 
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to form the foundation of an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
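Of the preprocessing steps just listed, the standard normal variate is the simplest to state: each spectrum is centered and scaled by its own mean and standard deviation. A minimal NumPy sketch (array shapes are illustrative):

```python
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: center and scale each spectrum (row)
    by its own mean and standard deviation."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two toy 4-band "spectra"; after SNV each row has mean 0 and std 1,
# removing multiplicative scatter differences between samples.
x = np.array([[1.0, 2.0, 3.0, 4.0],
              [10.0, 20.0, 30.0, 40.0]])
z = snv(x)
print(np.allclose(z.mean(axis=1), 0.0))  # True
print(np.allclose(z.std(axis=1), 1.0))   # True
```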
Procedia PDF Downloads 100
506 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding, allowing machine tools to achieve a higher degree of flexibility and a shorter production time. Still, there are challenges to be accounted for in terms of maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, both individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to reduce the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, enabling an optimized design of the machine tool and its components in the early design phase. The core element of this approach is a matched FEM model that considers all relevant variables, e.g., laser power, angle of the laser beam, reflection coefficients, and heat transfer coefficient. Hence, a systematic approach to obtaining this matched FEM model is essential. The two constituent aspects of the method are simulating the thermal behavior of the structural components and predicting the laser beam path, in order to determine the relevant beam intensity on those components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. 
Therefore, a turn-swivel table test bench, as well as an experimental set-up to measure the beam propagation, were developed and are described in the paper. In addition to the empirical investigation, a simulation counterpart to the described experiments is presented. In conclusion, it is shown that the method, together with a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, helps designers minimize the loss of precision in the early stages of design.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
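A matched FEM model of this kind typically applies the locally absorbed beam intensity as a thermal boundary condition on the structural components. The sketch below shows one simplified form of such a load term, combining the variables named in the abstract (laser power, beam angle, reflection coefficient); the flat-top beam assumption and the default numbers are illustrative, not taken from the paper:

```python
import math

def absorbed_heat_flux(power_w, reflectivity, angle_deg, spot_area_m2):
    """Heat flux (W/m^2) a structural surface absorbs from a flat-top beam.

    Only the non-reflected fraction of the beam power is absorbed, and
    the projected intensity falls off with the cosine of the incidence
    angle between the beam and the surface normal.
    """
    absorbed_power = power_w * (1.0 - reflectivity)
    return absorbed_power * math.cos(math.radians(angle_deg)) / spot_area_m2

# e.g. a 1 kW beam on a 60%-reflective surface at 60 deg incidence,
# spread over a 1 cm^2 spot -> roughly 2e6 W/m^2
print(absorbed_heat_flux(1000.0, 0.6, 60.0, 1e-4))
```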
Procedia PDF Downloads 265
505 Understanding the Benefits of Multiple-Use Water Systems (MUS) for Smallholder Farmers in the Rural Hills of Nepal
Authors: Raj Kumar G.C.
Abstract:
There are tremendous opportunities to maximize smallholder farmers’ income from small-scale water resource development through micro irrigation and multiple-use water systems (MUS). MUS are an improved water management approach, developed and tested successfully by iDE, that pipes water to a community both for domestic use and for agriculture using efficient micro irrigation. Different MUS models address different landscape constraints, water demands, and user preferences. MUS are complemented by micro irrigation kits, developed by iDE to enable farmers to grow high-value crops year-round and to use limited water resources efficiently. Over the last 15 years, iDE’s promotion of the MUS approach has encouraged the government and other key stakeholders to invest in MUS for better planning of scarce water resources. Currently, about 60% of the cost of MUS construction is covered by the government and the community. Based on iDE’s experience, a gravity-fed MUS costs approximately $125 USD per household to construct, and it can increase household income by $300 USD per year. A key element of the MUS approach is keeping farmers well linked to input supply systems and local produce collection centers, which helps to ensure that farmers can produce a sufficient quantity of high-quality produce that earns a fair price. This process in turn creates an enabling environment for smallholders to invest in MUS and micro irrigation. Therefore, MUS should be seen as an integrated package of interventions (the end users, water sources, technologies, and the marketplace) that together enhance technical, financial, and institutional sustainability. Communities are trained to participate in sustainable water resource management as part of the MUS planning and construction process. The MUS approach is cost-effective, improves community governance of scarce water resources, helps smallholder farmers improve rural health and livelihoods, and promotes gender equity. 
MUS are simple to maintain, and communities are trained to ensure that they can undertake minor maintenance procedures themselves. All in all, the iDE Nepal MUS offers multiple benefits and represents a practical and sustainable model of the MUS approach. Moreover, there is a growing national consensus that rural water supply systems should be designed for multiple uses, while acknowledging that substantial work remains in developing national and local capacity and policies for scale-up.
Keywords: multiple-use water systems, small-scale water resources, rural livelihoods, practical and sustainable model
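The investment case quoted above can be sanity-checked with simple arithmetic: at roughly $125 USD of construction cost per household and $300 USD of added annual income, a gravity-fed MUS recovers its cost in months:

```python
def payback_months(cost_usd=125.0, annual_income_usd=300.0):
    """Months until a household's MUS investment is recovered,
    using the per-household figures quoted in the abstract."""
    return 12.0 * cost_usd / annual_income_usd

print(payback_months())  # 5.0 months
```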
Procedia PDF Downloads 291
504 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis
Authors: Tefera Kebede Leyu
Abstract:
The purpose of this study was to estimate the impact of LIVES project participation on the level of technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by ILRI between February and April 2014 for the year 2013 (retrospective). Data on 1,905 sample households (754 in the intervention group and 1,151 in the control group) were analyzed using the STATA software package, version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology, using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not fully efficient and operate below their potential frontiers; i.e., there is potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of a selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model showed a different technical efficiency score for the intervention farmers, while it remained more or less the same for the control group. However, the control group of farmers shows a higher dispersion, as measured by the coefficient of variation, compared to their intervention counterparts. Among the explanatory variables, the study found that farmer’s age (a proxy for farm experience), land certification, frequency of visits to improved seed centers, farmer’s education, and row planting are important contributing factors for participation decisions and hence for the technical efficiency of farmers in the study areas. 
We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, provision of credit services, establishment of farmers’ training centers, and adoption of modern farm technologies. Finally, we recommend further research applying this kind of methodological framework to a panel data set, to test whether technical efficiency increases or decreases with the length of time that farmers participate in development programs.
Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman two-step
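The selectivity correction at the heart of the Heckman (1979) two-step procedure is the inverse Mills ratio, lambda(z) = phi(z) / Phi(z), evaluated at the first-stage participation probit index and appended as a regressor in the second stage. A minimal stdlib-only sketch (function names are illustrative):

```python
import math

def norm_pdf(z):
    """Standard normal density phi(z)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF Phi(z), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z): the correction term added to the
    second-stage (frontier) regression for selected, i.e. participating,
    observations in Heckman's two-step estimator."""
    return norm_pdf(z) / norm_cdf(z)

# At z = 0 the ratio is phi(0) / 0.5 = sqrt(2/pi)
print(round(inverse_mills_ratio(0.0), 4))  # 0.7979
```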
Procedia PDF Downloads 77
503 Criminal Law and Internet of Things: Challenges and Threats
Authors: Celina Nowak
Abstract:
The development of information and communication technologies (ICT) and the consequent growth of cyberspace have become a reality of modern societies. The newest addition to this complex structure is the Internet of Things (IoT), made possible by the appearance of smart devices. IoT creates a new dimension of the network, as communication is no longer the domain of humans alone but has also become possible between devices themselves. The possibility of communication between devices, devoid of human intervention and real-time supervision, has generated new societal and legal challenges, some of which may, and certainly will, eventually be connected to criminal law. Legislators at both the national and international level have been struggling to cope with this technologically evolving environment in order to address the new threats created by ICT. There are legal instruments on cybercrime, however imperfect and not of universal scope, sometimes referring to specific types of prohibited behaviors undertaken by criminals, such as money laundering or sex offences. However, criminal law seems largely unprepared for the challenges that may arise from the development of IoT. This is largely because criminal law, at both the national and international level, is still based on the concept of perpetration of an offence by a human being. This is a traditional approach, historically and factually justified. Over time, some legal systems have developed or accepted the possibility of commission of an offence by a corporation, a legal person. This is in fact a legal fiction, as a legal person cannot commit an offence as such; it needs humans to actually behave in a certain way on its behalf. Yet, legislators have come to understand that corporations have their own interests and may benefit from crime, and therefore need to be penalized. 
This realization, however, has not been welcomed by all states and still gives rise to doubts of an ontological and theoretical nature in many legal systems. For this reason, in many legislations the liability of legal persons for the commission of an offence has not been recognized as criminal responsibility. With technological progress and the growing use of IoT, the discussions referring to the criminal responsibility of corporations seem rather inadequate. The world is now facing new challenges and new threats related to ‘smart’ things, which will eventually have to be addressed by legislators if they want, as they should, to keep up with the pace of technological and societal evolution. This will, however, require a reevaluation and possibly a restructuring of the most fundamental notions of modern criminal law, such as perpetration, guilt, and participation in crime. It remains unclear at this point what norms and legal concepts will and may be established. The main goal of the research is to point out the challenges ahead of national and international legislators in the said context and to attempt to formulate some indications as to the direction of changes, bearing in mind the serious threats to privacy and security related to the use of IoT.
Keywords: criminal law, internet of things, privacy, security threats
Procedia PDF Downloads 164
502 Virtual Team Performance: A Transactive Memory System Perspective
Authors: Belbaly Nassim
Abstract:
Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the exposure companies experience when their employees face globalization and decentralization pressures, making it necessary to monitor VT performance. Organizations are regularly limited by misalignment between the team’s dispersed competences and knowledge capabilities, by how trust issues interplay with and influence these VT dimensions, and by the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs manage their dispersed expertise, skills, and knowledge efficiently to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. A TMS can be understood as the composition of a structural component, residing in individual knowledge, and a set of communication processes among individuals. The individual knowledge is shared while being retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding: a VT learns something new and catalogs it in memory for future retrieval and use. TMS draws on the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. 
In fact, VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity, and this study gives researchers insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.
Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination
Procedia PDF Downloads 174