Search results for: industrial networks
1201 Machinability Analysis in Drilling Flax Fiber-Reinforced Polylactic Acid Bio-Composite Laminates
Authors: Amirhossein Lotfi, Huaizhong Li, Dzung Viet Dao
Abstract:
Interest in natural fiber-reinforced composites (NFRC) is growing progressively, both in academic research and industrial applications, thanks to their many advantages such as low cost, biodegradability, eco-friendly nature and relatively good mechanical properties. However, their widespread use is still considered challenging because of their non-homogeneous structure and the limited knowledge of their machinability characteristics and of the parameter settings needed to avoid defects associated with the machining process. The present work investigates the effect of cutting tool geometry and material on the drilling-induced delamination, thrust force and hole quality produced when drilling a fully biodegradable flax/poly(lactic acid) composite laminate. Three drills with different geometries and materials were used at different drilling conditions to evaluate the machinability of the fabricated composites. The experimental results indicated that the choice of cutting tool, in terms of material and geometry, has a noticeable influence on the cutting thrust force and, subsequently, on drilling-induced damage. A lower thrust force and better hole quality were observed using the high-speed steel (HSS) drill, whereas the carbide drill (with a point angle of 130°) resulted in the highest thrust force. The carbide drill presented higher wear resistance and stability in the variation of thrust force with the number of holes drilled, while the HSS drill showed the lower thrust force during the drilling process. Finally, within the selected cutting range, the delamination damage increased noticeably with feed rate and moderately with spindle speed.
Keywords: natural fiber reinforced composites, delamination, thrust force, machinability
Procedia PDF Downloads 128
1200 Recovery of Au and Other Metals from Old Electronic Components by Leaching and Liquid Extraction Process
Authors: Tomasz Smolinski, Irena Herdzik-Koniecko, Marta Pyszynska, M. Rogowski
Abstract:
Old electronic components can easily be found nowadays. Significant quantities of valuable metals such as gold, silver or copper are used for the production of advanced electronic devices. Old, useless electronic devices have slowly become a new source of precious metals, very often more efficient than natural ores. For example, it is possible to recover more gold from one ton of personal computers than from seventeen tons of gold ore. This makes the urban mining industry very profitable and necessary for sustainable development. For the recovery of metals from waste electronic equipment, various treatment options based on conventional physical, hydrometallurgical and pyrometallurgical processes are available. Within this group, hydrometallurgical processes, with their relatively low capital cost, low environmental impact, potential for high metal recoveries and suitability for small-scale applications, are very promising options. The Institute of Nuclear Chemistry and Technology has great experience in hydrometallurgical processes, especially focused on the recovery of metals from industrial and agricultural wastes. At the moment, an urban mining project is being carried out. A method for the effective recovery of valuable metals from central processing unit (CPU) components has been developed. The principal processes, acidic leaching and solvent extraction, were used for precious metals recovery from old processors and graphics cards. Electronic components were treated with acidic solution under various conditions. The optimal acid concentration, process time and temperature were selected. Precious metals were extracted into the aqueous phase. In the next step, metals were selectively extracted by organic solvents such as oximes or tributyl phosphate (TBP). Multistage mixer-settler equipment was used, and the process was optimized.
Keywords: electronic waste, leaching, hydrometallurgy, metal recovery, solvent extraction
Procedia PDF Downloads 137
1199 High Temperature and High Pressure Purification of Hydrogen from Syngas Using Metal Organic Framework Adsorbent
Authors: Samira Rostom, Robert Symonds, Robin W. Hughes
Abstract:
Hydrogen is considered one of the most important clean and renewable energy carriers for a sustainable energy future. However, its efficient and cost-effective purification remains challenging. This paper presents the potential of using metal-organic frameworks (MOFs) in combination with pressure swing adsorption (PSA) technology for syngas-based H2 purification. The PSA process analysis considers high-pressure and elevated-temperature process conditions, which reduce the demand for off-gas recycle to the fuel reactor and simultaneously permit a higher desorption pressure, thereby reducing the parasitic load on the hydrogen compressor. The elevated-pressure and elevated-temperature adsorption presented here is beneficial for minimizing the overall process heating and cooling demand compared to existing processes. Here, we report the comparative performance of zeolite-5A, Cu-BTC, and a mix of zeolite-5A/Cu-BTC for H2 purification from syngas typical of that exiting water-gas-shift reactors. The MOFs were synthesized hydrothermally and then mixed systematically at different weight ratios to find the optimum composition based on adsorption performance. The formation of the different compounds was characterized by XRD, N2 adsorption and desorption, SEM, FT-IR, TG, and water vapor adsorption techniques. Single-component adsorption isotherms of CO2, CO, CH4, N2, and H2 over the single materials and composites were measured at elevated pressures and different temperatures to determine their equilibrium adsorption capacity. The stability and regeneration performance of the metal-organic frameworks was examined using a gravimetric system at temperatures of 25-150℃ over a pressure range of 0-30 bar. The adsorption/desorption studies on the MOFs showed selective adsorption of CO2, CH4, CO, and N2 over H2. Overall, the findings of this study suggest that the Ni-MOF-74/Cu-BTC composites are promising candidates for industrial H2 purification processes.
Keywords: MOF, H2 purification, high T, PSA
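As a worked illustration of how single-component isotherms such as those mentioned above are commonly reduced to equilibrium capacities, the short sketch below fits a Langmuir model to made-up pressure-uptake data; the model choice and all numerical values are assumptions for illustration only, not results from this study.
```python
# Fit a Langmuir isotherm q = q_max * b * P / (1 + b * P) to made-up uptake data;
# all values are illustrative, not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, q_max, b):
    return q_max * b * P / (1 + b * P)

P = np.array([1, 2, 5, 10, 15, 20, 25, 30], dtype=float)   # pressure [bar]
q = np.array([0.9, 1.6, 2.9, 3.9, 4.4, 4.7, 4.9, 5.0])     # uptake [mmol/g]

(q_max, b), _ = curve_fit(langmuir, P, q, p0=[5.0, 0.1])
print(f"q_max = {q_max:.2f} mmol/g, b = {b:.3f} 1/bar")
```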
Procedia PDF Downloads 101
1198 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
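To make step (iv) more concrete, the sketch below shows a minimal attention-based multiple instance learning classifier over read embeddings. It is not the authors' released code; the embedding dimension, layer widths and bag size are illustrative assumptions.
```python
# Minimal sketch (not the authors' code) of step (iv): a multiple instance
# learning classifier with attention pooling over read embeddings.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, emb_dim=100, attn_dim=64, n_classes=2):
        super().__init__()
        # One attention score per read (instance) in the patient's bag
        self.attention = nn.Sequential(
            nn.Linear(emb_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(emb_dim, n_classes)

    def forward(self, bag):                     # bag: (n_reads, emb_dim)
        scores = self.attention(bag)            # (n_reads, 1)
        weights = torch.softmax(scores, dim=0)  # interpretable per-read weights
        pooled = (weights * bag).sum(dim=0)     # patient-level representation
        return self.classifier(pooled), weights

# Toy usage: one "patient" represented by 500 read embeddings of dimension 100.
model = AttentionMIL()
bag = torch.randn(500, 100)
logits, read_weights = model(bag)
```
The attention weights returned alongside the logits are what allows the kind of interpretation described in the abstract, since they indicate how strongly each read (and, by aggregation, each genome) contributed to the phenotype prediction.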
Procedia PDF Downloads 125
1197 Wind Energy Harvester Based on Triboelectricity: Large-Scale Energy Nanogenerator
Authors: Aravind Ravichandran, Marc Ramuz, Sylvain Blayac
Abstract:
With the rapid development of wearable electronics and sensor networks, batteries cannot meet the sustainable energy requirement due to their limited lifetime, size and degradation. Ambient energies such as wind have been considered an attractive energy source due to their abundance, ubiquity, and availability in nature. With miniaturization leading to high power and robustness, the triboelectric nanogenerator (TENG) has been conceived as a promising technology for powering small electronics by harvesting mechanical energy. TENG integration in large-scale applications is still unexplored despite its attractive properties. In this work, a state-of-the-art TENG design based on a wind venturi system is demonstrated for use in any complex environment. When wind is introduced into the air gap of the homemade TENG venturi system, a thin flexible polymer repeatedly contacts with and separates from the electrodes. This device structure makes the TENG suitable for large-scale harvesting without massive volume. Multiple stacking not only amplifies the output power but also enables multi-directional wind utilization. The system converts ambient mechanical energy to electricity with a 400 V peak voltage, charging a 1000 mF supercapacitor very rapidly. Its future implementation in an array of applications would aid environmentally friendly clean energy production at large scale, and the proposed design is supported by exhaustive material testing. The relation between the interfacial micro- and nanostructures and the electrical performance enhancement is comparatively studied. Nanostructures are more beneficial for the effective contact area, but they are not suitable for the anti-adhesion property due to the smaller restoring force. Considering these issues, nano-patterning is proposed for further enhancement of the effective contact area. Considering the merits of simple fabrication, outstanding performance, robust characteristics and low-cost technology, we believe that TENG can open up great opportunities not only for powering small electronics but also for contributing to large-scale energy harvesting, with an engineering design complementary to solar energy in remote areas.
Keywords: triboelectric nanogenerator, wind energy, vortex design, large scale energy
Procedia PDF Downloads 213
1196 Optimization of Solar Rankine Cycle by Exergy Analysis and Genetic Algorithm
Authors: R. Akbari, M. A. Ehyaei, R. Shahi Shavvon
Abstract:
Nowadays, solar energy is used for energy purposes such as the use of thermal energy for domestic, industrial and power applications, as well as the conversion of sunlight into electricity by photovoltaic cells. In this study, a thermodynamic simulation of the solar Rankine cycle with a phase change material (paraffin) was first carried out. Then energy and exergy analyses were performed. For optimization, single- and multi-objective genetic algorithms were used to maximize the thermal and exergy efficiency. The parameters discussed in this paper include the effects of the turbine inlet pressures, the mass flows into the turbines, the converter surface area and the collector angle on thermal and exergy efficiency. In the organic Rankine cycle, where solar energy is used as the input energy, fluid selection is a necessary factor for reliable and efficient operation. Therefore, silicone oil is selected as the working fluid for the high-temperature cycle and water for the low-temperature cycle. The results showed that increasing the mass flow to turbines 1 and 2 increases the thermal efficiency, while it reduces and increases the exergy efficiency in turbines 1 and 2, respectively. Increasing the inlet pressure to turbine 1 decreases the thermal and exergy efficiency, while increasing the inlet pressure to turbine 2 increases both the thermal and exergy efficiency. Also, increasing the collector angle increased the thermal and exergy efficiency. The thermal efficiency of the system was 22.3%, which improves to 33.2% and 27.2% in single-objective and multi-objective optimization, respectively. Also, the exergy efficiency of the system was 1.33%, which improves to 1.719% and 1.529% in single-objective and multi-objective optimization, respectively. These results show that the thermal and exergy efficiency obtained in single-objective optimization is greater than in multi-objective optimization.
Keywords: exergy analysis, genetic algorithm, Rankine cycle, single and multi-objective function
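The sketch below illustrates the single-objective genetic optimization step in its simplest form. It is not the authors' model: the objective function is a stand-in placeholder where the actual Rankine cycle thermodynamic simulation would be called, and the variable bounds are invented for illustration.
```python
# Minimal sketch of a single-objective genetic algorithm, assuming a placeholder
# efficiency function; the real cycle simulation would replace thermal_efficiency.
import random

def thermal_efficiency(p_in_1, p_in_2, m_dot, collector_angle):
    # Stand-in objective: the actual value would come from the solar Rankine
    # cycle thermodynamic model described in the abstract.
    return -(p_in_1 - 2.0) ** 2 - (p_in_2 - 8.0) ** 2 + 0.1 * m_dot + 0.01 * collector_angle

BOUNDS = [(1, 5), (5, 15), (0.1, 2.0), (0, 60)]   # illustrative variable ranges

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [random_individual() for _ in range(50)]
for generation in range(100):
    population.sort(key=lambda ind: thermal_efficiency(*ind), reverse=True)
    parents = population[:10]                                  # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = max(population, key=lambda ind: thermal_efficiency(*ind))
print("best design variables:", best)
```
A multi-objective variant would keep a Pareto front of non-dominated individuals (thermal vs. exergy efficiency) rather than ranking on a single score.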
Procedia PDF Downloads 147
1195 An Exploratory Approach of the Latin American Migrants’ Urban Space Transformation of Antofagasta City, Chile
Authors: Carolina Arriagada, Yasna Contreras
Abstract:
Since the mid-2000s, the migratory flows of Latin American migrants to Chile have been increasing constantly. There are two reasons that would explain why Chile is presented as an attractive country for migrants. On the one hand, traditional centres of migrant attraction such as the United States and Europe have begun to close their borders. On the other hand, Chile exhibits relative economic and political stability, which offers greater job opportunities and a better standard of living compared to the migrants' countries of origin. At the same time, the neoliberal economic model of Chile, developed under an extractive exploitation of natural resources, has privatized the urban space. The market regulates the growth of fragmented and segregated cities. The vulnerable population is then, most of the time, located in the periphery and in the marginal areas of the urban space. In this respect, migrants have begun to occupy those degraded and depressed areas of the city. The problem raised is that the increase in socio-spatial segregation could also be attributed to the migrants' occupation of the marginal urban places of the city. The aim of this investigation is to carry out an analysis of the migrants' housing strategies, which are transforming the marginal areas of the city. The methodology focuses on the urban experience of the migrants, through the observation of spatial practices, ways of living and network configuration in order to understand how the marginal territory is transformed. The techniques applied in this study are semi-structured and in-depth interviews. The study reveals that the migrants' housing strategies for living in the marginal areas of the city are built in a paradoxical way. On the one hand, the migrants choose proximity to people from their place of origin, maintaining their identity and customs. On the other hand, the migrants choose proximity to their social and familiar places, generating a sense of belonging. In conclusion, migration as international displacement under a globalized economic model increasing socio-spatial segregation in cities is evidenced, but the transformation of the marginal areas is a fundamental resource of the migrants' integration process. The importance of this research is that it concerns everybody's responsibility: not only the right to live in a city without any discrimination, but also the integration of citizens within the social urban space of the city.
Keywords: migrations, marginal space, resignification, visibility
Procedia PDF Downloads 142
1194 Assessing the Suitability of South African Waste Foundry Sand as an Additive in Clay Masonry Products
Authors: Nthabiseng Portia Mahumapelo, Andre van Niekerk, Ndabenhle Sosibo, Nirdesh Singh
Abstract:
The foundry industry generates large quantities of solid waste in the form of waste foundry sand. The ever-increasing quantities of this type of industrial waste put pressure on landfill space, and its proper management has become a global concern. The South African foundry industry is no different when it comes to this solid waste generation. Utilizing foundry waste sand in other applications has become an attractive avenue for dealing with this waste stream. In the present paper, an evaluation was made of the suitability of foundry waste sand as an additive in clay masonry products. Purchased clay was added to the foundry waste sand sample in a 50/50 ratio. The mixture was named the FC sample. The FC sample was mixed with water in a pan mixer until the mixture was consistent and suitable for extrusion. The FC sample was extruded and cut into briquettes. Water absorption, shrinkage and modulus of rupture tests were conducted on the resultant briquettes. The foundry waste sand and FC samples were respectively characterized mineralogically using X-ray diffraction, and the major and trace elements were determined using inductively coupled plasma optical emission spectroscopy. Adding purchased clay to the foundry waste sand positively influenced the workability of the test sample. Another positive characteristic was the low linear shrinkage, which indicated that products manufactured from the FC sample would not be susceptible to cracking. The water absorption values were acceptable, and the unfired and fired strength values of the briquette samples were acceptable. In conclusion, the tests showed that foundry waste sand can be used as an additive in masonry clay bricks, provided it is blended with good-quality clay.
Keywords: foundry waste sand, masonry clay bricks, modulus of rupture, shrinkage
Procedia PDF Downloads 230
1193 Reading High Rise Residential Development in Istanbul on the Theory of Globalization
Authors: Tuba Sari
Abstract:
One of the major transformations caused by the industrial revolution, technological developments and globalization is undoubtedly the acceleration of the urbanization process. Globalization, in particular, is one of the major factors that trigger this transformation. In this context, as a result of the global metropolitan city system, multifunctional high-rise structural forms are becoming an undeniable fact of the world's leading metropolises as a manifestation of prestige and power, with different life choices and easy access to the services of the technological era. The scope of the research covers five different urban centers in İstanbul where high-rise housing has been increasing dramatically since the 2000s. The research therefore considers the multi-centered urban residential pattern being created by high-rise housing structures in the city. The methodology of the research is based on two main issues: one is related to the sampling method for high-rise housing projects in İstanbul, while the other is based on the model of semantics. In the framework of the research hypothesis, it is aimed to prove that the character of vertical, intensive development in Istanbul is based on a search for different forms and images of expressive quality, considering the production of high-rise buildings in residential areas in recent years. With respect to the rising discourse of the 'World City' in the globalizing world, it is very important to situate Istanbul among other developing world metropolises. From the perspective of the 'World City' discourse, Istanbul has various projects connected with globalization, international finance companies, cultural activities, mega projects, etc. In brief, the aim of this research is to examine the transformation of high-rise housing development in Istanbul within the frame of developing world cities, and to search for and analyze the discourse and image related to these projects.
Keywords: globalization, high-rise, housing, image
Procedia PDF Downloads 284
1192 Development of a Journal over 20 Years: Citation Analysis
Authors: Byung Lee, Charles Perschau
Abstract:
This study analyzes the development of a communication journal, the Journal of Advertising Education (JAE), over the past 20 years by examining the citations of all research articles published there. The purpose of a journal is to offer a stable and transparent forum for the presentation, scrutiny, and discussion of research in a targeted domain. This study asks whether JAE has fulfilled this purpose. The authors and readers who are involved in a journal need to have common research topics of interest. In the case of the discipline of communication, scholars have a variety of backgrounds beyond communication itself, since the social scientific study of communication is a relatively recent development, one that emerged after World War II, and the discipline has been heavily indebted to other social sciences such as psychology, sociology, social psychology, and political science. When authors impart their findings and knowledge to others, their work is not done in isolation. They have to stand on previous studies, which are listed as sources in the bibliography. Since communication has heavily piggybacked on other disciplines, cited sources should be as diverse as the resources it taps into. This paper analyzes the 4,244 articles that were cited by JAE articles in the past 36 issues. Since journal article authors reveal their intellectual linkages through bibliographic citations, the analysis of citations in journal articles reveals various networks of relationships among authors, journal types, and fields in an objective and quantitative manner. The study found that easier access to information sources, owing to the development of electronic databases, and the growing competition among scholars for publication seem to have influenced authors to increase the number of articles cited, even though some variation existed during the examined period. The types of articles cited have also changed. Authors have more often cited journal articles, periodicals (most of them available online), and website sources, while decreasing their dependence on books, conference papers, and reports. To provide a forum for discussion, a journal needs a common topic or theme. This is realized when an author writes an article about a topic, and that article is cited and discussed in another article. Thus, the citation of articles in the same journal is vital for a journal to form a forum for discussion. JAE has gradually increased its citations of in-house articles, with a few fluctuations over the years. The study also examines not only specific articles that are often cited, but also specific authors who are often cited. The analysis of citations in journal articles shows how JAE has developed into a full academic journal while offering a communal forum, even though the speed of its formation is not as fast as desired, probably because of its interdisciplinary nature.
Keywords: citation, co-citation, the Journal of Advertising Education, development of a journal
Procedia PDF Downloads 155
1191 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie-Curie ITN project, and it focuses on the identification of the unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications where the ability to predict and model the final surface with high accuracy is one of the primary tasks, in the absence of any knowledge of the model parameters that should be used. In this framework, we propose the identification of the model parameters by minimizing a cost function measuring the difference between the experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, gives the opportunity to find the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strongly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain acceptable results for manufacturing and to expect the proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases and to consider different types of model and measurement errors as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization
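The toy sketch below illustrates the regularized cost-function minimization idea in miniature. The real forward model is the AWJM PDE with adjoint gradients computed by TAPENADE; here a simple analytic trench-profile stand-in and made-up parameters are used purely for illustration.
```python
# Toy sketch of regularized least-squares parameter identification (not the AWJM model).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-2.0, 2.0, 200)                    # position across the trench

def forward_model(params, x):
    depth, width = params                          # hypothetical model parameters
    return -depth * np.exp(-(x / width) ** 2)      # Gaussian-like trench stand-in

true_params = np.array([0.8, 0.5])
measurements = forward_model(true_params, x) + 0.02 * np.random.randn(x.size)

def cost(params, alpha=1e-3):
    misfit = forward_model(params, x) - measurements
    # The Tikhonov-style regularization term stabilizes the ill-posed identification
    return 0.5 * np.sum(misfit ** 2) + 0.5 * alpha * np.sum(params ** 2)

result = minimize(cost, x0=np.array([0.3, 1.0]), method="L-BFGS-B",
                  bounds=[(0.01, 5.0), (0.01, 5.0)])
print("identified parameters:", result.x)
```
In the actual study the gradient of the cost function comes from the adjoint (Lagrangian) computation rather than the quasi-Newton approximation used here, which is what makes the approach tractable for a large number of PDE model parameters.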
Procedia PDF Downloads 316
1190 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or into an even smaller volume (a standard CubeSat). CubeSats have tight constraints on the available computational budget and on the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. The pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges of wavefront sensing is the non-linearity between the image intensity and the phase aberrations. In addition, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: In this paper we study the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, is also used as a wavefront sensor. In this work, we study a point source, i.e. the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results motivate further study with higher aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
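The sketch below shows the general shape of such a CNN regression from a focal-plane PSF image to segment phasing coefficients. It is an assumed, simplified architecture rather than the authors' exact VGG-net; image size, channel widths and the number of output coefficients are illustrative.
```python
# Minimal sketch (assumed architecture, not the authors' exact VGG-net) of a CNN
# that regresses segment phasing coefficients from a focal-plane PSF image.
import torch
import torch.nn as nn

class PhasingCNN(nn.Module):
    def __init__(self, n_coeffs=18):               # e.g. 3 modes x 6 segments (illustrative)
        super().__init__()
        self.features = nn.Sequential(              # small VGG-style stack of conv blocks
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, n_coeffs),                # wavefront coefficients, e.g. in nm
        )

    def forward(self, psf):                          # psf: (batch, 1, 128, 128)
        return self.regressor(self.features(psf))

# Toy training step on random arrays standing in for simulated PSFs and known phases.
model, loss_fn = PhasingCNN(), nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
psfs, coeffs = torch.rand(8, 1, 128, 128), torch.randn(8, 18)
optimizer.zero_grad()
loss = loss_fn(model(psfs), coeffs)
loss.backward()
optimizer.step()
```
In practice the training set would be built from simulated PSFs with known injected phasing errors, so that the network learns the non-linear mapping from image intensity to phase described in the abstract.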
Procedia PDF Downloads 104
1189 Microstructure Analysis of TI-6AL-4V Friction Stir Welded Joints
Authors: P. Leo, E. Cerri, L. Fratini, G. Buffa
Abstract:
The Friction Stir Welding process uses an inert rotating mandrel and a force on the mandrel normal to the plane of the sheets to generate frictional heat. The heat and the stirring action of the mandrel create a bond between the two sheets without melting the base metal. In fact, the use of a solid-state welding process limits the occurrence of defects due to the presence of gas in a melting bath and avoids the negative effects of the material's metallurgical transformations associated with phase change. The industrial importance of the Ti-6Al-4V alloy is well known. It provides an exceptionally good balance of strength, ductility, fatigue and fracture properties, together with good corrosion resistance and good metallurgical stability. In this paper, the authors analyze the microstructure of friction stir welded joints of Ti-6Al-4V processed at the same travel speed (35 mm/min) but at different rotation speeds (300-500 rpm). The microstructure of the base material (BM), as revealed by both optical microscope and scanning electron microscope analysis, is not homogeneous. It is characterized by a distorted α/β lamellar microstructure together with smashed zones of fragmented β layers and β retained grain boundary phase. The BM was welded in the as-received state, without any previous heat treatment. The microstructure of the transverse and longitudinal sections of the joints is also not homogeneous. Close to the top of the weld cross-sections, a much finer microstructure than in the initial condition was observed, while in the center of the joints the microstructure is less refined. Along the longitudinal sections, the microstructure is characterized by equiaxed grains and lamellae. Both the length and the area fraction of the lamellae increase with distance from the longitudinal axis. The hardness of the joints is higher than that of the BM. As the process temperature increases, the average microhardness slightly decreases.
Keywords: friction stir welding, microhardness, microstructure, Ti-6Al-4V
Procedia PDF Downloads 381
1188 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories. To accomplish this, it uses advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients. It also assists in early disease detection and customized treatment planning for every person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies. In this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require for the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images. Algorithms such as k-means helped group patients, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more efficiently, and work more effectively; it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
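To give a concrete flavour of the models named above, the sketch below fits a logistic-regression risk model and a k-means patient grouping on hypothetical tabular features; the data, feature meanings and cluster count are assumptions made for illustration, not the project's actual dataset.
```python
# Illustrative sketch only: a logistic-regression outcome model and k-means patient
# grouping on stand-in tabular features (e.g. age, lab values, prior admissions).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # stand-in for patient features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)

# Predict an outcome such as readmission risk from historical patient data
clf = LogisticRegression().fit(scaler.transform(X_train), y_train)
print("held-out accuracy:", clf.score(scaler.transform(X_test), y_test))

# Group patients into cohorts with similar profiles for resource planning
cohorts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaler.transform(X))
print("patients per cohort:", np.bincount(cohorts))
```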
Procedia PDF Downloads 76
1187 Fused Deposition Modelling as the Manufacturing Method of Fully Bio-Based Water Purification Filters
Authors: Natalia Fijol, Aji P. Mathew
Abstract:
We present the processing and characterisation of three-dimensional (3D) monolith filters based on polylactic acid (PLA) reinforced with various nature-derived nanospecies such as hydroxyapatite, modified cellulose fibers and chitin fibers. The nanospecies of choice were dispersed in PLA through the thermally induced phase separation (TIPS) method. The biocomposites were developed via solvent-assisted blending, and the obtained pellets were further single-screw extruded into 3D-printing filaments and processed into various geometries using the fused deposition modelling (FDM) technique. The printed prototypes included cubic, cylindrical and hourglass shapes with diverse printing infill patterns as well as varying pore structures, including uniform and multi-level gradual pore structures. The pores and channel structure, as well as the overall shape of the prototypes, were designed in an attempt to optimize the flux and maximize the adsorption-active time. FDM is a cost- and energy-efficient method which does not require expensive tools or elaborate post-processing maintenance. Therefore, FDM offers the possibility to produce customized, highly functional water purification filters with tuned porous structures suitable for the removal of a wide range of common water pollutants. Moreover, as 3D printing becomes more and more available worldwide, it allows producing portable filters at the place and time where they are most needed. The study demonstrates a preparation route for PLA-based, fully biobased composites and their processing via the FDM technique into water purification filters, addressing water treatment challenges on an industrial scale.
Keywords: fused deposition modelling, water treatment, biomaterials, 3D printing, nanocellulose, nanochitin, polylactic acid
Procedia PDF Downloads 115
1186 The Foucaultian Relationship between Power and Knowledge: Genealogy as a Method for Epistemic Resistance
Authors: Jana Soler Libran
Abstract:
The primary aim of this paper is to analyze the relationship between power and knowledge suggested in Michel Foucault's theory. Taking into consideration the role of power in knowledge production, the goal is to evaluate to what extent genealogy can be presented as a practical method for epistemic resistance. To do so, the methodology used consists of a revision of Foucault's literature concerning the topic discussed. In this sense, conceptual analysis is applied in order to understand the effect of the double dimension of power on knowledge production. In its negative dimension, power is conceived as an organ of repression, vetoing certain instances of knowledge considered deceitful. In opposition, in its positive dimension, power works as an organ of the production of truth by means of institutionalized discourses. This double declination of power leads to the first main findings of the present analysis: no truth or knowledge can lie outside power's action, and power is constituted through accepted forms of knowledge. To second these statements, Foucaultian discourse formations are evaluated, presenting external exclusion procedures as paradigmatic practices demonstrating how power creates and shapes the validity of certain epistemes. Thus, taking into consideration power's mechanisms for producing and reproducing institutionalized truths, this paper accounts for the Foucaultian praxis of genealogy as a method to reveal power's intentions, instruments, and effects in the production of knowledge. In this sense, genealogy is considered a practice which, firstly, reveals which instances of knowledge are subjugated to power and, secondly, promotes these peripheral discourses as a form of epistemic resistance. In order to counterbalance these main theses, objections to Foucault's work from Nancy Fraser, Linda Nicholson, Charles Taylor, Richard Rorty, Alvin Goldman, and Karen Barad are discussed. In essence, understanding the Foucaultian relationship between power and knowledge is essential for analyzing how contemporary discourses are produced by both traditional institutions and new forms of institutionalized power, such as mass media or social networks. Therefore, Michel Foucault's practice of genealogy is relevant not only for its philosophical contribution as a method to uncover the effects of power in knowledge production, but also because it constitutes a valuable theoretical framework for political theory and for sociological studies concerning the formation of societies and individuals in the contemporary world.
Keywords: epistemic resistance, Foucault’s genealogy, knowledge, power, truth
Procedia PDF Downloads 124
1185 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns. For a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bilateral network between videos and commentators. Analyzing a detailed dataset focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following on from this question, we ask how these features can be used to predict an agent's decision to comment on one video instead of another, considering the characteristics of the commentators, videos, topics, channels, and recommendations. We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented on. The interest lies in discovering features which have not classically been considered as markers of popularity on the platform. A quick view of our dataset shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos, built on the sub-sample of 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an exponential random graph model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video. The creation of a link is explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From this set of 391,588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1,000 subscribers and more than 4,000 hours of viewing time during the last twelve months). In the end, we have a dataset of 128,462 videos belonging to 4,093 channels. Based on these videos, we have a dataset of 1,032,771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment each, and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
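The minimal sketch below shows how the non-weighted bipartite commentator-video network described above can be represented in practice. The comment records and field names are invented stand-ins, not the authors' data schema, and the ERGM estimation itself (typically done with dedicated statistical packages) is not reproduced here.
```python
# Illustrative sketch of building the bipartite commentator-video network from
# comment records; identifiers below are made-up examples, not the study's data.
import networkx as nx

comments = [  # (commentator_id, video_id) pairs, one per unique comment
    ("user_a", "vid_1"), ("user_a", "vid_2"),
    ("user_b", "vid_1"), ("user_c", "vid_3"),
]

G = nx.Graph()
G.add_nodes_from({u for u, _ in comments}, bipartite="commentator")
G.add_nodes_from({v for _, v in comments}, bipartite="video")
G.add_edges_from(comments)      # an edge means a commentator commented on a video

# Per-video degree is the engagement measure that the ERGM then tries to explain
# from video features (duration, likes, views, channel popularity, ...).
video_degree = {n: d for n, d in G.degree() if G.nodes[n]["bipartite"] == "video"}
print(video_degree)
```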
Procedia PDF Downloads 68
1184 Optimization of Enzymatic Hydrolysis of Cooked Porcine Blood to Obtain Hydrolysates with Potential Biological Activities
Authors: Miguel Pereira, Lígia Pimentel, Manuela Pintado
Abstract:
Animal blood is a major by-product of slaughterhouses and still represents a cost and an environmental problem in some countries. To be eliminated, blood must be stabilised by cooking, and afterwards the slaughterhouses have to pay for its incineration. In order to reduce the elimination costs and valorise the high protein content, the aim of this study was the optimization of the hydrolysis conditions, in terms of enzyme ratio and time, in order to obtain hydrolysates with biological activity. Two enzymes were tested in this assay: pepsin and proteases from Cynara cardunculus (cardosins). The latter has the advantage of being widely used in the Portuguese dairy industry and has a low price. The screening assays were carried out over a time range of 0 to 10 h and using an enzyme/reaction volume ratio between 0 and 5%. The assays were performed at the optimal conditions of pH and temperature for each enzyme: 55 °C at pH 5.2 for cardosins and 37 °C at pH 2.0 for pepsin. After the reaction, the hydrolysates were evaluated by FPLC (fast protein liquid chromatography) and tested for their antioxidant activity by the ABTS method. The FPLC chromatograms showed different profiles when comparing the enzymatic reactions with the control (no enzyme added). The chromatograms exhibited new peaks with lower molecular weight that were not present in the control samples, demonstrating hydrolysis by both enzymes. Regarding the antioxidant activity, the best results for both enzymes were obtained using an enzyme/reaction volume ratio of 5% during 5 h of hydrolysis. However, extending the reaction did not significantly affect the antioxidant activity, which is relevant from an industrial point of view in terms of process cost. In conclusion, enzymatic blood hydrolysis can be a better alternative to the current elimination process, allowing the industry to reuse an ingredient with biological properties and economic value.
Keywords: antioxidant activity, blood, by-products, enzymatic hydrolysis
Procedia PDF Downloads 509
1183 Studies of Carbohydrate, Antioxidant, Nutrient and Genomic DNA Characterization of Fresh Olive Treated with Alkaline and Acidic Solvent: An Innovation
Authors: A. B. M. S. Hossain, A. Abdelgadir, N. A. Ibrahim
Abstract:
Fresh ripe olives cannot be consumed immediately after harvest due to their excessive bitterness, caused by polyphenols acting as antioxidants. Industrial processing is needed to make the fruit edible. A laboratory processing technique has been used to make it edible using an acid (vinegar, 5% acetic acid) and an alkaline solvent (NaOH). Based on the treatments and their outcomes, innovative data have been obtained in this regard. The experiment was conducted to investigate the biochemical content, nutritional characterization and genomic DNA characterization of olive fruit treated with an alkaline solvent (sodium hydroxide) and an acidic solvent (5% acetic acid, vinegar). The treatments used were: control (no water), water control, 10% sodium hydroxide (NaOH), vinegar (5% acetic acid), vinegar + NaOH, and vinegar + NaOH + hot water. Our results showed that the inverted sugar and glucose contents were higher in the vinegar- and NaOH-treated olives than in the other treatments. The fructose content was highest in the vinegar + NaOH treated fruit. The nutrient contents NO3, K, Ca and Na were higher in the treated fruit than in the control fruit. Moreover, the maximum K content was observed across all treatments compared to the other nutrient contents. The most acidic (lowest pH, sour) condition was found in the treated fruit. The DNA yield was higher in the water control than in the acid- and alkaline-treated olives. The DNA band was wider in the water-control olives compared to the NaOH, vinegar, vinegar + NaOH and vinegar + NaOH + hot water treatments. Finally, the results suggest that the vinegar + NaOH treated olive fruit was the best for fresh olive home processing for edible purposes after harvesting.
Keywords: olive, vinegar, sugars, DNA band, bioprocess biotechnology
Procedia PDF Downloads 185
1182 Constructing the Cult of the Self: On White, Working-Class Males and the Neoliberalisation of Identities: An Autoethnographic Study
Authors: Dane B. Norris
Abstract:
This paper offers a reflective and reflexive examination of the lived reality of a group of young white, working-class males engaged in secondary education in England at a time when this population is widely recognised as the lowest-attaining ethnic group within British schools. The focus of the paper is an exploration of the development of identities and aspirations alongside contemporary demographic shifts in the British population, within the intersection of neoliberal education policies and the emerging ideological conflict between identity conservatism and liberalism. The construction and performance of intersecting social-class, gender, ethnic and national identities are considered, as well as the process through which socially constructed narratives inform identities and aspirations. Evocative autoethnography is then employed to offer reflections on working-class habitus and, in particular, the classed and gendered codes that underpin expectations of manhood in post-industrial culture within an education system which seemingly requires the abandonment of aspects of a working-class background, affiliation, and identity. Findings from the study identify the emergence of a culture of hyper-individualisation amongst white, working-class males in schools and a belief in the meritocratic ideologies of the New Right. In particular, the breakdown of the social contract, including notions of political and civic responsibility, coupled with the symbolic violence perpetrated against working-class culture and solidarity in British schools, has informed the construction of a working-class masculinity which values the individual entrepreneur over the collective and depoliticizes students to the extent that a focus on the spectacle and performance of success has replaced individual and collective investment in community.
Keywords: education, identity, masculinity, neoliberalism, working-class
Procedia PDF Downloads 101
1181 Effect of Globalization on Flow Performance in Godean Jathilan Pranesa Yogyakarta
Authors: Maria Armalita Tumimbang
Abstract:
Jathilan, or Kuda Lumping, is a dance-drama with warfare as its main theme, with the dancers mimicking mighty horsemen armed with swords in the middle of the battlefield. However, to most people this dance-drama is more closely identified with magically nuanced dance and trance, besides the attractive and even dangerous acts of the dancers, such as eating shards of broken glass in a state of trance. Several musicians play the accompaniment, made up of an incomplete gamelan set that includes saron, kendang, gong, and kempul. In general, it remains unchanged with regard to the seemingly monotonous beat and occasional “bumps” that may lead the dancers into a trance state. The dances performed also tend to follow repetitive patterns. The development of Jathilan and other traditional performing arts in this era of globalization and industrialization can be divided into two: firstly, they are subjected to the power of industrialization, which means their performances are recorded for commercial purposes; and secondly, they are presented in live performances. To some people, live performances are preferable, and for some reasons they represent a form of cultural résistance to globalization and industrialization. The present study is qualitative in nature. It aims to describe the music and performance of Jathilan in the era of globalization in Indonesia. The subject of this study is a traditional art group, Jathilan Kuda Pranesa of Godean, Yogyakarta. Data collection was conducted through interviews with the leader of the group, the dancers and music players, as well as the audience. The wave of globalization has brought a strong capitalistic industrialization that renders traditional arts simply into industrial commodities tailored to the needs of the era. This very fact has made the repositioning of the traditional art performance of Jathilan a necessity. By repositioning we mean that Jathilan performances should be returned to their traditional forms and functions as they used to be.
Keywords: Jathilan, globalization, industrialization, music, performance
Procedia PDF Downloads 304
1180 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns associated with two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently would not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, with accuracy similar to the RF model: R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
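The sketch below illustrates the greedy layer-wise pretraining idea in its simplest form: each hidden layer is first trained as a shallow autoencoder on the activations of the previous layer, and the pre-trained stack is then fine-tuned with a regression head. Layer sizes, epoch counts and the random stand-in data are illustrative assumptions, not the authors' configuration.
```python
# Hedged sketch of greedy layer-wise pretraining for a regression DNN.
import torch
import torch.nn as nn

def pretrain_layer(layer, data, epochs=50):
    """Train one layer as a shallow autoencoder, keep only the encoder weights."""
    decoder = nn.Linear(layer.out_features, layer.in_features)
    opt = torch.optim.Adam(list(layer.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        recon = decoder(torch.relu(layer(data)))
        loss = nn.functional.mse_loss(recon, data)
        loss.backward()
        opt.step()
    return torch.relu(layer(data)).detach()       # activations feed the next layer

X = torch.rand(200, 5)                            # stand-in process inputs
y = torch.rand(200, 6)                            # stand-in DRM outputs (flow, H2/CO, ...)

layers = [nn.Linear(5, 32), nn.Linear(32, 16)]
h = X
for layer in layers:                              # greedy: one layer at a time
    h = pretrain_layer(layer, h)

# Stack the pre-trained layers with a regression head and fine-tune end to end
model = nn.Sequential(layers[0], nn.ReLU(), layers[1], nn.ReLU(), nn.Linear(16, 6))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```
The benefit claimed in the abstract is that this layer-by-layer initialization lets a deep model reach good accuracy from a relatively small experimental dataset, where training all layers from scratch would struggle.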
Procedia PDF Downloads 86
1179 New Platform of Biobased Aromatic Building Blocks for Polymers
Authors: Sylvain Caillol, Maxence Fache, Bernard Boutevin
Abstract:
Recent years have witnessed an increasing demand for renewable-resource-derived polymers owing to increasing environmental concern and the restricted availability of petrochemical resources. Thus, a great deal of attention has been paid to polymers derived from renewable resources and to thermosetting materials especially, since the latter are crosslinked polymers and thus cannot be recycled. Also, most thermosetting materials contain aromatic monomers, able to confer high mechanical and thermal properties to the network. Therefore, access to biobased, non-harmful, and available aromatic monomers is one of the main challenges of the years to come. Starting from phenols available in large volumes from renewable resources, our team designed platforms of chemicals usable for the synthesis of various polymers. One of these phenols, vanillin, which is readily available from lignin, was studied more specifically. Various aromatic building blocks bearing polymerizable functions were synthesized: epoxy, amine, acid, carbonate, alcohol, etc. These vanillin-based monomers can potentially lead to numerous polymers. The example of epoxy thermosets was taken, as there is also the problem of bisphenol A substitution for these polymers. Materials were prepared from the biobased epoxy monomers obtained from vanillin. Their thermo-mechanical properties were investigated, and the effect of the monomer structure is discussed. The properties of the prepared materials were found to be comparable to the current industrial reference, indicating a potential replacement of petro-sourced, bisphenol A-based epoxy thermosets by biosourced, vanillin-based ones. The tunability of the final properties was achieved through the choice of monomer and through a well-controlled oligomerization reaction of these monomers. This follows the same strategy as the one currently used in industry, which supports the potential of these vanillin-derived epoxy thermosets as substitutes for their petro-based counterparts.
Keywords: lignin, vanillin, epoxy, amine, carbonate
Procedia PDF Downloads 232
1178 Relationship of Macro-Concepts in Educational Technologies
Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez
Abstract:
This research reflects on and identifies the explanatory variables involved in educational technology and the relationships between them, all encompassed in four macro-concepts: cognitive inequality, economy, food and language. These give the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space and teachers, all of them interacting with each other and giving rise to what is called educational technology management. These elements contribute to a very specific knowledge of communications equipment, networks and computer equipment, systems and content repositories. The intention is to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies and real needs; in short, to respect the customs of the different towns, villages or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which allows each of the variables that, in our opinion, must be taken into account to be explained in order to achieve an optimal incorporation of educational technology in a model that gives results in the medium term. The idea is that, in an encompassing way, the concepts are integrated with others of greater coverage until reaching macro-concepts of national coverage that serve as elements of conciliation in the different federal and international reforms. At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces and teachers, all globally immersed in the macro-concepts of cognitive inequality, economy, food and language. One of the major contributions of this article is to frame this idea in an algorithm that allows the evaluation of this indicator to be as unbiased as possible, since the other indicators are to be taken from international reference entities such as the OECD in the area of the education systems studied, so that they are not influenced by particular political or interest-group pressures. This work opens the way for a relationship between the entities involved, conceptual, procedural and human, to clearly identify the convergence of their impact on the problem of education and how this relationship can contribute to an improvement, and it also shows the possibility of reaching a comprehensive education reform for all.
Keywords: relationships macro-concepts, cognitive inequality, economics, alimentation and language
Procedia PDF Downloads 199
1177 Partial Replacement for Cement and Coarse Aggregate in Concrete by Using Egg Shell Powder and Coconut Shell
Authors: A. K. Jain, M. C. Paliwal
Abstract:
The production of cement leads to the emission of large amounts of carbon dioxide gas into the atmosphere, which is a major contributor to the greenhouse effect and global warming; hence it is necessary either to search for another material or to partly replace cement with some other material. According to practical demonstrations and reports, egg shell powder (ESP) can be used as a binding material for different field applications as it contains some of the properties of lime. It can partially replace cement and, further, it can be used in different proportions to enhance the performance of cement. It can be used as a first-class alternative for material reuse and waste recycling practices. Eggshell is calcium-rich and analogous to limestone in chemical composition. Therefore, the use of eggshell waste for the partial replacement of cement in concrete is feasible. Different studies reveal that the plasticity index of soil can be improved by adding eggshell waste to clay soils, and it has wide application in construction projects including earth canals and earthen dams. The scarcity of aggregates is also increasing nowadays. The utilization of industrial waste or secondary materials is increasing in different construction applications. Coconut shell has been successfully used in the construction industry as a partial or full replacement for coarse aggregates. The use of coconut shell gives the advantage of using a waste material to partially replace the coarse aggregate. Studies carried out on coconut shell indicate that it can partially replace the aggregate. It has good strength and modulus properties, along with the advantage of high lignin content. It absorbs relatively little moisture due to its low cellulose content. In this paper, the studies carried out on egg shell powder and coconut shell are discussed. The optimum proportions of these materials to be used for the partial replacement of cement and aggregate are also discussed.
Keywords: greenhouse, egg shell powder, binding material, aggregates, coconut shell, coarse aggregates
Procedia PDF Downloads 2531176 Puerto Rico and Pittsburg: A Social Psychology Perspective on How Perceived Infringement on Job and Cultural Identity Unite Racially Different Working-Class Groups
Authors: Reagan Rodriguez
Abstract:
With a growing divide between political echo chambers in the United States, exacerbated by racial and income inequality, it might seem unfathomable to draw connections between the working class of an industrial city and that of a U.S. territory. Yet in regions where the economy has been hit by dwindling job infrastructure, or where natural disasters have left indelible marks on an island already marked by colonial imperialism, a larger shared social identity is at play. Fracking has long been an intergenerational and stable work opportunity for many in Pittsburg, PA, yet the rising severity of global climate change may soon influence policy and even presidential elections, which could reduce jobs in the industry. Cockfighting, considered a cultural mainstay on the island of Puerto Rico, has already been the target of legislation banning the activity, cutting out one of the most lucrative aspects of a severely injured economy. Insecurity, infringement, and isolation, combined with being tied to a working-class bracket with no other opportunities in proximity, have left both groups expressing similar frustration, while a larger shared identity politics offers few other options for developing social mobility. This paper uses thematic analysis to compare convergent and divergent themes on internet forums among unionized fracking workers in Pittsburg and cockfighters in Puerto Rico. The research examines when group identity, in relation to job and cultural identity, is strongest and at which points it is most malleable; when intergenerational job identity becomes part of one's cultural identity, its override may be strongest when it is perceived as threatened. Final findings and limitations are comprehensively outlined.Keywords: identity threat, social psychology, group identity, culture and social mobility
Procedia PDF Downloads 1501175 Gearbox Defect Detection in the Semi Autogenous Mills Using the Vibration Analysis Technique
Authors: Mostafa Firoozabadi, Alireza Foroughi Nematollahi
Abstract:
Semi-autogenous mills are designed for grinding primary crushed ore and are the most widely used mills in concentrators globally. Any defect in a semi-autogenous mill can stop the production line. The gearbox is a significant part of a rotating machine or a mill, so gearbox monitoring is necessary to prevent unwanted failures. When a defect occurs in a gearbox bearing, it can be transferred to other parts of the equipment such as the inner ring, outer ring, balls, and the bearing cage. Vibration analysis is one of the most effective and common ways to detect bearing defects in mills. The vibration signal in a mill is generated by different parts of the mill, including the electromotor, pinion girth gear, various rolling bearings, and the tire. When the vibration generated by these parts is superimposed on the gearbox vibration spectrum, accurate and timely defect detection in the gearbox becomes difficult. In this paper, a new method is proposed to detect gearbox bearing defects in a semi-autogenous mill accurately and in time, using vibration signal analysis. In this method, when the vibration level in the vibration curve increases, the probability of a defect is investigated by comparing the measured vibration values with the standard ones. All vibration frequencies are then extracted from the vibration signal, and the defect is identified from the vibration spectrum curve. The method was implemented on the semi-autogenous mills of the Golgohar mining and industrial company in Iran. The results show that the proposed method can detect bearing looseness accurately and in time; after detection, the bearing is opened before the equipment fails and predictive maintenance actions are carried out.Keywords: condition monitoring, gearbox defects, predictive maintenance, vibration analysis
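A minimal sketch of this kind of two-stage check (overall level against a standard value, then a look at the spectrum) is given below. It is not the authors' implementation; the sampling rate, thresholds, and characteristic defect frequencies are assumptions supplied by the caller.

import numpy as np

def detect_bearing_defect(signal, fs, baseline_rms, defect_freqs, tol=2.0, band=1.0):
    """Flag a possible gearbox bearing defect from a vibration signal.

    signal        : 1-D array of acceleration samples
    fs            : sampling rate in Hz
    baseline_rms  : RMS vibration level of the healthy equipment (standard value)
    defect_freqs  : characteristic bearing defect frequencies in Hz (e.g. BPFO, BPFI)
    tol           : overall RMS must exceed tol * baseline before the spectrum is inspected
    band          : half-width in Hz of the window searched around each defect frequency
    """
    rms = np.sqrt(np.mean(signal ** 2))
    if rms < tol * baseline_rms:
        return False, []          # vibration level within the normal range

    # Amplitude spectrum of the measured signal
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    noise_floor = np.median(spectrum)

    # Check whether any characteristic defect frequency stands out of the spectrum
    hits = []
    for f in defect_freqs:
        mask = (freqs > f - band) & (freqs < f + band)
        if mask.any() and spectrum[mask].max() > 10 * noise_floor:
            hits.append(f)
    return bool(hits), hits

In practice the defect frequencies would be computed from the bearing geometry and shaft speed, and the peak-over-noise factor tuned to the mill in question.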
Procedia PDF Downloads 4651174 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection
Authors: Evan Lowhorn, Rocio Alba-Flores
Abstract:
Traditional classification Convolutional Neural Networks (CNNs) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone's camera in real time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. Such basic detectors are regularly used to determine what type of object an item is, such as "person" or "dog." A more recent advancement in computer vision, particularly for human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROIs), on their bodies within an image: shoulders, elbows, knees, heads, and so on. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals captured by the onboard camera, it is important to have a simple method for identifying the pilot among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with both body and hand keypoint detection enabled; OpenPose supports combining multiple keypoint detection methods in real time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier, and hand keypoint detection, with ROIs for each finger, then offers a greater variety of signal options once the pilot is identified. In this work, an individual must raise their non-control arm to be identified as the operator and then send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control and can then begin controlling the drone with their other hand. This is all performed mid-flight with no landing or script editing required. When run on a desktop with a discrete NVIDIA GPU, the drone's 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to body and hand detection only, allows this control method to perform as intended while maintaining the responsiveness required for practical use.Keywords: computer vision, drone control, keypoint detection, openpose
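A rough sketch of the operator-selection step described above is shown here. The keypoint indices follow the common 18-point COCO layout produced by OpenPose, but the exact output format handling and the choice of the left arm as the non-control arm are assumptions for illustration, not the authors' code.

# Hypothetical sketch of pilot selection from per-person body keypoints.
# COCO-18 indices: 2/5 = right/left shoulder, 4/7 = right/left wrist.
# Image y-coordinates grow downward, so "raised" means wrist_y < shoulder_y.
R_SHOULDER, R_WRIST = 2, 4
L_SHOULDER, L_WRIST = 5, 7

def arm_raised(person, shoulder, wrist, min_conf=0.3):
    """True if the wrist keypoint is detected above the shoulder keypoint."""
    sx, sy, sc = person[shoulder]
    wx, wy, wc = person[wrist]
    return sc > min_conf and wc > min_conf and wy < sy

def select_operator(people, non_control_side="left"):
    """Return the index of the first person holding up their non-control arm.

    people : list of arrays shaped (num_keypoints, 3) with (x, y, confidence)
             for each detected person in the current camera frame.
    """
    if non_control_side == "left":
        shoulder, wrist = L_SHOULDER, L_WRIST
    else:
        shoulder, wrist = R_SHOULDER, R_WRIST
    for idx, person in enumerate(people):
        if arm_raised(person, shoulder, wrist):
            return idx            # this person becomes the pilot
    return None                   # no operator: ignore all gestures this frame

Once an operator index is returned, the hand keypoints belonging to that person's control hand would be mapped to flight commands; everyone else in the frame is ignored until the operator lowers the identifying arm.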
Procedia PDF Downloads 1841173 Corporate Water Footprint Assessment: The Case of Tata Steel
Authors: Sujata Mukherjee, Arunavo Mukherjee
Abstract:
Water covers 70 per cent of our planet; however, freshwater is incredibly rare, and water scarcity has been listed as the highest-impact global risk. The problems related to freshwater scarcity have multiplied as the human population has more than doubled, compounded by climate change, changing water cycles that lead to droughts and floods, and a rise in water pollution. Businesses, governments, and local communities are constrained by water scarcity and face growing challenges to their growth and sustainability. Water footprinting as an indicator of water use was introduced in 2002. A business water footprint measures the total water consumed to produce the goods and services the business provides: the water that goes into the production and manufacturing of a product or service, the water used throughout the supply chain, and the water used during the use of the product. A case study approach was applied to describe the efforts of Tata Steel, based on a series of semi-structured in-depth interviews with top executives of the company as well as observation and content analysis of internal and external documents about the company's efforts in sustainable water management. Tata Steel draws the water required for industrial use from surface water sources, primarily perennial rivers and streams, internal reservoirs and municipal supplies. The focus of the present study was to explore Tata Steel's engagement in sustainable water management, concentrating on water footprint accounting as a tool to account for water use in the steel supply chain at its Jamshedpur plant. The findings enabled the researchers to conclude that no sources of water are adversely affected by the company's production of steel at Jamshedpur.Keywords: sustainability, corporate responsibility, water management, risk management, business engagement
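The footprint definition above amounts to a sum of water consumed across life-cycle stages. A hedged sketch of that accounting follows; the stage names and figures are placeholders for illustration and are not Tata Steel data.

def product_water_footprint(stages):
    """Total water footprint of a product as the sum of water consumed per stage.

    stages : dict mapping a life-cycle stage to cubic metres of freshwater
             consumed per tonne of product.
    """
    return sum(stages.values())

# Placeholder figures for illustration only (m3 per tonne of product)
example = {
    "supply_chain": 2.0,   # water embedded in purchased raw materials
    "production": 3.5,     # water consumed on site during manufacturing
    "product_use": 0.5,    # water consumed during use of the product
}
print(product_water_footprint(example))  # 6.0 m3/t in this made-up example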
Procedia PDF Downloads 2731172 Studies on the Use of Sewage Sludge in Agriculture or in Incinerators
Authors: Catalina Iticescu, Lucian Georgescu, Mihaela Timofti, Dumitru Dima, Gabriel Murariu
Abstract:
The amounts of sludge resulting from the treatment of domestic and industrial wastewater can create serious environmental problems if no solutions are found to eliminate them. At present, the predominant method of sewage sludge disposal is to store it and use it in agricultural applications. Sewage sludge has fertilizer properties and can be used to enrich agricultural soils because of its nutrient content; in addition to the nutrients that support plant growth (nitrogen and phosphorus), the sludge also contains heavy metals in varying amounts. An increasingly used alternative is incineration: thermal processes can convert large amounts of sludge into useful energy. The sewage sludge analyzed for the present paper was extracted from the Wastewater Treatment Plant (WWTP) of Galati, Romania. The physico-chemical parameters determined were pH (upH), nutrients and heavy metals, using electrochemical and spectrophotometric methods and energy-dispersive X-ray analysis (EDX). The tests on the nutrient content of the sewage sludge showed that the existing nutrients can be used to increase the fertility of agricultural soils, and the conclusion reached was that the sludge can be used safely on agricultural land with good agricultural productivity. To use sewage sludge as a fuel, its calorific value must be known: for wet sludge the calorific value is low, while for dry sludge it is high, and the higher and lower calorific values are determined only for the dry solids. The apparatus used to determine the calorific value was a Parr 6755 Solution Calorimeter (Parr Instrument Company, USA, 2010 model). The calorific capacities of the studied sludge indicate that it can be used successfully in incinerators; mixed with coal, it can also be used to produce electricity, which reduces the cost of obtaining electricity and considerably reduces the amount of sewage sludge.Keywords: agriculture, incinerators, properties, sewage sludge
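As an illustration of how calorific values measured on dry solids translate into recoverable energy from wet sludge, the sketch below applies the standard moisture correction from higher to lower heating value; all numbers are hypothetical and are not measurements from the Galati plant, and the water formed from fuel-bound hydrogen is neglected for simplicity.

H_VAP = 2.44  # MJ/kg, latent heat of water vaporisation at 25 degrees C

def lhv_as_received(hhv_dry, moisture):
    """Approximate lower heating value of wet sludge (MJ per kg, as received).

    hhv_dry  : higher heating value of the dry solids, MJ/kg
    moisture : moisture mass fraction of the sludge as fired (0-1)
    """
    return hhv_dry * (1.0 - moisture) - H_VAP * moisture

def electricity_per_tonne(hhv_dry, moisture, plant_efficiency=0.25):
    """Rough electrical yield per tonne of wet sludge, in kWh."""
    lhv = lhv_as_received(hhv_dry, moisture)               # MJ per kg of wet sludge
    return max(lhv, 0.0) * 1000 / 3.6 * plant_efficiency   # MJ/t -> kWh/t

# Hypothetical figures: 14 MJ/kg dry solids, 70 % moisture, 25 % plant efficiency
print(round(electricity_per_tonne(14.0, 0.70)))  # about 173 kWh per tonne of wet sludge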
Procedia PDF Downloads 171