136 Sustainability in Space: Implementation of Circular Economy and Material Efficiency Strategies in Space Missions
Authors: Hamda M. Al-Ali
Abstract:
The ultimate aim of space exploration has centered on the possibility of life on other planets in the solar system. This aim is driven by the detrimental effects that climate change could potentially have on human survival on Earth in the future. This drives humans to search for feasible solutions to increase environmental and economic sustainability on Earth and to evaluate and explore the possibility of human survival on other planets such as Mars. To do that, frequent space missions are required to meet the ambitious human goals. This means that reliable and affordable access to space is required, which could be largely achieved through the use of reusable spacecraft. Therefore, materials and resources must be used wisely to meet the increasing demand. Space missions are currently extremely expensive to operate. However, reusing materials, and hence spacecraft, can potentially reduce overall mission costs as well as the negative impact on both space and Earth environments. This is because reusing materials leads to less waste generated per mission, and therefore fewer landfill sites are required. Reusing materials reduces resource consumption, material production, and the need for processing new and replacement spacecraft and launch vehicle parts. Consequently, this will ease and facilitate human access to outer space, as it will reduce the demand for scarce resources, which will boost material efficiency in the space industry. Material efficiency expresses the extent to which resources are consumed in the production cycle and how the waste produced by the industrial process is minimized. The strategies proposed in this paper to boost material efficiency in the space sector are the introduction of key performance indicators able to measure material efficiency, and the introduction of clearly defined policies and legislation that can be easily implemented within the general practices of the space industry. Another strategy to improve material efficiency is to amplify energy and resource efficiency through reusing materials. The circularity of various spacecraft materials such as Kevlar, steel, and aluminum alloys could be maximized by reusing them directly or after galvanizing them with another layer of material to act as a protective coat. This paper aims to investigate and discuss how to improve material efficiency in space missions by considering circular economy concepts, so that space and Earth become more economically and environmentally sustainable. The circular economy is a transition from a make-use-waste linear model to a closed-loop socio-economic model, which is regenerative and restorative in nature. The implementation of a circular economy will reduce waste and pollution by maximizing material efficiency, ensuring that businesses can thrive and be sustained. Further research into the extent to which reusable launch vehicles reduce space mission costs is also discussed, along with the environmental and economic implications they could have on the space sector and the environment. This has been examined through an in-depth literature review of published reports, books, scientific articles, and journals. Keywords such as material efficiency, circular economy, reusable launch vehicles, and spacecraft materials were used to search for relevant literature.
Keywords: circular economy, key performance indicator, material efficiency, reusable launch vehicles, spacecraft materials
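As a rough illustration of the kind of key performance indicator proposed above, the following Python sketch computes a simple material-efficiency ratio for a mission. The function, its inputs, and the example figures are illustrative assumptions, not values from the paper.

```python
def material_efficiency(mass_reused_kg: float, mass_recycled_kg: float,
                        total_material_input_kg: float) -> float:
    """Fraction of mission material input kept in circulation.

    1.0 would mean a fully closed loop; 0.0 a purely linear
    make-use-waste mission. A hypothetical KPI, not the paper's.
    """
    if total_material_input_kg <= 0:
        raise ValueError("total material input must be positive")
    return (mass_reused_kg + mass_recycled_kg) / total_material_input_kg

# Example: a launch vehicle where 18 t of a 25 t dry mass is reused
# and a further 2 t is recycled (assumed figures)
print(material_efficiency(18_000, 2_000, 25_000))  # 0.8
```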
Procedia PDF Downloads 125
135 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear
Authors: Grant Bifolchi
Abstract:
As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of growing concern. Ghost Gear—also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG)—represents one of the greatest threats to the world’s oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments, governmental bodies and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats and more from their respective bodies of water. However, these organizations’ limited ability to effectively manage the files and documents about the environmental problem further complicates matters. In Ghost Gear monitoring and management, organizations face additional complexities. Whether it’s data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, or the application of enforcement due to disparate data—all of these factors are placing massive strains on organizations struggling to save the planet from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, BCom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to:
• Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection and disposal.
• Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards.
• Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress—including acknowledging receipt of the report and sharing it with all pertinent stakeholders to ensure approvals are secured.
• Enable and utilize Business Intelligence (BI) and Analytics to store and analyze data to optimize organizational performance, maintain anytime-visibility of report status, user accountability, scheduling and management, and foster governmental transparency.
• Maintain compliance reporting through highly defined, detailed and automated reports—enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners.
Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology
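As a minimal sketch of how the Ghost Gear data categories listed above could be structured for ingest, the following Python record is one possible shape; the field names and types are illustrative assumptions, not a published schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class GhostGearReport:
    # Data categories named in the session description; field names
    # and types are illustrative assumptions, not a real schema.
    gear_type: str                     # e.g. "net", "trap", "line"
    latitude: float
    longitude: float
    size_estimate_m: Optional[float] = None
    ownership: Optional[str] = None    # vessel or fishery, if known
    region: Optional[str] = None       # regional assignment
    collected: bool = False
    disposed: bool = False
    acknowledged: bool = False         # receipt confirmed to reporter
    reported_at: datetime = field(default_factory=datetime.utcnow)

report = GhostGearReport(gear_type="net", latitude=44.65,
                         longitude=-63.57, region="NW Atlantic")
print(report)
```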
Procedia PDF Downloads 96
134 The Temperature Degradation Process of Siloxane Polymeric Coatings
Authors: Andrzej Szewczak
Abstract:
The study of the effect of high temperatures on polymer coatings is an important field of research into their properties. Polymers, as materials with numerous advantages (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently the most widely used modern building materials, among others in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which limits their usage: low resistance to high temperatures and brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this state includes methods of modifying the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reproduce the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's operation and to determine the phenomena occurring at times of temperature fluctuation. For this reason, alternative methods are being developed that allow rapid modeling and simulation of the actual operating conditions of polymeric coating materials. Temperature influence in the environment is typically prolonged in nature. Studies therefore typically involve the measurement of variation in one or more physical and mechanical properties of such a coating over time. Based on these results, it is possible to determine the effects of temperature loading and to develop methods for improving the coatings' properties. This paper describes stability studies of silicone coatings deposited on the surface of a ceramic brick. The brick's surface was hydrophobized with two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1 - 5) and an aqueous solution of silicon (series 6 - 10). In order to enhance the stability of the film formed on the brick's surface and make it resistant to variable temperature and humidity loading, nano silica was added to the polymer. The right combination of the liquid polymer phase and the solid nano silica phase was obtained by disintegrating the mixture by sonication. The changes in viscosity and surface tension of the polymers were determined, these being the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the brick surfaces were then subjected to temperature loading at 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (at a specified number of cycles) the changes in surface hardness (using the Vickers method) and the absorption of individual samples. As a result, on the basis of the obtained results, the degradation process of the polymer coatings, related to changes in their durability over time, was determined.
Keywords: silicones, siloxanes, surface hardness, temperature, water absorption
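For reference, the two measurements described above rest on standard relations. The following Python sketch implements the textbook Vickers hardness formula and a mass-based water absorption ratio; the example load and masses are assumptions, not data from the study.

```python
import math

def vickers_hardness(load_kgf: float, diagonal_mm: float) -> float:
    """Standard Vickers relation HV = 2*F*sin(136°/2)/d² ≈ 1.8544*F/d²,
    with the load F in kgf and the mean indentation diagonal d in mm."""
    return 2 * load_kgf * math.sin(math.radians(136 / 2)) / diagonal_mm ** 2

def water_absorption_pct(mass_dry_g: float, mass_wet_g: float) -> float:
    """Mass-based water absorption after immersion, in percent."""
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_dry_g

# Example: 0.3 kgf load leaving a 55 µm mean diagonal (assumed values)
print(round(vickers_hardness(0.3, 0.055), 1))      # ≈ 183.9 HV
print(round(water_absorption_pct(250.0, 262.5), 1))  # 5.0 %
```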
Procedia PDF Downloads 243
133 A Distributed Smart Battery Management System – sBMS, for Stationary Energy Storage Applications
Authors: António J. Gano, Carmen Rangel
Abstract:
Currently, electric energy storage systems for stationary applications have attracted increasing interest, namely with the integration of local renewable energy power sources into energy communities. Li-ion batteries are considered the leading electric storage devices to achieve this integration, and Battery Management Systems (BMS) are decisive for their control and optimum performance. In this work, the development of a smart BMS (sBMS) prototype with a modular distributed topology is described. The system, still under development, has a distributed architecture with modular characteristics to operate with different battery pack topologies and charge capacities, integrating adaptive algorithms for functional-state real-time monitoring and management of multicellular Li-ion batteries, and is intended for application in the context of a local energy community fed by renewable energy sources. This sBMS includes several developed hardware units: (1) cell monitoring units (CMUs) for interfacing with each individual cell or module monitored within the battery pack; (2) a battery monitoring and switching unit (BMU) for global battery pack monitoring, thermal control and functional operating state switching; (3) a main management and local control unit (MCU) for local sBMS management and control, also serving as a communications gateway to external systems and devices. This architecture is fully expandable to battery packs with a large number of cells or modules interconnected in series, as the several units have local data acquisition and processing capabilities, communicate over a standard CAN bus, and will be able to operate almost autonomously. The CMU units are intended to be used with Li-ion cells but can be used with other cell chemistries, with output voltages within the 2.5 to 5 V range. The characteristics and specifications of the different units are described, including the implemented hardware solutions. The developed hardware supports both passive and active methods of charge equalization, considered fundamental functionalities for optimizing the performance and useful lifetime of a Li-ion battery pack. The functional characteristics of the different units of this sBMS, including the acquisition of different process variables using a flexible set of sensors, can support the development of custom algorithms for estimating the parameters defining the functional states of the battery pack (State-of-Charge, State-of-Health, etc.) as well as different charge-equalizing strategies and algorithms. This sBMS is intended to interface with other systems and devices using standard communication protocols, like those used by the Internet of Things. In the future, this sBMS architecture can evolve to a fully decentralized topology, with all the units using Wi-Fi protocols and integrating a mesh network, making the MCU unit unnecessary. The status of the work in progress is reported, leading to conclusions on the system already executed, considering the implemented hardware solution not only as a fully functional, advanced and configurable battery management system but also as a platform for developing custom algorithms and optimization strategies to achieve better performance of stationary electric energy storage devices.
Keywords: Li-ion battery, smart BMS, stationary electric storage, distributed BMS
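As an illustration of the kind of custom state-estimation algorithm such a platform can host, here is a minimal coulomb-counting State-of-Charge sketch in Python. It is a generic textbook method, not the sBMS's own estimator, and the capacity and current figures are assumptions.

```python
class CoulombCounter:
    """Minimal State-of-Charge estimator by current integration.

    A generic illustration of an algorithm a BMS could host;
    not the sBMS's own estimator.
    """

    def __init__(self, capacity_ah: float, soc_init: float = 1.0):
        self.capacity_as = capacity_ah * 3600.0  # ampere-seconds
        self.soc = soc_init

    def update(self, current_a: float, dt_s: float) -> float:
        # Convention: positive current = discharge
        self.soc -= current_a * dt_s / self.capacity_as
        self.soc = min(max(self.soc, 0.0), 1.0)  # clamp to [0, 1]
        return self.soc

bms = CoulombCounter(capacity_ah=50.0, soc_init=0.9)
print(bms.update(current_a=10.0, dt_s=60.0))  # ≈ 0.8967 after 1 min at 10 A
```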
Procedia PDF Downloads 100
132 W-WING: Aeroelastic Demonstrator for Experimental Investigation into Whirl Flutter
Authors: Jiri Cecrdle
Abstract:
This paper describes the concept of the W-WING whirl flutter aeroelastic demonstrator. Whirl flutter is the specific case of flutter that accounts for the additional dynamic and aerodynamic influences of the engine's rotating parts. The instability is driven by motion-induced unsteady aerodynamic propeller forces and moments acting in the propeller plane. Whirl flutter instability is a serious problem that may cause unstable vibration of a propeller mounting, leading to the failure of an engine installation or an entire wing. The complicated physical principle of whirl flutter requires experimental validation of the analytically gained results. The W-WING aeroelastic demonstrator has been designed and developed at the Czech Aerospace Research Centre (VZLU) Prague, Czechia. The demonstrator represents the wing and engine of a twin turboprop commuter aircraft. Contrary to most past demonstrators, it includes a powered motor and thrusting propeller. It allows changes of the main structural parameters influencing the whirl flutter stability characteristics. Propeller blades are adjustable at standstill. The demonstrator is instrumented with strain gauges, accelerometers, a revolution-counting impulse sensor, an airflow velocity sensor, and a thrust measurement unit. Measurement is supported by an in-house program providing data storage and real-time depiction in the time domain as well as pre-processing into the form of power spectral densities. The engine is linked with a servo-drive unit, which enables maintaining the propeller revolutions (constant or a controlled rate ramp) and monitoring of immediate revolutions and power. Furthermore, the program manages the aerodynamic excitation of the demonstrator by aileron flapping (constant, sweep, impulse). Finally, it provides a safety guard to prevent any structural failure of the demonstrator hardware. In addition, the LMS TestLab system is used for the measurement of the structural response and for data assessment by means of FFT- and OMA-based methods. The demonstrator is intended for experimental investigations in the VZLU 3m-diameter low-speed wind tunnel. The measurement variant of the model is defined by the structural parameters: pitch and yaw attachment stiffness, pitch and yaw hinge stations, balance weight station, propeller type (duralumin or steel blades), and finally, the angle of attack of the propeller blade 75% section. The excitation is provided either by airflow turbulence or by aerodynamic excitation via aileron flapping using a frequency harmonic sweep. The experimental results are planned to be utilized for validation of analytical methods and software tools within the development of the new complex multi-blade twin-rotor propulsion system for the new generation of regional aircraft. Experimental campaigns will include measurements of aerodynamic derivatives and measurements of stability boundaries for various configurations of the demonstrator.
Keywords: aeroelasticity, flutter, whirl flutter, W-WING demonstrator
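As a sketch of the pre-processing into power spectral densities mentioned above, the following Python fragment applies Welch averaging to a synthetic accelerometer-like signal; the sampling rate and signal content are assumptions, not demonstrator data.

```python
import numpy as np
from scipy.signal import welch

fs = 2048.0  # sampling rate in Hz (assumed, not from the paper)
t = np.arange(0, 10, 1 / fs)
# Synthetic signal: one structural mode at 12.5 Hz plus broadband noise
signal = np.sin(2 * np.pi * 12.5 * t) + 0.5 * np.random.randn(t.size)

# Welch-averaged power spectral density, the form into which the
# demonstrator response is pre-processed for frequency-domain depiction
freqs, psd = welch(signal, fs=fs, nperseg=4096)
print(freqs[np.argmax(psd)])  # dominant frequency, near 12.5 Hz
```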
Procedia PDF Downloads 96
131 [Keynote Talk]: Surveillance of Food Safety Compliance of Hong Kong Street Food
Authors: Mabel Y. C. Yau, Roy C. F. Lai, Hugo Y. H. Or
Abstract:
This study is a pilot surveillance of hygiene compliance and food microbial safety of both licensed and mobile vendors selling Chinese ready-to-eat snack foods in Hong Kong. The study reflects situations similar to those of running a mobile food vending business on trucks. Hong Kong is about to launch the Food Truck Pilot Scheme by the end of 2016 or early 2017. Technically, selling food on a vehicle is no different from hawking or vending food on the street. Each type of business bears similar food safety issues and casts the same impact on public health. The present findings demonstrate exemplary situations that also apply to food trucks. Nine types of Cantonese-style snacks, 32 samples in total, were selected for microbial screening. A total of 16 vending sites, including supermarkets, street markets, and snack stores, were visited. The study finally focused on a traditional snack, the steamed rice cake with red beans called Put Chai Ko (PCK). PCK is a type of classical Cantonese pastry sold on push carts on the street. It used to be sold at room temperature and served with bamboo sticks in the old days. Some shops would sell them freshly steamed. Microbial examinations covering aerobic counts, yeast and mould, coliforms, Salmonella, as well as Staphylococcus aureus detection were carried out. Salmonella was not detected in any sample. Since PCK does not contain ingredients of beef, poultry, eggs or dairy products, the risk of the presence of Salmonella in PCK was relatively low, although other sources of contamination are possible. Coagulase-positive Staphylococcus aureus was found in 6 of the 14 samples sold at room temperature. Among these 6 samples, 3 were PCK. One of the samples was in an unacceptable range, with total colony-forming units higher than 10⁵. The rest were only satisfactory. Observational evaluations were made with checklists on personal hygiene, premises hygiene, food safety control, food storage, cleaning and sanitization, as well as waste disposal. The maximum score was 25 if total compliance was attained. The highest score among vendors was 20. Three stores were below average, and two of these stores were selling PCK. Most of the non-compliances concerned food processing facilities, sanitization conditions and waste disposal. In conclusion, although no food poisoning outbreaks happened during the time of the investigation, the risk of food hazards existed in these stores, especially among street vendors. Attention is needed to the traditional practices of food selling, as food handlers might not have sufficient knowledge to properly handle food products. Variations in food quality existed among supply chains, franchise eateries and shops. It was commonly observed that packaging and storage conditions are not properly enforced at the retail level. The same situation could be reflected across the food business. This indicates the need for food safety training in the industry and loopholes in quality control among businesses.
Keywords: Cantonese snacks, food safety, microbial, hygiene, street food
Procedia PDF Downloads 303
130 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNN) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of the learning process for the investigated models was mostly based on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of this system is a vector containing 24 elements that hold the prediction of PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real 'big' data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased the correlation coefficient (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hours of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
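For illustration, a minimal network matching the input/output contract described above (hourly sensor tensors in, a 24-element PM10 forecast out) could look like the following Keras sketch. The layer choices, channel count, and sizes are assumptions and do not reproduce any of the 1000+ models actually tested.

```python
import tensorflow as tf

# Assumed shape: 24 past hours × 6 channels (PM2.5, PM10, temperature,
# wind, and the two external forecasts); output: 24 hourly PM10 values.
n_hours, n_channels = 24, 6

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu",
                           input_shape=(n_hours, n_channels)),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(24),  # PM10 forecast for the next 24 hours
])
model.compile(optimizer="adam", loss="mse")  # mean square error criterion
model.summary()
```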
Procedia PDF Downloads 149
129 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump
Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani
Abstract:
The side channel pump has distinctive hydraulic performance characteristics compared to other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization and application are soaring in the petrochemical, food processing, automotive and aerospace fuel pumping fields, where high heads are required at low flows. The side channel pump is characterized by unstable flow: after fluid flows into the impeller passage, it moves into the side channel, comes back to the impeller again, and then moves on to the next circulation. Consequently, the flow leaves the side channel pump following a helical path. However, the pressure fluctuation exhibited in the flow greatly contributes to the unwanted noise and vibration associated with it. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain the pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unstable flow dynamics was carefully investigated under three working conditions: 0.8QBEP, 1.0QBEP and 1.2QBEP. The results showed that the pressure fluctuation distribution around the pressure side of the blade is greater than that on the suction side at the impeller and side channel interface (z=0) for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, which causes an intense exchange flow between the impeller and the side channel. Time and frequency domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations. It was observed from the time-domain analysis that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low pressure at the inflow from the high pressure at the outflow. The pressure fluctuation amplitudes in the frequency domain spectrum at the different monitoring points depicted a gently decreasing trend, which was common among the operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz and continued at integer multiples of the rotating shaft frequency. Also, the mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows. Operating conditions 0.8QBEP and 1.0QBEP depicted fewer and similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared to 1.0QBEP. It can be concluded that for industrial applications where high heads are mostly required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump
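A common way to quantify the pressure fluctuation intensity discussed above is to normalize the standard deviation of the monitored pressure signal by a reference dynamic pressure. The following Python sketch shows this conventional measure; the paper's exact definition may differ, and the density, tip speed, and signal here are assumptions.

```python
import numpy as np

def pressure_fluctuation_coefficient(p: np.ndarray, rho: float,
                                     u_ref: float) -> float:
    """Standard deviation of the pressure signal at a monitoring point,
    normalized by the reference dynamic pressure 0.5*rho*u_ref² — a
    common intensity measure in pump studies (definition assumed)."""
    return float(np.std(p - np.mean(p)) / (0.5 * rho * u_ref ** 2))

# Synthetic monitoring-point signal at the 600 Hz excitation frequency
fs = 20_000                      # sampling rate in Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
p = 2_000.0 * np.sin(2 * np.pi * 600 * t) + 100.0 * np.random.randn(t.size)

print(pressure_fluctuation_coefficient(p, rho=998.0, u_ref=30.0))
```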
Procedia PDF Downloads 136
128 Portable Environmental Parameter Monitor Based on STM32
Authors: Liang Zhao, Chongquan Zhong
Abstract:
Introduction: According to statistics, people spend 80% to 90% of their time indoors, so indoor air quality, whether at home or in the office, greatly impacts quality of life, health and work efficiency. Therefore, indoor air quality is very important to human activities. With the acceleration of urbanization, people are spending more time indoors; the time spent in indoor environments, the size of living spaces, and the frequency of interior decoration are all increasing. However, housing decoration materials contain formaldehyde and other harmful substances, causing environmental and air quality problems, which have brought serious damage to countless families and attracted growing attention. According to World Health Organization statistics, the indoor environments of more than 30% of buildings in China are polluted by poisonous and harmful gases. Indoor pollution has caused various health problems, and these widespread public health problems can lead to respiratory diseases. Long-term inhalation of low-concentration formaldehyde can cause persistent headache, insomnia, weakness, palpitation, weight loss and vomiting, seriously impacting human health and safety. On the other hand, as for offices, some surveys show that good indoor air quality helps motivate staff and improves work efficiency by 2%-16%. Therefore, people need to further understand their living and working environments. There is a need for easy-to-use indoor environment monitoring instruments, with which users only have to power up and monitor the environmental parameters. The corresponding real-time data can be displayed on the screen for analysis. Environment monitoring should provide a sensitive alarm function and raise an alarm when harmful gases such as formaldehyde, CO, or SO2 exceed levels safe for the human body. System design: According to the monitoring requirements for various gases, temperature and humidity, we designed a portable, light, real-time and accurate monitor for various environmental parameters, including temperature, humidity, formaldehyde, methane, and CO. This monitor generates an alarm signal when a target parameter exceeds the standard. It can conveniently measure a variety of harmful gases and provides an alarm function. It also has the advantages of small size and convenience to carry and use. It has a real-time display function, outputting the parameters on an LCD screen, and a real-time alarm function. Conclusions: This study is focused on the research and development of a portable parameter monitoring instrument for indoor environments. On the platform of an STM32 development board, the monitored data are collected through external sensors. The STM32 platform handles the data acquisition and processing procedures and successfully monitors real-time temperature, humidity, formaldehyde, CO, methane and other environmental parameters. Real-time data are displayed on the LCD screen. The system is stable and can be used in different indoor places such as homes, hospitals, and offices. Meanwhile, the system adopts a modular design and is easy to port; with slight modifications, the scheme can be reused in similar monitoring systems. This monitor has very high research and application value.
Keywords: indoor air quality, gas concentration detection, embedded system, sensor
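As a platform-agnostic sketch of the alarm logic described above (the actual device runs firmware on the STM32), the following Python fragment shows threshold-based alarm checking; the thresholds and parameter names are illustrative assumptions, not the device's calibration.

```python
# Illustrative alarm logic only; thresholds are assumed example limits,
# not the monitor's calibrated standards.
THRESHOLDS = {
    "formaldehyde_mg_m3": 0.10,
    "co_ppm": 30.0,
    "methane_pct_lel": 10.0,
}

def check_alarms(readings: dict) -> list:
    """Return the parameters whose current reading exceeds its limit."""
    return [name for name, limit in THRESHOLDS.items()
            if readings.get(name, 0.0) > limit]

print(check_alarms({"formaldehyde_mg_m3": 0.15, "co_ppm": 4.0}))
# -> ['formaldehyde_mg_m3']  (would trigger the alarm signal)
```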
Procedia PDF Downloads 255
127 Multi-Scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover and of land productivity, as well as sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cords based on the numerical processing of PlanetScope PSB.SD sensor images acquired on September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barchans). It is important to note that each auxiliary data layer contributed to improving the segmentation at different scales. The silted areas were classified using a nearest neighbor approach over the Naâma area using the imagery. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high-resolution images for mapping silted and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
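One of the auxiliary layers named above, NDVI, is computed per pixel as (NIR − Red) / (NIR + Red). A minimal Python sketch follows; the band arrays, their order, and value ranges are assumed stand-ins, not the PlanetScope product layout.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, one of the auxiliary
    layers supporting the multi-scale segmentation."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # avoid divide-by-zero

# Example with synthetic reflectance tiles (band order assumed)
nir_band = np.random.randint(0, 10_000, (256, 256))
red_band = np.random.randint(0, 10_000, (256, 256))
print(ndvi(nir_band, red_band).mean())
```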
Procedia PDF Downloads 109
126 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing
Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto
Abstract:
In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive methods. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus under normal and injured conditions is carried out using FE analyses. First, an FE model of the human knee joint in the normal - 'intact' - state was constructed using magnetic resonance (MR) tomography images and the image construction code Materialise Mimics. Next, two types of meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus, respectively. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its occurrence point varied between the intact model and the two meniscal tear models. These compressive stress values can be used to establish the threshold value causing pathological change, for diagnostic purposes. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model, which consists of the femur, tibia, articular cartilage and meniscus, was constructed based on MR images of the human knee joint, using the image processing code Materialise Mimics and tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other, and higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate meniscal damage to the articular cartilage through mechanical functional assessment.
Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration
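For reference, a standard Prony-series form of the generalized Kelvin relaxation behavior adopted above is sketched below; the paper's exact parametrization and material constants may differ.

```latex
G(t) \;=\; G_\infty \;+\; \sum_{i=1}^{n} G_i \, e^{-t/\tau_i},
\qquad
\sigma(t) \;=\; \int_{0}^{t} G(t-s)\,\frac{\mathrm{d}\varepsilon}{\mathrm{d}s}\,\mathrm{d}s
```

Here the relaxation modulus G(t) decays from its instantaneous value toward the long-term modulus G∞ over the relaxation times τᵢ, and the stress follows from the hereditary (convolution) integral over the strain history; the Gᵢ and τᵢ pairs are the quantities identified by curve fitting to the compressive and tensile test data.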
Procedia PDF Downloads 246
125 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives from which the witness can hopefully identify a suspect. This reduces subjectivity in decision making both by the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), augmented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute force approach. In addition to the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). A high percentile output from this performance metric should help demonstrate the accuracy, in the hope of proving that the proposed approach can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information gathering.
Keywords: RNN, GAN, NLP, facial composition, criminal investigation
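The two evaluation factors named above can be computed with standard library routines. The following Python sketch scores a generated image against its ground truth using scikit-image; the random arrays stand in for real images.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-in arrays for a ground-truth photo and a generated composite
truth = np.random.rand(128, 128).astype(np.float64)
generated = np.clip(truth + 0.05 * np.random.randn(128, 128), 0.0, 1.0)

# The two factors used for performance evaluation in the abstract
psnr = peak_signal_noise_ratio(truth, generated, data_range=1.0)
ssim = structural_similarity(truth, generated, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```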
Procedia PDF Downloads 161
124 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and have thus seen limited downstream application, as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model - 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, from which key markers of poverty and slums - roofing and road quality - are discernible. It is important to note, however, that the human readers did not receive any training before the ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship - eXplainable Artificial Intelligence - through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
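The rank correlation comparison described above can be reproduced in outline with a standard Spearman computation. In this Python sketch the data are synthetic stand-ins, not the Tanzanian survey values.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-ins: ground-truth wealth quintiles for 608 clusters
# and noisy model predictions (not the study's actual data)
rng = np.random.default_rng(0)
survey_quintiles = rng.integers(1, 6, size=608)
model_scores = survey_quintiles + rng.normal(scale=1.0, size=608)

# Rank correlation between predictions and survey ground truth,
# the comparison metric reported for both readers and model
rho, p_value = spearmanr(model_scores, survey_quintiles)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.1e})")
```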
Procedia PDF Downloads 105
123 When Ideological Intervention Backfires: The Case of the Iranian Clerical System’s Intervention in the Pandemic-Era Elementary Education
Authors: Hasti Ebrahimi
Abstract:
This study sheds light on the challenges and difficulties caused by the Iranian clerical system's intervention in the country's school education during the COVID-19 pandemic, when schools remained closed for almost two years. The pandemic brought Iranian elementary school education to a standstill for almost six months before the country developed a nationwide learning platform - a customized television network. While the initiative seemed to have been welcomed by the majority of Iranian parents, it was resented by some of the more traditional strata of society, including the influential Friday Prayer Leaders, who found the televised version of elementary education 'less spiritual' and more 'material' or science-based. That prompted the Iranian Channel of Education, the specialized television network that had been chosen to serve as a nationally televised school during the pandemic, to try to redefine much of its online elementary school educational content within the religious ideology of the Islamic Republic of Iran. As a result, young clerics appeared on the television screen as preachers of Islamic morality, religious themes, and even sociology, history, and arts. The present research delves into the consequences of such an intervention, how it might have impacted the infrastructure of Iranian elementary education, and whether or not the new ideology-infused curricula would withstand the opposition of students and mainstream teachers. The main methodology used in this study is Critical Discourse Analysis with a cognitive approach. It systematically finds and analyzes the alternative ideological structures of discourse in the Iranian Channel of Education from September 2021 to July 2022, when clergy 'teachers' replaced 'regular' history and arts teachers on the television screen for the first time. It aims to assess how the various uses of the alternative ideological discourse in elementary school content have influenced the processes of learning: the acquisition of knowledge, beliefs, opinions, attitudes, abilities, and other cognitive and emotional changes, which are the goals of institutional education. This study has been an effort to understand and perhaps clarify the relationships between the traditional textual structures and processing on the one hand and the socio-cultural contexts created by the clergy teachers on the other. This analysis shows how the clerical portion of elementary education on the Channel of Education, which seemed to have dominated the entire televised teaching and learning process, faded away as the pandemic was contained and mainstream classes were restored. It nevertheless reflects the deep ideological rifts between the clerical approach to school education and the mainstream teaching process in Iranian schools. The semantic macrostructures of social content in current Iranian elementary school education, this study suggests, have remained intact despite the temporary ideological intervention of the ruling clerical elite in their formulation and presentation. Finally, using thematic and schematic frameworks, the essay suggests that the 'clerical' social content taught on the Channel of Education during the pandemic cannot have been accepted cognitively by the channel's target audience, including students and mainstream teachers.
Keywords: televised elementary school learning, Covid 19, critical discourse analysis, Iranian clerical ideology
Procedia PDF Downloads 54
122 Biomimicked Nano-Structured Coating Elaboration by Soft Chemistry Route for Self-Cleaning and Antibacterial Uses
Authors: Elodie Niemiec, Philippe Champagne, Jean-Francois Blach, Philippe Moreau, Anthony Thuault, Arnaud Tricoteaux
Abstract:
Hygiene of equipment in contact with users is an important issue in the railroad industry. The numerous cleanings required to eliminate bacteria and dirt are costly. Besides, mechanical stresses on contact parts are observed daily. It would therefore be of interest to develop a self-cleaning and antibacterial coating with sufficient adhesion and good resistance against mechanical and chemical stresses. Thus, a Ph.D. thesis co-financed by the Hauts-de-France region and the Maubeuge Val-de-Sambre conurbation authority has been under way since October 2017, based on earlier studies carried out by the Laboratory of Ceramic Materials and Processing. To accomplish this task, a soft chemistry route has been implemented to confer a lotus effect on metallic substrates. It involves the synthesis of nanometric zinc oxide in the liquid phase below 100 °C. The originality here consists in varying the surface texturing by modifying the synthesis time of the species in solution, which helps to adjust wettability. Nanostructured zinc oxide has been chosen because of its inherent photocatalytic effect, which can activate the degradation of organic substances. The tested substrates are made of stainless steel, in keeping with transport uses. Substrate preparation was the first step of the protocol: a meticulous cleaning of the samples is applied. The main goal of the elaboration protocol is to fix enough zinc-based seeds to make them grow during the next step in the desired (nanorod) shape. To improve this adhesion, a silica gel has been formulated and optimized to ensure chemical bonding between the substrate and the zinc seeds. The last step consists of depositing a long-carbon-chain organosilane to improve the superhydrophobic property of the coating. The quasi-proportionality between the reaction time and the nanorod length will be demonstrated. Water contact angles (above 150°) and roll-off angles at different steps of the process will be presented. The antibacterial effect has been proved with Escherichia coli, Staphylococcus aureus, and Bacillus subtilis. The mortality rate is found to be four times higher than on a non-treated substrate. Photocatalytic experiments were carried out with different dye solutions in contact with treated samples under UV irradiation. Spectroscopic measurements allow the degradation times to be determined according to the zinc quantity available on the surface. The final coating obtained is therefore not a monolayer but rather a set of amorphous/crystalline/amorphous layers, which have been characterized by spectroscopic ellipsometry. We will show that the thickness of the nanostructured oxide layer depends essentially on the synthesis time set in the hydrothermal growth step. A green, easy-to-process and easy-to-control coating with self-cleaning and antibacterial properties has been synthesized, with satisfactory surface structuring.
Keywords: antibacterial, biomimetism, soft-chemistry, zinc oxide
Procedia PDF Downloads 142
121 Development of a Bead Based Fully Automated Multiplex Tool to Simultaneously Diagnose FIV, FeLV and FIP/FCoV
Authors: Andreas Latz, Daniela Heinz, Fatima Hashemi, Melek Baygül
Abstract:
Introduction: Feline leukemia virus (FeLV), feline immunodeficiency virus (FIV), and feline coronavirus (FCoV) cause serious infectious diseases affecting cats worldwide. Transmission of these viruses occurs primarily through close contact with infected cats (via saliva, nasal secretions, faeces, etc.). FeLV, FIV, and FCoV infections can occur in combination and express themselves in similar clinical symptoms. Diagnosis can therefore be challenging: symptoms are variable and often non-specific. Sick cats show very similar clinical signs: apathy, anorexia, fever, immunodeficiency syndrome, anemia, etc. The sample volume that can be collected from small companion animals for diagnostic purposes is limited. In addition, multiplex diagnosis of diseases can contribute to an easier, cheaper, and faster workflow in the lab as well as to better differential diagnosis of diseases. For this reason, we wanted to develop a new diagnostic tool that utilizes less sample volume, fewer reagents, and fewer consumables than multiple singleplex ELISA assays. Methods: The Multiplier from Dynex Technologies (USA) was used as the platform to develop a multiplex diagnostic tool for the detection of antibodies against FIV and FCoV/FIP and antigens of FeLV. The Dynex® Multiplier® is a fully automated chemiluminescence immunoassay analyzer that significantly simplifies laboratory workflow. The Multiplier's ease of use reduces pre-analytical steps by combining the power of efficiently multiplexing multiple assays with the simplicity of automated microplate processing. Plastic beads were coated with antigens for FIV and FCoV/FIP, as well as antibodies for FeLV. Feline blood samples are incubated with the beads. Readout of results is performed via chemiluminescence. Results: Bead coating was optimized for each individual antigen or capture antibody and then combined in the multiplex diagnostic tool. HRP-antibody conjugates for FIV and FCoV antibodies, as well as detection antibodies for the FeLV antigen, were adjusted and mixed. Three individual prototype batches of the assay were produced. For each disease, we analyzed 50 well-defined positive and negative samples. The results show excellent diagnostic performance for the simultaneous detection of antibodies or antigens against these feline diseases in a fully automated system. A 100% concordance with singleplex methods like ELISA or IFA was observed. Intra- and inter-assay tests showed high precision, with CV values below 10% for each individual bead. Accelerated stability testing indicates a shelf life of at least one year. Conclusion: The new tool can be used for multiplex diagnostics of the most important feline infectious diseases. Only a very small sample volume is required. Full automation results in a very convenient and fast method for diagnosing animal diseases. With its large specimen capacity to process over 576 samples per 8-hour shift and provide up to 3,456 results, very high laboratory productivity and reagent savings can be achieved.
Keywords: multiplex, FIV, FeLV, FCoV, FIP
Procedia PDF Downloads 104
120 Cultural Competence in Palliative Care
Authors: Mariia Karizhenskaia, Tanvi Nandani, Ali Tafazoli Moghadam
Abstract:
Hospice palliative care (HPC) is one of the most complicated philosophies of care, in which the physical, social/cultural, and spiritual aspects of human life are intermingled, each playing an undeniably significant role. Among these dimensions of care, culture holds an outstanding position in the process and goal determination of HPC. This study shows the importance of cultural elements in the establishment of effective and optimized structures of HPC in the Canadian healthcare environment. Our systematic search included Medline, Google Scholar, and the St. Lawrence College Library, considering original, peer-reviewed research papers published from 1998 to 2023, to identify recent national literature connecting culture and palliative care delivery. The feature most frequently presented among the articles is the role of culture in the efficiency of HPC. It has been shown frequently that including the culture-specific parameters of each nation in this system of care is vital for its success. On the other hand, ignorance of the particular cultural trends of a specific location has been accompanied by significant failure rates. Accordingly, implementing a culturally adaptable approach is mandatory for multicultural societies. A further outcome of research studies in this field underscores the importance of culture-oriented education for healthcare staff, so that all practitioners involved in HPC recognize the importance of traditions, religions, and social habits in processing the care requirements. Cultural competency training is a telling example of the establishment of this strategy in health care that has come to the aid of HPC in recent years. Another complexity of culturally informed HPC nowadays is the long-standing issue of racialization. Systematic and subconscious deprivation of minorities has always been an obstacle to advanced levels of care. The last part of the constellation of our research outcomes comprises the ethical considerations of culturally driven HPC. This part is the most sophisticated aspect of our topic because almost all the analyses, arguments, and justifications are subjective. While there was no standard measure for ethical elements in clinical studies with palliative interventions, many research teams endorsed applying ethical principles to all the involved patients. Notably, interpretations and projections of ethics differ across cultural backgrounds. Therefore, healthcare providers should always be aware of the most respectful methodologies of HPC on a case-by-case basis. Cultural training programs have been utilized as one of the main tactics for improving the ability of healthcare providers to address the cultural needs and preferences of diverse patients and families. In this way, most of the involved healthcare practitioners will be equipped with cultural competence. Consideration of the ethical and racial specifics of the clients of this service will boost the effectiveness and fruitfulness of HPC. Canadian society is a colorful compilation of multiple nationalities; accordingly, healthcare clients are diverse, and this diversity is also reflected in HPC patients. This fact justifies the importance of studying all the cultural aspects of HPC to provide optimal care in this enormous land.
Keywords: cultural competence, end-of-life care, hospice, palliative care
Procedia PDF Downloads 74
119 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Typical applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a foundational medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI's development, spurred by a confluence of factors including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI's versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources; Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content, such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing drugs and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, operational, concerns its relationship with the law. Both set up models of social behavior, but they differ in scope and nature. The juridical analysis is grounded in a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step toward the definition of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or different nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI. Keywords: artificial intelligence, ethics & human rights issues, laws, international laws
Procedia PDF Downloads 94
118 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved post-sticking. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and machine learning (ML) algorithms to predict drilling events in real-time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at the same frequency as the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for the field, in real-time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects relevant features from the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incidents in real-time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data as a live stream, mimicking actual drilling conditions, from an onshore oil field. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This detection accuracy could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of stuck pipe problems requires a method that captures geological, geophysical, and drilling data and recognizes the indicators of this issue at field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event. Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
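As a rough illustration of the real-time step described above, the sketch below correlates a live window of surface-drilling channels against a pre-determined field signature and raises an alert past a user-defined threshold. It is a simplification under stated assumptions: a plain per-channel Pearson correlation stands in for the paper's CFS algorithm, and the window size, channel count, and threshold are invented.

```python
import numpy as np

def correlate_live_window(live: np.ndarray, signature: np.ndarray,
                          threshold: float = 0.8) -> bool:
    """Compare a live drilling-data window with a pre-determined signature.

    live, signature: 2D arrays (samples x channels) aggregated at the same
    frequency, e.g. hookload, torque, standpipe pressure per depth step.
    Returns True if the correlation-based probability exceeds the threshold.
    """
    probs = []
    for ch in range(signature.shape[1]):
        # Pearson correlation per channel, averaged into a probability proxy
        r = np.corrcoef(live[:, ch], signature[:, ch])[0, 1]
        probs.append(max(r, 0.0))   # ignore anti-correlated channels
    probability = float(np.mean(probs))
    return probability >= threshold

# Illustrative use with synthetic data standing in for a WITSML feed
rng = np.random.default_rng(0)
sig = rng.normal(size=(120, 3))                 # pre-determined signature
live = sig + 0.2 * rng.normal(size=sig.shape)   # live stream resembling it
print("stuck-pipe alert:", correlate_live_window(live, sig))
```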
Procedia PDF Downloads 228
117 Effectiveness of Participatory Ergonomic Education on Pain Due to Work Related Musculoskeletal Disorders in Food Processing Industrial Workers
Authors: Salima Bijapuri, Shweta Bhatbolan, Sejalben Patel
Abstract:
Ergonomics concerns the fitting of the environment and the equipment to the worker. Ergonomic principles can be employed in different dimensions of the industrial sector. Participation of all stakeholders is the key to formulating a multifaceted and comprehensive approach to lessen the burden of occupational hazards. Taking responsibility for one's own work activities, by acquiring sufficient knowledge and the potential to influence practices and outcomes, is the basis of participatory ergonomics and even hastens the identification of workplace hazards. The study aimed to examine how effective participatory ergonomics can be in the management of work-related musculoskeletal disorders. Method: A mega kitchen was identified in a twin city of Karnataka, India. Consent was obtained, and workers were screened using observational methods. Kitchen work was structured to include different tasks: preparing, cooking, distributing, and serving food; packing food to be delivered to schools; dishwashing; cleaning and maintenance of the kitchen and equipment; and receiving and storing raw material. A total of 100 workers attended the education session on participatory ergonomics and its role in implementing correct ergonomic practices, thus preventing WRMSDs. Demographic details and baseline data on related musculoskeletal pain and discomfort were collected pre- and post-study using the Nordic pain questionnaire and VAS score. Monthly visits were made, and the education sessions were reiterated on each visit, thus reminding, correcting, and problem-solving with each worker. After 9 months, with a total of 4 such education sessions, the post-education data were collected. The software SPSS 20 was used to analyse the collected data. Results: The majority of the workers (78%) participated in the intervention workshops, which were arranged four times depending on availability and feasibility. The average age of the participants was 39 years. Female participants comprised 79.49% and males 20.51%. The Nordic Musculoskeletal Questionnaire (NMQ) showed that knee pain was the most commonly reported complaint (62%) over the preceding 12 months, with a mean VAS of 6.27, followed by low back pain. Post-intervention, the mean VAS score was reduced significantly to 2.38. The comparison of pre-post scores was made using the Wilcoxon matched-pairs test. Upon enquiry, it was found that the participants had learned the importance of applying ergonomics at their workplace, which in turn enabled them to handle problems arising at their workplace on their own, with self-confidence. Conclusion: Participatory ergonomics proved effective with the workers of the mega kitchen, and it is a feasible and practical approach. The advantage of the study area was that it already had a sophisticated and ergonomically designed workstation; it was the lack of education and practical knowledge in using these stations that most needed addressing. There was a significant reduction in VAS scores with the implementation of changes in working style, and the knowledge of ergonomics helped decrease physical load and improve musculoskeletal health. Keywords: ergonomic awareness session, mega kitchen, participatory ergonomics, work related musculoskeletal disorders
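The pre/post comparison above uses the Wilcoxon matched-pairs (signed-rank) test; a minimal sketch with fabricated paired VAS scores, since the study's raw data are not reproduced here:

```python
from scipy import stats

# Hypothetical paired VAS scores (0-10) for the same workers pre/post education
vas_pre  = [7, 6, 8, 5, 7, 6, 9, 6, 7, 8]
vas_post = [3, 2, 4, 2, 3, 2, 4, 1, 3, 3]

stat, p = stats.wilcoxon(vas_pre, vas_post)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")  # p < 0.05 -> significant reduction
```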
Procedia PDF Downloads 138
116 Smart Services for Easy and Retrofittable Machine Data Collection
Authors: Till Gramberg, Erwin Gross, Christoph Birenbaum
Abstract:
This paper presents the approach of the Easy2IoT research project. Easy2IoT aims to enable companies in the sheet metal prefabrication and processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. It focuses on the development of physical hardware and software to easily capture machine activities on a sawing machine, benefiting various stakeholders in the SME value chain, including machine operators, tool manufacturers and service providers. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements and potential solutions for smart services are derived. The focus is on providing actionable recommendations, competencies and easy integration through no-/low-code applications to facilitate implementation and connectivity within production networks. At the core of the project is a novel, non-invasive measurement and analysis system that can be easily deployed and made IIoT-ready. This system collects machine data without interfering with the machines themselves, by non-invasively measuring the tension on a sawing machine. The collected data is then connected and analyzed using artificial intelligence (AI) to provide smart services through a platform-based application. Three smart services are being developed within Easy2IoT to provide immediate benefits to users. First, wear-part and material condition monitoring with predictive maintenance for sawing processes: the non-invasive measurement system enables the monitoring of tool wear, such as saw blades, and the quality of consumables and materials, so that service providers and machine operators can optimize maintenance and reduce downtime and material waste. Second, optimization of Overall Equipment Effectiveness (OEE) through machine activity monitoring: the non-invasive system tracks machining times, setup times and downtime to identify opportunities for OEE improvement and reduce unplanned machine downtime. Third, CO2 emission estimates for connected machines: CO2 emissions are calculated for the entire life of the machine and for individual production steps based on captured power consumption data, supporting energy management and product development decisions. The key to Easy2IoT is its modular and easy-to-use design. The non-invasive measurement system is universally applicable and does not require specialized knowledge to install. The platform application allows easy integration of various smart services and provides a self-service portal for activation and management. Innovative business models will also be developed to promote the sustainable use of the collected machine activity data. The project addresses the digitalization gap between large enterprises and SMEs: Easy2IoT provides SMEs with a concrete toolkit for IIoT adoption, facilitating the digital transformation of smaller companies, e.g. through retrofitting of existing machines. Keywords: smart services, IIoT, IIoT-platform, industrie 4.0, big data
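As a rough sketch of the second and third services: OEE is conventionally the product of availability, performance, and quality, and the CO2 estimate is measured energy consumption times a grid emission factor. All figures below are invented placeholders, not project data.

```python
def oee(planned_min, downtime_min, ideal_cycle_s, parts_made, parts_good):
    """Overall Equipment Effectiveness = availability x performance x quality."""
    run_min = planned_min - downtime_min
    availability = run_min / planned_min
    performance = (ideal_cycle_s * parts_made) / (run_min * 60)
    quality = parts_good / parts_made
    return availability * performance * quality

# Hypothetical sawing-shift data derived from non-invasive activity monitoring
print(f"OEE = {oee(480, 60, 30, 700, 680):.1%}")

# CO2 estimate from captured power consumption (grid factor is illustrative)
energy_kwh = 35.0    # measured for one production step
grid_factor = 0.4    # kg CO2 per kWh, assumed grid mix
print(f"CO2 = {energy_kwh * grid_factor:.1f} kg")
```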
Procedia PDF Downloads 73
115 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes
Authors: Igor A. Krichtafovitch
Abstract:
The evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of the evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as a natural origin and development of life, is a reality; evolution is a coordinated and controlled process; one of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence; the intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth; and information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or global memory storage volume reaches its limit, and b) biosphere computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of the current state of evolutionary theory, such as speciation as a result of GM purposeful design, the evolutionary development vector as a need for growing global intelligence, punctuated equilibrium occurring when the two conditions a) and b) above are met, the Cambrian explosion, and mass extinctions occurring when more intelligent species replace outdated creatures. Keywords: supercomputer, biological evolution, Darwinism, speciation
Procedia PDF Downloads 164
114 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems
Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo
Abstract:
Solar energy is one of the renewable choices for reducing the CO2 emissions produced by conventional power plants in modern society. As an island frequently visited by strong typhoons and earthquakes, Taiwan faces an urgent need to revise its local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for wind-resistant design of structures does not give a clear explanation for photovoltaic systems, especially when the systems are arranged in arrayed format. Furthermore, when an arrayed photovoltaic system is mounted on a rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shape arrayed photovoltaic system is mounted on the ground of the wind tunnel and then mounted on the building rooftop. The system consists of 60 PV panel models. Each panel model is equivalent to a full size of 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of each panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst case with respect to wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressure, and at least 20 samples are recorded for good ensemble-average stability. Each sample is equivalent to a 10-minute record in full scale. All scale factors, including time scale, length scale, and velocity scale, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The purpose of the L-shape arrayed system is to understand the pressure characteristics at the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. The commonly utilized Cook-and-Mayne coefficient, 78%, is set as the target non-exceedance probability for design pressure coefficients under the Gumbel distribution. The best linear unbiased estimator method is utilized for Gumbel parameter identification, and careful time-moving averaging is applied in data processing. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels experiences stronger positive pressure than when mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop mostly experiences negative pressures; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also have different pressure patterns, which corresponds well to the provisions in ASCE 7-16 describing area division for design values. Several minor observations emerge from the parametric studies, such as rooftop edge effects, parapet effects, building aspect effects, row interval effects, and so on. General comments are then made for the proposal of regulation revisions to the Taiwanese code. Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic
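To illustrate the extreme-value step: if peak pressure coefficients follow a Gumbel distribution F(x) = exp(-exp(-(x - mu)/beta)), the Cook-and-Mayne design value is the 78% fractile, x = mu - beta*ln(-ln 0.78). The sketch below fits mu and beta by the method of moments on invented peaks; note that the study itself uses the best linear unbiased estimator instead.

```python
import numpy as np

# Hypothetical peak net-pressure coefficients from 20 ten-minute samples
peaks = np.array([1.8, 2.1, 1.9, 2.3, 2.0, 1.7, 2.2, 2.4, 1.9, 2.1,
                  2.0, 1.8, 2.5, 2.2, 1.9, 2.0, 2.3, 1.8, 2.1, 2.2])

# Method-of-moments Gumbel fit (the study uses the BLUE estimator)
beta = peaks.std(ddof=1) * np.sqrt(6) / np.pi   # scale parameter
mu = peaks.mean() - 0.5772 * beta               # location (Euler-Mascheroni)

# Cook-and-Mayne design value: 78% non-exceedance fractile of the Gumbel CDF
cp_design = mu - beta * np.log(-np.log(0.78))
print(f"design pressure coefficient = {cp_design:.2f}")
```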
Procedia PDF Downloads 138
113 Comparing Radiographic Detection of Simulated Syndesmosis Instability Using Standard 2D Fluoroscopy Versus 3D Cone-Beam Computed Tomography
Authors: Diane Ghanem, Arjun Gupta, Rohan Vijayan, Ali Uneri, Babar Shafiq
Abstract:
Introduction: Ankle sprains and fractures often result in syndesmosis injuries. Unstable syndesmotic injuries result from relative motion between the distal ends of the tibia and fibula, an anatomic juncture that should otherwise be rigid, and warrant operative management. Clinical and radiological evaluation of intraoperative syndesmosis stability remains a challenging task, as traditional 2D fluoroscopy is limited to uniplanar translational displacement. The purpose of this pilot cadaveric study is to compare 2D fluoroscopy and 3D cone-beam computed tomography (CBCT) stress-induced syndesmosis displacements. Methods: Three fresh-frozen lower legs underwent 2D fluoroscopy and 3D CIOS CBCT to measure syndesmosis position before dissection. Syndesmotic injury was simulated by resecting (1) the anterior inferior tibiofibular ligament (AITFL), (2) the posterior inferior tibiofibular ligament (PITFL) and the inferior transverse ligament (ITL) simultaneously, followed by (3) the interosseous membrane (IOM). Manual external rotation and the Cotton stress test were performed after each of the three resections, and 2D and 3D images were acquired. Relevant 2D and 3D parameters included the tibiofibular overlap (TFO), tibiofibular clear space (TCS), relative rotation of the fibula, and anterior-posterior (AP) and medial-lateral (ML) translations of the fibula relative to the tibia. Parameters were measured by two independent observers. Inter-rater reliability was assessed by the intraclass correlation coefficient (ICC) to determine measurement precision. Results: Significant mismatches were found between the trends of the 2D and 3D measurements when assessing TFO, TCS and AP translation across the different resection states. Using 3D CBCT, TFO was inversely proportional to the number of resected ligaments, while TCS was directly proportional to it, across all cadavers and 'resection + stress' states. Using 2D fluoroscopy, this trend was not respected under the Cotton stress test. 3D AP translation did not show a reliable trend, whereas 2D AP translation of the fibula was positive under the Cotton stress test and negative under external rotation. 3D relative rotation of the fibula, assessed using the Tang et al. ratio method and the Beisemann et al. angular method, suggested slight overall internal rotation with complete resection of the ligaments, with a change < 2 mm, the threshold corresponding to the buffer commonly used to account for physiologic laxity per the clinical judgment of the surgeon. Excellent agreement (>0.90) was found between the two independent observers for each of the parameters in both 2D and 3D (overall ICC 0.9968, 95% CI 0.995 - 0.999). Conclusions: 3D CIOS CBCT appears to reliably depict the trends in TFO and TCS. This might be due to the additional detection of relevant rotational malpositions of the fibula in comparison to standard 2D fluoroscopy, which is limited to single-plane translation. A better understanding of 3D imaging may help surgeons identify the precise measurement planes needed to achieve better syndesmosis repair. Keywords: 2D fluoroscopy, 3D computed tomography, image processing, syndesmosis injury
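For reference, the inter-rater agreement quoted above is an intraclass correlation; below is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement, per Shrout and Fleiss) on invented paired observer readings, assuming that variant of the ICC was used:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) per Shrout & Fleiss (1979).

    ratings: (n targets x k raters) matrix of measurements.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    # Two-way ANOVA sums of squares: targets (rows), raters (cols), residual
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical TFO measurements (mm) by two observers on six specimens
obs = np.array([[8.1, 8.0], [6.4, 6.5], [9.2, 9.1],
                [5.8, 5.9], [7.3, 7.3], [8.8, 8.7]])
print(f"ICC(2,1) = {icc_2_1(obs):.3f}")
```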
Procedia PDF Downloads 70
112 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic
Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink
Abstract:
Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge's vibrations as realistically as possible; on the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure's vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacity and precise knowledge of various vehicle properties. The European standards allow for applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations: an additional damping factor depending on the bridge span, which should take into account the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations. However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured bridge accelerations, while in other cases they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure, considering the vehicle-bridge interaction when using the MLM, are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two additional, recently published alternative formulations derived from analytical approaches. For a bridge catalogue of 65 existing bridges in Austria in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow an assessment of the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications.
The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results can reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than by using the normative specifications. Keywords: additional damping method, bridge dynamics, high-speed railway traffic, vehicle-bridge interaction
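To make the moving load model concrete: for a simply supported beam, an axle load P crossing at speed v drives mode n with the modal force P*sin(n*pi*v*t/L) while the axle is on the span, and the response superposes the damped modal oscillators. A minimal single-axle sketch follows; the beam and train parameters are invented for illustration and are not taken from the studied bridge catalogue.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative simply supported bridge (not from the studied catalogue)
L, EI, m, zeta = 30.0, 8.0e10, 15000.0, 0.01   # m, N m^2, kg/m, damping ratio
P, v = 170e3, 250 / 3.6                        # axle load (N), speed (m/s)
modes = 3

def rhs(t, y):
    """Damped modal oscillators driven by one moving axle (MLM)."""
    dy = np.zeros_like(y)
    for n in range(1, modes + 1):
        q, qd = y[2*(n-1)], y[2*(n-1) + 1]
        wn = (n * np.pi / L) ** 2 * np.sqrt(EI / m)   # natural frequency
        # Modal force acts only while the axle is on the span (0 <= v t <= L)
        f = P * np.sin(n * np.pi * v * t / L) if 0 <= v * t <= L else 0.0
        dy[2*(n-1)] = qd
        dy[2*(n-1) + 1] = f / (m * L / 2) - 2 * zeta * wn * qd - wn**2 * q
    return dy

t_end = 2 * L / v   # crossing time plus free vibration
sol = solve_ivp(rhs, (0, t_end), np.zeros(2 * modes), max_step=1e-3)

# Midspan deflection: superpose modes, phi_n(L/2) = sin(n pi / 2)
w_mid = sum(sol.y[2*(n-1)] * np.sin(n * np.pi / 2) for n in range(1, modes + 1))
print(f"max midspan deflection = {w_mid.max()*1000:.2f} mm")
```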
Procedia PDF Downloads 161
111 Soybean Lecithin Based Reverse Micellar Extraction of Pectinase from Synthetic Solution
Authors: Sivananth Murugesan, I. Regupathi, B. Vishwas Prabhu, Ankit Devatwal, Vishnu Sivan Pillai
Abstract:
Pectinase is an important enzyme with a wide range of applications, including textile processing and bioscouring of cotton fibers, coffee and tea fermentation, purification of plant viruses, and oil extraction. Selective separation and purification of pectinase from fermentation broth, and recovery of the enzyme from process streams for reuse, are cost-intensive steps in most enzyme-based industries. It is difficult to identify a suitable medium that enhances enzyme activity while retaining the enzyme's characteristics during such processes. The cost-effective, selective separation of enzymes through modified liquid-liquid extraction is of current research interest worldwide. Reverse micellar extraction, a globally acclaimed liquid-liquid extraction technique, is well known for the separation and purification of solutes from feed streams and offers high solute specificity and partitioning, ease of operation, and recycling of the extractants used. Surfactant concentrations above the critical micelle concentration in an apolar solvent form micelles, and the addition of the micellar phase to water in turn forms reverse micelles, or water-in-oil emulsions. Electrostatic interaction plays a major role in the separation/purification of solutes using reverse micelles; the interaction parameters can be altered by changing the pH or by adding cosolvents, surfactants, electrolytes, and non-electrolytes. Even though many chemical-based commercial surfactants have been utilized for this purpose, biosurfactants are more suitable for the purification of enzymes used in food applications. The present work focused on the partitioning of pectinase from a synthetic aqueous solution into the reverse micelle phase formed by a biosurfactant, soybean lecithin, dissolved in chloroform. The critical micelle concentration of the soybean lecithin/chloroform solution was identified through refractive index and density measurements. Surfactant concentrations above and below the critical micelle concentration were considered to study their effect on enzyme activity and enzyme partitioning within the reverse micelle phase. The effect of pH and electrolyte salts on the partitioning behavior was studied by varying the system pH and the concentration of different salts during the forward and back extraction steps. It was observed that lower concentrations of soybean lecithin enhanced the enzyme activity within the water core of the reverse micelle while maximizing extraction efficiency. The maximum pectinase yield of 85%, with a partitioning coefficient of 5.7, was achieved at pH 4.8 during forward extraction, and an 88% yield, with a partitioning coefficient of 7.1, was observed during back extraction at pH 5.0. However, the addition of salt decreased the enzyme activity, and especially at higher salt concentrations enzyme activity declined drastically during both forward and back extraction steps. The results proved that reverse micelles formed by soybean lecithin and chloroform may be used for the extraction of pectinase from aqueous solution. Further, the reverse micelles can be considered nanoreactors to enhance enzyme activity and maximize substrate utilization at optimized conditions, paving the way for process intensification and scale-down. Keywords: pectinase, reverse micelles, soybean lecithin, selective partitioning
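The reported yields and partition coefficients are mutually consistent under the standard single-stage extraction balance E = K/(K + 1) for equal phase volumes; a quick worked check (the equal phase ratio is an assumption here, as it is not stated in the abstract):

```python
def extraction_yield(K: float, phase_ratio: float = 1.0) -> float:
    """Single-stage extraction efficiency (%) from partition coefficient K.

    K = C_micellar / C_aqueous; phase_ratio = V_micellar / V_aqueous.
    """
    return 100 * K * phase_ratio / (1 + K * phase_ratio)

print(f"forward:  K = 5.7 -> {extraction_yield(5.7):.0f}% (reported 85%)")
print(f"backward: K = 7.1 -> {extraction_yield(7.1):.0f}% (reported 88%)")
```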
Procedia PDF Downloads 372
110 The Role of Oral and Intestinal Microbiota in European Badgers
Authors: Emma J. Dale, Christina D. Buesching, Kevin R. Theis, David W. Macdonald
Abstract:
This study investigates the oral and intestinal microbiomes of wild-living European badgers (Meles meles) and will relate inter-individual differences to social contact networks, somatic and reproductive fitness, varying susceptibility to bovine tuberculosis (bTB), and olfactory advertisement. Badgers are an interesting model for this research, as they show great variation in body condition despite living in complex social networks and having access to the same resources. This variation in somatic fitness, in turn, affects breeding success, particularly in females. We postulate that microbiota have a central role to play in determining the success of an individual. Our preliminary results, characterising the microbiota of individual badgers, indicate unique compositions of microbiota communities within social groups of badgers. This basal information will inform further questions related to the extent to which microbiota influence fitness. Hitherto, the potential role of microbiota has not been considered in determining host condition, nor in other key fitness variables, namely communication and resistance to disease. Badgers deposit their faeces in communal latrines, which play an important role in olfactory communication. Odour profiles of anal and subcaudal gland secretions are highly individual-specific and encode information about group membership and fitness-relevant parameters, and their chemical composition is strongly dependent on symbiotic microbiota. As badgers sniff/lick (using their vomeronasal organ) and over-mark faecal deposits of conspecifics, these microbial communities can be expected to vary with social contact networks. This is particularly important in the context of bTB, where badgers are assumed to transmit bTB to cattle as well as to conspecifics. Interestingly, we have found that some individuals are more susceptible to bTB than others. As acquired immunity, and thus potential susceptibility to infectious diseases, is known to depend on symbiotic microbiota in other mustelids, a role for oral microbiota in particular cannot currently be ruled out as a potential explanation for inter-individual differences in bTB susceptibility in badgers. Badgers are caught three times a year in the context of a long-term population study that began in 1987. As all badgers receive an individual tattoo upon first capture, age, natal as well as previous and current social group membership, and other life-history parameters are known for all animals. Swabs (subcaudal 'scent gland', anal, genital, nose, mouth and ear) and faecal samples will be taken from all individuals and stored at -80°C until processing. Microbial samples will be processed and identified at Wayne State University's Theis (Host-Microbe Interactions) Lab, using high-throughput sequencing (16S rRNA-encoding gene amplification and sequencing). Acknowledgments: Gas chromatography/mass spectrometry analyses (in the context of olfactory communication) will be performed through an established collaboration with Dr. Veronica Tinnesand at Telemark University, Norway. Keywords: communication, energetics, fitness, free-ranging animals, immunology
Procedia PDF Downloads 187
109 Nano-Enabling Technical Carbon Fabrics to Achieve Improved Through Thickness Electrical Conductivity in Carbon Fiber Reinforced Composites
Authors: Angelos Evangelou, Katerina Loizou, Loukas Koutsokeras, Orestes Marangos, Giorgos Constantinides, Stylianos Yiatros, Katerina Sofocleous, Vasileios Drakonakis
Abstract:
Owing to their outstanding strength-to-weight properties, carbon fiber reinforced polymer (CFRP) composites have attracted significant attention, finding use in various fields (sports, automotive, transportation, etc.). The current momentum indicates an increasing demand for their employment in high-value bespoke applications, such as avionics and electronic casings, damage-sensing structures, and EMI (electromagnetic interference) shielding structures, that dictate the use of materials with increased electrical conductivity both in-plane and through the thickness. Several efforts by research groups have focused on enhancing the through-thickness electrical conductivity of FRPs, in an attempt to combine their intrinsically high relative strengths with an improved z-axis electrical response. However, only a limited number of studies deal with printing nano-enhanced polymer inks to produce a pattern at the dry-fabric level that could be used to fabricate CFRPs with improved through-thickness electrical conductivity. The present study investigates the employment of a screen-printing process on technical dry fabrics, using nano-reinforced polymer-based inks, to achieve the required through-thickness conductivity, opening new pathways for the application of fiber-reinforced composites in niche products. Commercially available inks and in-house prepared inks reinforced with electrically conductive nanoparticles are employed, printed in different patterns. The aim of the present study is to investigate both the effect of the nanoparticle concentration and the droplet patterns (diameter, inter-droplet distance and coverage), to optimize printing for the desired level of conductivity enhancement at the lamina level. The electrical conductivity is measured initially at the ink level, to pinpoint the optimum concentrations to be employed, using a "four-probe" configuration. Upon printing of the different patterns, the coverage of the dry fabric area is assessed along with the permeability of the resulting dry fabrics, in alignment with the fabrication of CFRPs, which requires adequate wetting by the epoxy matrix. Results demonstrated increased electrical conductivities of the printed droplets, with conductivity rising from the benchmark value of 0.1 S/m to between 8 and 10 S/m. Printability of dense and dispersed patterns has exhibited promising results in terms of increasing the z-axis conductivity without inhibiting the penetration of the epoxy matrix at the processing stage of fiber-reinforced composites. The high value and niche prospects of the applications that can stem from CFRPs with increased through-thickness electrical conductivity highlight the potential of the presented endeavor, signifying screen printing as the process to nano-enable z-axis electrical conductivity in composite laminas. This work was co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation (Project: ENTERPRISES/0618/0013). Keywords: CFRPs, conductivity, nano-reinforcement, screen-printing
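For context on the "four-probe" figure quoted above: for a bulk sample measured with a collinear four-point probe of tip spacing s, resistivity follows rho = 2*pi*s*V/I, and conductivity is sigma = 1/rho. A worked sketch with invented readings (thin-film geometry correction factors are deliberately ignored):

```python
import math

def four_probe_conductivity(v_volts: float, i_amps: float, s_m: float) -> float:
    """Bulk conductivity (S/m) from a collinear four-point probe measurement.

    rho = 2*pi*s * V/I for a semi-infinite sample; sigma = 1/rho.
    Thin-sample geometry corrections are omitted for brevity.
    """
    rho = 2 * math.pi * s_m * v_volts / i_amps
    return 1 / rho

# Hypothetical reading on a printed nano-reinforced droplet (1 mm tip spacing)
print(f"sigma = {four_probe_conductivity(0.5, 0.03, 1e-3):.1f} S/m")
```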
Procedia PDF Downloads 151
108 Performance of CALPUFF Dispersion Model for Investigating the Dispersion of the Pollutants Emitted from an Industrial Complex, Daura Refinery, to an Urban Area in Baghdad
Authors: Ramiz M. Shubbar, Dong In Lee, Hatem A. Gzar, Arthur S. Rood
Abstract:
Air pollution is one of the biggest environmental problems in Baghdad, Iraq. The Daura refinery, located near the center of Baghdad, represents the largest industrial area and emits enormous amounts of pollutants; studying the gaseous pollutants and particulate matter is therefore very important for the environment and for the health of the workers in the refinery and the people living in the areas around it. Some studies investigated the area in the past, but they relied on the basic Gaussian equation implemented in simple computer programs; although that work was useful and important at the time, during the last two decades new large production units were added to the Daura refinery, such as PU_3 (Power Unit 3, Boilers 11 & 12), CDU_1 (Crude Distillation Unit_70000 barrel_1), and CDU_2 (Crude Distillation Unit_70000 barrel_2). Therefore, it is necessary to use a new, advanced model to study air pollution in the region for the current years and to calculate the monthly emission rates of pollutants from the actual amounts of fuel consumed in the production units; this may lead to accurate pollutant concentration values and a faithful picture of dispersion and transport in the study area. In this study, to the best of the authors' knowledge, the CALPUFF model was used and examined for the first time in Iraq. CALPUFF, an advanced non-steady-state meteorological and air quality modeling system, was applied to investigate the concentrations of SO2, NO2, CO, and PM1-10μm in areas adjacent to the Daura refinery, which is located in the center of Baghdad, Iraq. The CALPUFF modeling system includes three main components: CALMET, a diagnostic 3-dimensional meteorological model; CALPUFF, an air quality dispersion model; and CALPOST, a post-processing package, together with an extensive set of preprocessing programs that interface the model to standard, routinely available meteorological and geophysical datasets. The targets of this work are to model and simulate the four pollutants (SO2, NO2, CO, and PM1-10μm) emitted from the Daura refinery within one year. Emission rates of these pollutants were calculated for twelve units comprising thirty plants and 35 stacks, using monthly averages of the fuel amounts consumed at these production units. The performance of the CALPUFF model in this study was assessed to determine whether it is appropriate and yields predictions of good accuracy compared with available pollutant observations. The CALPUFF model was investigated at three stability classes (stable, neutral, and unstable) to indicate the dispersion of the pollutants under different meteorological conditions. The simulations showed different dispersion behavior of these pollutants in the region depending on the stability conditions and the environment of the study area; monthly and annual averages of pollutants were used to display the dispersion of pollutants in contour maps. High pollutant values were noticed in this area; this study therefore recommends further investigation and analysis of the pollutants, reducing the emission rates of pollutants by using modern techniques and natural gas, increasing the stack heights of the units, and increasing the exit gas velocity from the stacks. Keywords: CALPUFF, Daura refinery, Iraq, pollutants
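The monthly emission-rate calculation described above reduces to fuel consumed times a pollutant-specific emission factor, converted to the average g/s rate a dispersion model expects; a minimal sketch, where the emission factor and fuel quantity are illustrative placeholders, not Daura-specific values:

```python
def emission_rate_g_per_s(fuel_kg_per_month: float,
                          ef_g_per_kg_fuel: float,
                          seconds_per_month: float = 30 * 24 * 3600) -> float:
    """Average stack emission rate (g/s) from monthly fuel consumption.

    ef_g_per_kg_fuel: grams of pollutant emitted per kg of fuel burned
    (an assumed, pollutant- and fuel-specific factor).
    """
    return fuel_kg_per_month * ef_g_per_kg_fuel / seconds_per_month

# Hypothetical unit burning 2,000 t of fuel oil per month, SO2 factor assumed
print(f"SO2 rate = {emission_rate_g_per_s(2.0e6, 40.0):.1f} g/s")
```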
Procedia PDF Downloads 198
107 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density
Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita
Abstract:
Herein, the synthesis and characterization of a highly porous chitosan-gelatin scaffold, reinforced with graphene oxide and hydroxyapatite (HAp) and crosslinked with genipin, were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability to sustain cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class crosslinker, while graphene oxide (GO) is viewed as one of the most performant and versatile fillers. The composites, with the HAp/biopolymer ratio of natural bone, were obtained by cascading sonochemical treatments, followed by uncomplicated casting methods and freeze-drying. Their structure was characterized by Fourier transform infrared spectroscopy and X-ray diffraction, while the overall morphology was investigated by scanning electron microscopy (SEM) and micro-computed tomography (µ-CT). Following that, in vitro enzyme degradation was performed to identify the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained, as the specific signals of the nanolayers are absent from both the FTIR and XRD spectra, while the specific spectral features of the polymers persisted as the GO load increased. Overall, the GO-induced material structuration, the crystallinity variations, and the chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution follows an auspicious density gradient tuned for hybrid osseous/cartilage architectures, which was mirrored in the mice-model tests. Hence, this synthesis route for a natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705, and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from the European Regional Development Fund 'Investments for your future', under project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to the European Regional Development Fund through the Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED. Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite
Procedia PDF Downloads 104