Search results for: spatial audio processing
178 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others; it is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. The study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories (animals, sports, food, landscapes, and vehicles) along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the reduction in that error during fine-tuning, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they contain features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
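As a sketch of the analysis pipeline described above (not the authors' code): `encode` and `decode` below are hypothetical stand-ins for the fine-tuned VGG-based autoencoder, plain mean-squared error stands in for the structural/perceptual losses the study compares, and Spearman rank correlation is one reasonable choice for relating each measure to the memorability scores.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

def memorability_correlates(images, scores, encode, decode):
    """images: (N, D) flattened pixel arrays; scores: (N,) memorability.
    encode/decode: hypothetical callables standing in for the autoencoder."""
    latents = np.stack([encode(x) for x in images])    # latent codes
    recons = np.stack([decode(z) for z in latents])    # reconstructions
    # Reconstruction error: mean squared difference per image.
    rec_err = ((images - recons) ** 2).mean(axis=1)
    # Distinctiveness: Euclidean distance to the nearest latent neighbour.
    d = cdist(latents, latents)
    np.fill_diagonal(d, np.inf)                        # ignore self-distance
    distinct = d.min(axis=1)
    # Rank correlations of each measure with the memorability scores.
    return (spearmanr(rec_err, scores).correlation,
            spearmanr(distinct, scores).correlation)
```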
Procedia PDF Downloads 91
177 Effects of Radiation on Mixed Convection in Power Law Fluids along Vertical Wedge Embedded in a Saturated Porous Medium under Prescribed Surface Heat Flux Condition
Authors: Qaisar Ali, Waqar A. Khan, Shafiq R. Qureshi
Abstract:
Heat transfer in power-law fluids across cylindrical surfaces has copious engineering applications. These applications comprise areas such as underwater pollution, biomedical engineering, filtration systems, chemical, petroleum, polymer and food processing, recovery of geothermal energy, crude oil extraction, pharmaceuticals, and thermal energy storage. The quantum of research work studying the effects of combined heat transfer and fluid flow across porous media under diversified conditions has increased considerably over the last few decades. The non-Newtonian fluids of most practical interest are highly viscous and are therefore often processed in the laminar flow regime. Several studies have been performed to investigate the effects of free and mixed convection in Newtonian fluids along vertical and horizontal cylinders embedded in a saturated porous medium, whereas very few analyses have been performed on power-law fluids along a wedge. In this study, boundary layer analysis under the effects of radiation-mixed convection in power-law fluids along a vertical wedge in a porous medium has been investigated using an implicit finite difference method (the Keller box method). Steady, 2-D laminar flow has been considered under a prescribed surface heat flux condition. The Darcy, Boussinesq and Rosseland approximations are assumed to be valid. Neglecting viscous dissipation effects and the radiative heat flux in the flow direction, the boundary layer equations governing mixed convection flow over a vertical wedge are transformed into dimensionless form. A single mathematical model represents the cases of a vertical wedge, cone and plate by introducing a geometry parameter. Both similar and non-similar solutions have been obtained, and results for the non-similar case have been presented and plotted. Effects of the radiation parameter, variable heat flux parameter, wedge angle parameter 'm' and mixed convection parameter have been studied for both Newtonian and non-Newtonian fluids. The results are also compared with the available data for the analysis of heat transfer in the prescribed range of parameters and found to be in good agreement. Results for the dimensionless local Nusselt number, temperature and velocity fields have also been presented for both Newtonian and non-Newtonian fluids. Analysis of the data revealed that as the radiation parameter or wedge angle is increased, the Nusselt number decreases, whereas it increases with an increase in the value of the heat flux parameter at a given value of the mixed convection parameter. Also, it is observed that as viscosity increases, the skin friction coefficient increases, which tends to reduce the velocity. Moreover, pseudoplastic fluids are more heat conductive than Newtonian and dilatant fluids. All fluids behave identically in the pure forced convection domain.
Keywords: porous medium, power law fluids, surface heat flux, vertical wedge
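For readers outside rheology, the 'power law' (Ostwald-de Waele) model underlying the analysis relates shear stress to shear rate through a consistency index K and a flow behaviour index n; this is standard background rather than an equation reproduced from the paper:

\[
  \tau = K \left( \frac{\partial u}{\partial y} \right)^{n},
  \qquad
  \begin{cases}
    n < 1 & \text{pseudoplastic (shear-thinning)} \\
    n = 1 & \text{Newtonian} \\
    n > 1 & \text{dilatant (shear-thickening)}
  \end{cases}
\]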
Procedia PDF Downloads 312
176 Ecological Relationships Between Material, Colonizing Organisms, and Resulting Performances
Authors: Chris Thurlbourne
Abstract:
Due to the continual demand for material to build with, and the limited environmental credentials of 'normal' building materials, there is a need to look at new and reconditioned material types - both biogenic and non-biogenic - and a field of research that accompanies this. This research focuses on biogenic and non-biogenic material engineering and the impact of our environment on new and reconditioned material types. In the building industry and all the industries involved in constructing our built environment, building materials can be broadly categorized into two types, those with biogenic and those with non-biogenic material properties. Both play significant roles in shaping our built environment. Regardless of their properties, all material types originate from our earth, and many are modified through processing to provide resistance to 'forces of nature', be it rain, wind, sun, gravity, or whatever the local environmental conditions throw at us. Modifications are made to offer benefits in endurance, resistance, malleability in handling (building with), and ergonomic value - in all types of building material. We assume control of all building materials through rigorous quality control specifications and regulations to ensure materials perform under specific constraints. Yet materials confront an external environment that is not controlled, with live forces undetermined, in which materials naturally act and react through weathering, patination and discoloring, promoting natural chemical reactions such as rusting. The purpose of the paper is to present recent research that explores the after-life of specific new and reconditioned biogenic and non-biogenic material types, and how an understanding of materials' natural processes of transformation when exposed to the external climate can inform initial design decisions. With qualities received in a transient and contingent manner, the ecological relationships between material, colonizing organisms and resulting performances invite opportunities for new design explorations for the benefit of both the needs of human society and the needs of our natural environment. The research follows designing for the benefit of both, engaging in both biogenic and non-biogenic material engineering whilst embracing the continual demand for colonization - human and environmental - and the aptitude of a material to be colonized by one or several groups of living organisms without necessarily undergoing any severe deterioration, instead embracing weathering, patination and discoloring while at the same time establishing new habitat. The research follows iterative prototyping processes in which knowledge has been accumulated via explorations of specific material performances, from laboratory tests to construction mock-ups, focusing on the architectural qualities embedded in the control of production techniques and on facilitating longer-term patinas of material surfaces that extend the aesthetic beyond common judgments. Experiments are therefore focused on how inherent material qualities drive a design brief toward specific investigations that explore aesthetics induced through production, patinas and colonization obtained over time while exposed to, and interacting with, external climate conditions.
Keywords: biogenic and non-biogenic, natural processes of transformation, colonization, patina
Procedia PDF Downloads 87
175 Japanese and European Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective
Authors: S. Fantin
Abstract:
This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering the dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis and policy initiatives. Recent doctrine was taken into account too, as well as empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package which includes a regulation (the General Data Protection Regulation) and a directive (Directive 680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny for this study has been the Act on the Protection of Personal Information. Based on a comparative analysis, some asymmetries arise. The main ones refer to the definition of personal information and the scope of the two frameworks. Furthermore, the rights of data subjects are differently articulated in the two regions, while the nature of sanctions takes two opposite approaches. Regarding the cybersecurity framework, the situation looks similarly misaligned. Japan's main text of reference is the Basic Act on Cybersecurity, while the European Union has a more fragmented legal structure (to name a few instruments, the Network and Information Security Directive, the Critical Infrastructure Directive and the Directive on Attacks against Information Systems). On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene seems to be neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators' liability by turning such a burden into a set of recommendations to be primarily observed by citizens. The reasons to fill such normative gaps are mostly grounded on three bases. Firstly, the cross-border nature of cybercrime requires considering both the magnitude of the issue and its regulatory stance globally. Secondly, empirical findings from the EUNITY project showed how recent data breaches and cyber-attacks had shared implications between Europe and Japan. Thirdly, the geopolitical context is currently moving in the direction of bringing the two regions to significant agreements from a trade standpoint, but also from a data protection perspective (with the imminent signature by both parties of a so-called 'Adequacy Decision'). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. With a view to the future challenges presented by the strengthening of the collaboration between the two regions and the transnational fashion of cybercrime, it is urged that solutions be found to fill in such gaps, in order to allow the European Union and Japan to wisely increment their partnership.
Keywords: cybersecurity, data protection, European Union, Japan
Procedia PDF Downloads 123
174 Leadership Education for Law Enforcement Mid-Level Managers: The Mediating Role of Effectiveness of Training on Transformational and Authentic Leadership Traits
Authors: Kevin Baxter, Ron Grove, James Pitney, John Harrison, Ozlem Gumus
Abstract:
The purpose of this research is to determine the mediating effect of the effectiveness of the training provided by Northwestern University's School of Police Staff and Command (SPSC) on the ability of law enforcement mid-level managers to learn transformational and authentic leadership traits. This study will also evaluate the leadership styles of course graduates compared to non-attendees using a static group comparison design. The Louisiana State Police pay approximately $40,000 in salary, tuition, housing, and meals for each state police lieutenant attending the 10-week SPSC program. The school lists the development of transformational leaders as an area of increasing emphasis. Additionally, the SPSC curriculum addresses all four components of authentic leadership: self-awareness, transparency, ethical/moral reasoning, and balanced processing. Upon return to law enforcement in roles of mid-level management, there are questions as to whether or not students revert to an 'autocratic' leadership style. Insufficient evidence exists to support claims for the effectiveness of management training or leadership development. Though it is widely recognized that transformational styles are beneficial to law enforcement, there is little evidence that suggests police leadership styles are changing. Police organizations continue to hold to a more transactional style (i.e., most senior police leaders remain autocrats). Additionally, research on the application of transformational, transactional, and laissez-faire leadership in police organizations is minimal. The population of the study is law enforcement mid-level managers from various states within the United States who completed leadership training presented by the SPSC. The sample will be composed of 66 active law enforcement mid-level managers (lieutenants and captains) who have graduated from the SPSC and 65 active law enforcement mid-level managers (lieutenants and captains) who have not attended the SPSC. Participants will answer demographic questions, the Multifactor Leadership Questionnaire (MLQ), the Authentic Leadership Questionnaire (ALQ), and the Kirkpatrick Hybrid Evaluation Survey. Descriptive statistics, group comparison, a one-way MANCOVA, and the Kirkpatrick Evaluation Model survey will be used to determine training effectiveness at the four levels of reaction, learning, behavior, and results. The independent variables are SPSC graduates (two groups: upper and lower) and non-SPSC attendees, and the dependent variables are transformational and authentic leadership scores. SPSC graduates are expected to have higher MLQ scores for transformational leadership traits and higher ALQ scores for authentic leadership traits than SPSC non-attendees. We also expect the graduates to rate the efficacy of SPSC leadership training as high. This study will validate (or invalidate) the benefits, costs, and resources required for leadership development from a nationally recognized police leadership program, and it will also help fill the gap in the literature that exists between law enforcement professional development and transformational and authentic leadership styles.
Keywords: training effectiveness, transformational leadership, authentic leadership, law enforcement mid-level manager
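A one-way MANCOVA of the kind planned here is straightforward to set up; the Python sketch below is a minimal illustration, not the study's analysis. The column names (mlq, alq, group) and the covariate years_service are hypothetical, the data are made up, and the study's actual covariates are not specified in the abstract.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical layout: one row per mid-level manager.
df = pd.DataFrame({
    "mlq": [3.4, 3.9, 3.7, 4.0, 2.8, 3.1, 3.0, 3.3],   # transformational score
    "alq": [3.6, 4.1, 3.8, 4.2, 3.0, 3.2, 3.1, 3.4],   # authentic score
    "group": ["spsc"] * 4 + ["none"] * 4,              # graduate vs. non-attendee
    "years_service": [12, 15, 13, 16, 11, 14, 12, 15], # example covariate
})

# One-way MANCOVA: two dependent variables, one factor, one covariate.
model = MANOVA.from_formula("mlq + alq ~ group + years_service", data=df)
print(model.mv_test())   # Wilks' lambda, Pillai's trace, etc.
```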
Procedia PDF Downloads 105
173 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding
Authors: Ines Oliveira, Ana Reis
Abstract:
Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed and low-angle impact between two metallic surfaces. It has the same working principle as Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous when compared to other impact processes. Nevertheless, MPW's current applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. Accordingly, the mechanical resistance, morphology and structure of the weld interface in MPW of the Al/Cu dissimilar pair were investigated. The effects of process parameters, namely gap, standoff distance and energy, were studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in welds of the Al/Cu dissimilar pair produced by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets. The thickness and the composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes can occur during the process. It is also found that lower pulse energies are preferred. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating will be generated. In addition, more energy means higher flyer velocity; the air existing in the gap between the parts to be welded is expelled, and the aerodynamic drag (fluid friction), which is proportional to the square of the velocity, further contributes to the generation of heat. As the kinetic energy also increases with the square of velocity, the dissipation of this energy through plastic work and jet generation will also contribute to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. The bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel's surface conditions. No clear correlation was identified between surface roughness/scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.
Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation
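The velocity-squared scalings invoked in the heating argument are the standard ones; written out with generic symbols (not values from the study):

\[
  F_{d} = \tfrac{1}{2}\,\rho\, C_{d}\, A\, v^{2},
  \qquad
  E_{k} = \tfrac{1}{2}\, m\, v^{2}
\]

Both the drag force and the kinetic energy grow quadratically with flyer velocity, which is why raising the pulse energy disproportionately increases the heat available for intermetallic formation.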
Procedia PDF Downloads 211
172 42CrMo4 Steel Flow Behavior Characterization for High Temperature Closed Dies Hot Forging in Automotive Components Applications
Authors: O. Bilbao, I. Loizaga, F. A. Girot, A. Torregaray
Abstract:
The current energy situation and the high competitiveness in industrial sectors such as the automotive one have made the development of new manufacturing processes with lower energy and raw material consumption a real necessity. As a consequence, new forming processes related to high-temperature hot forging in closed dies have emerged in recent years as solutions to expand the possibilities of hot forging and iron casting in the automotive industry. These technologies are mid-way between hot forging and semi-solid metal processes, working at temperatures higher than hot forging but below the solidus temperature, i.e., below the semi-solid range, where no liquid phase is expected. This represents an advantage compared with semi-solid forming processes such as thixoforging, because temperatures as high as the semi-solid range need not be reached in the case of high-melting-point alloys such as steels, reducing the manufacturing costs and the difficulties associated with their semi-solid processing. Compared with hot forging, this kind of technology allows the production of parts with as-forged properties and more complex, near-net shapes (thinner sidewalls), enhancing the possibility of designing lightweight components. From the process viewpoint, the forging forces are significantly decreased, and a significant reduction of the raw material, energy consumption, and number of forging steps has been demonstrated. Despite the mentioned advantages, from the material behavior point of view, the expansion of these technologies has shown the necessity of developing new material flow behavior models in the process working temperature range to make the simulation and prediction of these new forming processes feasible. Moreover, knowledge of the material flow behavior in the working temperature range also allows the design of the new closed-die concepts required. In this work, the flow behavior in the mentioned temperature range of the 42CrMo4 steel, widely used in commercial automotive components, has been characterized. For that purpose, hot compression tests have been carried out in a thermomechanical tester over a temperature range that covers the material behavior from hot forging up to the NDT (Nil Ductility Temperature) temperature (1250 ºC, 1275 ºC, 1300 ºC, 1325 ºC, 1350 ºC, and 1375 ºC). As for the strain rates, three different orders of magnitude have been considered (0.1 s-1, 1 s-1, and 10 s-1). The results obtained from the hot compression tests have then been treated in order to adapt, or re-write, the Spittel model, widely used in commercial automotive software such as FORGE®, which restricts the currently existing models to temperatures up to 1250 ºC. Finally, the new flow behavior model obtained has been validated by the process simulation of a commercial automotive component and by comparison of the simulation results with experimental tests already carried out in a laboratory cell of the new technology. As a conclusion of the study, a new flow behavior model for the 42CrMo4 steel in the new working temperature range, together with the new process simulation of its application to commercial automotive components, has been achieved and will be shown.
Keywords: 42CrMo4 high temperature flow behavior, high temperature hot forging in closed dies, simulation of automotive commercial components, spittel flow behavior model
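For context, the Spittel (Hensel-Spittel) formulation referenced here expresses flow stress as a product of temperature, strain and strain-rate terms. A commonly used reduced form is shown below; the full model carries additional m-coefficients, and the coefficients fitted in this study are not given in the abstract:

\[
  \sigma = A \, e^{m_{1} T} \, \varepsilon^{m_{2}} \, \dot{\varepsilon}^{m_{3}} \, e^{m_{4}/\varepsilon}
\]

where sigma is the flow stress, T the temperature, epsilon the strain, epsilon-dot the strain rate, and A, m1 to m4 are regression coefficients fitted to the compression data. Extending the model to 1375 ºC amounts to refitting these coefficients over the new temperature range.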
Procedia PDF Downloads 129
171 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many researchers have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we proposed a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability was formed by a generalized RBF kernel with different bandwidths with respect to different features. Moreover, it considered both the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths such that the within-class separability is smallest and the between-class separability is largest simultaneously. This indicates that the corresponding feature space is more suitable for classification; in addition, the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as weights of the bands. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset, and all non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features: the accuracies with respect to feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) approximates the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, we obtain similar results. These results illustrate that our proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine a suitable feature subset first, according to specific purposes; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This not only improves the classification performance but also reduces the cost of obtaining hyperspectral images.
Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
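A compact sketch of the measure described above, assuming nothing beyond the abstract: an RBF kernel with one bandwidth per band, a within- minus between-class similarity score that a genetic algorithm could use as its fitness, and band ranking by reciprocal bandwidth. The study's exact fitness definition may differ from this illustration.

```python
import numpy as np

def ard_rbf_kernel(X1, X2, bandwidths):
    """Generalized RBF kernel with one bandwidth per spectral band."""
    w = 1.0 / (2.0 * bandwidths ** 2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * w).sum(axis=-1)
    return np.exp(-d2)

def separability(X, y, bandwidths):
    """Within-class minus between-class mean similarity; a GA could
    maximize this fitness when tuning the per-band bandwidths."""
    K = ard_rbf_kernel(X, X, bandwidths)
    same = y[:, None] == y[None, :]
    np.fill_diagonal(same, False)          # ignore self-similarity
    return K[same].mean() - K[~same].mean()

def rank_bands(bandwidths):
    """Order bands by descending reciprocal bandwidth: the smaller the
    bandwidth, the larger the weight, the more important the band."""
    return np.argsort(-1.0 / bandwidths)
```

Selecting the first k entries of rank_bands(...) yields the nested feature subsets (10, 20, 50, ... bands) whose accuracies the experiments report.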
Procedia PDF Downloads 265
170 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets
Authors: Basiru Amuneni
Abstract:
Astronomy is one domain with a rapid rise in data. Traditional tools for data management have been employed in the quest for knowledge discovery; however, these traditional tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is the use of scientific visualisation. The aim of this work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualise multiwavelength datasets. Game engines are primarily used for developing video games; however, their advanced graphics could be exploited for scientific visualisation, which provides a means to graphically illustrate scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by telescopes several times and at different electromagnetic wavelengths to build a more comprehensive picture of its physical characteristics. Visualising this in an immersive environment would be more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity game engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails the act of 'using design as a research method or technique'. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation and VR display. The FITS data format cannot be read by the Unity game engine directly, so a DLL (CSHARPFITS), which provides native support for reading and writing FITS files, was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity 3D game engine. The cleaned FITS images are input to the galaxy modeller pipeline phase, which has a pre-processing script that extracts pixels, computes galaxy world positions, and colour-maps the FITS image pixels. The user can visualise image galaxies in different light bands, control the blend of the image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools to realise complex workflows for public outreach, and possibly scientific work, with increased scalability, near-real-time interactivity and ease of access. The application is presented in an immersive environment and can use all commercially available headsets built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in/out, and change the colour gradients of the galaxy. The findings and design lessons learnt in the implementation of different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.
Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality
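The pipeline itself is C#/Unity, but the pre-processing stage it describes (read a FITS image, clean the pixels, emit positions and intensities for a renderer) is easy to illustrate. The sketch below is a Python analogue using astropy, not the CSHARPFITS code used in the project, and the clipping percentile is an assumed cleaning choice.

```python
import numpy as np
from astropy.io import fits

def prepare_galaxy_points(path, percentile=99.5):
    """Mimic the pre-processing stage: read a FITS image, clean it, and
    emit per-pixel positions and normalised intensities for a renderer."""
    with fits.open(path) as hdul:
        data = np.nan_to_num(hdul[0].data.astype(float))
    data = np.clip(data, 0, np.percentile(data, percentile))  # clip outliers
    data /= data.max() or 1.0                                 # 0..1 intensity
    ys, xs = np.nonzero(data)
    # (x, y, intensity) triples a game engine could map to world
    # positions and a colour gradient.
    return np.column_stack([xs, ys, data[ys, xs]])
```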
Procedia PDF Downloads 92
169 Modeling and Design of a Solar Thermal Open Volumetric Air Receiver
Authors: Piyush Sharma, Laltu Chandra, P. S. Ghoshdastidar, Rajiv Shekhar
Abstract:
Metals processing operations such as melting and heat treatment of metals are energy-intensive, requiring temperatures greater than 500 ºC. The desired temperature in these industrial furnaces is attained by circulating electrically heated air. In most of these furnaces, electricity produced from captive coal-based thermal power plants is used. Solar thermal energy could be a viable heat source in these furnaces. A retrofitted solar convective furnace (SCF) concept, which uses solar-thermally generated hot air, has been proposed. Critical to the success of an SCF is the design of an open volumetric air receiver (OVAR), which can heat air in excess of 800 ºC. The OVAR is placed on top of a tower and receives concentrated solar radiation from a heliostat field. Absorbers, the mixer assembly, and the return air flow chamber (RAFC) are the major components of an OVAR. The absorber is a porous structure that transfers heat from concentrated solar radiation to ambient air, referred to as primary air. The mixer ensures a uniform air temperature at the receiver exit. The flow of the relatively cooler return air in the RAFC ensures that the absorbers do not fail by overheating. In an earlier publication, the detailed design basis, fabrication, and characterization of a 2 kWth open volumetric air receiver (OVAR) based laboratory solar air tower simulator were presented. The development of an experimentally validated, CFD-based mathematical model, which can ultimately be used for the design and scale-up of an OVAR, has been the major objective of this investigation. In contrast to the published literature, where flow and heat transfer have been modeled primarily in a single absorber module, the present study has modeled the entire receiver assembly, including the RAFC. Flow and heat transfer calculations have been carried out in ANSYS using the LTNE model. The complex return air flow pattern in the RAFC requires complicated meshes and is computationally and time intensive. Hence, a simple, realistic 1-D mathematical model, which circumvents the need for carrying out detailed flow and heat transfer calculations, has also been proposed. Several important results have emerged from this investigation. Circumferential electrical heating of the absorbers can mimic frontal heating by concentrated solar radiation reasonably well when testing and characterizing the performance of an OVAR; circumferential heating, therefore, obviates the need for expensive high-solar-concentration simulators. Predictions suggest that the ratio of power on aperture (POA) to mass flow rate of air (MFR) is a normalizing parameter for characterizing the thermal performance of an OVAR. Increasing POA/MFR increases the maximum temperature of the air but decreases the thermal efficiency of the OVAR. Predictions of the 1-D mathematical model are within 5% of the ANSYS predictions, and the computation time is reduced from ~5 hours to a few seconds.
Keywords: absorbers, mixer assembly, open volumetric air receiver, return air flow chamber, solar thermal energy
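Why POA/MFR acts as a normalizing parameter follows directly from a lumped steady-state energy balance. The toy calculation below is consistent with the abstract's trend but is not the study's 1-D model: the thermal efficiency eta is an arbitrary assumed constant, whereas the study finds that efficiency itself drops as POA/MFR grows.

```python
def receiver_exit_temperature(poa_kw, mfr_kg_s, t_in_c=25.0, eta=0.7, cp=1.005):
    """Air exit temperature from a lumped energy balance:
    eta * POA = MFR * cp * (T_out - T_in), with cp of air in kJ/(kg K).
    eta (thermal efficiency) and t_in_c are assumed values."""
    return t_in_c + eta * poa_kw / (mfr_kg_s * cp)

# Exit temperature depends on POA and MFR only through their ratio:
for ratio in (200.0, 500.0, 1000.0):          # kW per (kg/s)
    t_out = receiver_exit_temperature(poa_kw=ratio, mfr_kg_s=1.0)
    print(f"POA/MFR = {ratio:6.0f} -> T_out ~ {t_out:5.0f} C")
```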
Procedia PDF Downloads 197
168 Adopting Data Science and Citizen Science to Explore the Development of African Indigenous Agricultural Knowledge Platform
Authors: Steven Sam, Ximena Schmidt, Hugh Dickinson, Jens Jensen
Abstract:
The goal of this study is to explore the potential of data science and citizen science approaches to develop an interactive, digital, open infrastructure that pulls together African indigenous agriculture and food systems data from multiple sources, making it accessible and reusable for policy, research and practice in modern food production efforts. The World Bank has recognised that African Indigenous Knowledge (AIK) is innovative and unique among local subsistence smallholder farmers, and it is central to sustainable food production and to enhancing biodiversity and natural resources in many poor, rural societies. AIK refers to tacit knowledge, held in different languages, cultures and skills, passed down from generation to generation by word of mouth. AIK is a key driver of food production, preservation, and consumption for more than 80% of citizens in Africa and can therefore assist modern efforts to reduce food insecurity and hunger. However, the documentation and dissemination of AIK remain a big challenge confronting librarians and other information professionals in Africa, and there is a risk of losing AIK owing to urban migration, modernisation, land grabbing, and the emergence of relatively small-scale commercial farming businesses. There is also a clear disconnect between AIK and scientific knowledge and modern efforts for sustainable food production. The study combines data science and citizen science approaches, through active community participation, to generate and share AIK for facilitating learning and promoting knowledge that is relevant for policy intervention and sustainable food production through a curated digital platform based on FAIR principles. The study adopts key informant interviews along with a participatory photo and video elicitation approach, where farmers are given digital devices (mobile phones) to record and document their practices involving agriculture, food production, processing, and consumption by traditional means. Data collected are analysed using the UK Science and Technology Facilities Council's proven methodology of citizen science (Zooniverse) and data science. Outcomes are presented in participatory stakeholder workshops, where the researchers outline plans for creating the platform and developing the knowledge-sharing standard framework and copyright agreement. Overall, the study shows that learning from AIK, by investigating what local communities know and have, can improve understanding of food production and consumption, in particular in times of stress or shocks affecting food systems and communities. Thus, the platform can be useful for local populations, research, and policy-makers, and it could lead to transformative innovation in the food system, creating a fundamental shift in the way the North supports sustainable, modern food production efforts in Africa.
Keywords: Africa indigenous agriculture knowledge, citizen science, data science, sustainable food production, traditional food system
Procedia PDF Downloads 82
167 Organic Light Emitting Devices Based on Low Symmetry Coordination Structured Lanthanide Complexes
Authors: Zubair Ahmed, Andrea Barbieri
Abstract:
The need to reduce energy consumption has prompted a considerable research effort to develop alternative energy-efficient lighting systems to replace conventional light sources (i.e., incandescent and fluorescent lamps). Organic light emitting device (OLED) technology offers the distinctive possibility of fabricating large-area flat devices by vacuum or solution processing. Lanthanide β-diketonate complexes, owing to the unique photophysical properties of Ln(III) ions, have been explored as emitting layers in OLED displays and in solid-state lighting (SSL) in order to achieve high efficiency and color purity. For such applications, an excellent photoluminescence quantum yield (PLQY) and stability are the two key points, which can be achieved simply by selecting the proper organic ligands around the Ln ion in the coordination sphere. Regarding strategies to enhance the PLQY, the most common is the suppression of radiationless deactivation pathways due to the presence of high-frequency oscillators (e.g., OH, –CH groups) around the Ln centre. Recently, a different approach to maximize the PLQY of Ln(β-DKs) has been proposed (named 'Escalate Coordination Anisotropy', ECA). It is based on the assumption that coordinating the Ln ion with different ligands will break the centrosymmetry of the molecule, leading to less forbidden transitions (loosening the constraints of the Laporte rule). OLEDs based on such complexes are available, but with low efficiency and stability. In order to obtain efficient devices, there is a need to develop new Ln complexes with enhanced PLQYs and stabilities. For this purpose, Ln complexes, both visible- and NIR-emitting, with various coordination structures based on fluorinated/non-fluorinated β-diketones and O/N-donor neutral ligands, were synthesized using a one-step in situ method. In this method, the β-diketones, base, LnCl₃·nH₂O and neutral ligands were mixed in a 3:3:1:1 molar ratio in ethanol, which gave air- and moisture-stable complexes. They were then characterized by means of elemental analysis, NMR spectroscopy and single-crystal X-ray diffraction, and their photophysical properties were studied to select the best complexes for the fabrication of stable and efficient OLEDs. Finally, OLEDs were fabricated and investigated using these complexes as emitting layers, along with other organic layers such as NPB (N,N′-di(1-naphthyl)-N,N′-diphenyl-(1,1′-biphenyl)-4,4′-diamine; hole-transporting layer), BCP (2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline; hole-blocker) and Alq3 (electron-transporting layer). The layers were sequentially deposited under a high-vacuum environment by thermal evaporation onto ITO glass substrates. Moreover, co-deposition techniques were used to improve charge transport in the devices and to avoid quenching phenomena. The devices show strong electroluminescence at 612, 998, 1064 and 1534 nm, corresponding to the ⁵D₀ → ⁷F₂ (Eu), ²F₅/₂ → ²F₇/₂ (Yb), ⁴F₃/₂ → ⁴I₉/₂ (Nd) and ⁴I₁₃/₂ → ⁴I₁₅/₂ (Er) transitions. All the fabricated devices show good efficiency as well as stability.
Keywords: electroluminescence, lanthanides, paramagnetic NMR, photoluminescence
Procedia PDF Downloads 121
166 Shock-Induced Densification in Glass Materials: A Non-Equilibrium Molecular Dynamics Study
Authors: Richard Renou, Laurent Soulard
Abstract:
Lasers are widely used in glass material processing, from waveguide fabrication to channel drilling. The gradual damage of glass optics under UV lasers is also an important issue to be addressed. Glass materials (including metallic glasses) can undergo permanent densification under laser-induced shock loading. Despite increased interest in the interactions between lasers and glass materials, little is known about the structural mechanisms involved under shock loading. For example, the densification process in silica glasses occurs between 8 GPa and 30 GPa; above 30 GPa, the glass material returns to its original density after relaxation. Investigating these unusual mechanisms in silica glass will provide an overall better understanding of glass behaviour. Non-Equilibrium Molecular Dynamics (NEMD) simulations were carried out in order to gain insight into the microscopic structure of silica glass under shock loading. The shock was generated by the use of a piston impacting the glass material at high velocity (from 100 m/s up to 2 km/s). Periodic boundary conditions were used in the directions perpendicular to the shock propagation to model an infinite system; one-dimensional shock propagation was therefore studied. Simulations were performed with the STAMP code developed by the CEA. A very specific structure is observed in a silica glass: oxygen atoms around silicon atoms are organized in tetrahedra, and those tetrahedra are linked and tend to form rings inside the structure. A significant amount of empty cavities is also observed in glass materials. In order to understand how shock loading impacts the overall structure, the tetrahedra, the rings and the cavities were thoroughly analysed. An elastic behaviour is observed when the shock pressure is below 8 GPa, consistent with the Hugoniot Elastic Limit (HEL) of 8.8 GPa estimated experimentally for silica glasses. Behind the shock front, the ring structure and the cavity distribution are affected: the ring volume is smaller, and most cavities disappear with increasing shock pressure. However, the tetrahedral structure is not affected. The elasticity of the glass structure is therefore related to ring shrinking and cavity closing. Above the HEL, the shock pressure is high enough to impact the tetrahedral structure. An increasing number of hexahedra and octahedra are formed with increasing pressure, and the large rings break to form smaller ones. The cavities are, however, not impacted, as most cavities are already closed under an elastic shock. After the material relaxation, a significant amount of hexahedra and octahedra is still observed, and most of the cavities remain closed. The overall ring distribution after relaxation is similar to the equilibrium distribution. The densification process is therefore related to two structural mechanisms: a change in the coordination of silicon atoms and cavity closing. To sum up, non-equilibrium molecular dynamics simulations were carried out to investigate silica behaviour under shock loading. Analysing the structure led to interesting conclusions about the elastic and densification mechanisms in glass materials. This work will be completed with a detailed study of the mechanisms occurring above 30 GPa, where no sign of densification is observed after the material relaxation.
Keywords: densification, molecular dynamics simulations, shock loading, silica glass
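The quoted piston velocities (100 m/s to 2 km/s) map onto shock pressures through the Rankine-Hugoniot momentum balance, P = rho0 * Us * up. The sketch below assumes a linear Us-up fit; rho0 ~ 2200 kg/m3 is typical of fused silica, but c0 and s are placeholder values rather than fitted Hugoniot constants (silica's shock response is notably anomalous), so the numbers are order-of-magnitude only.

```python
def shock_pressure(up, rho0=2200.0, c0=5900.0, s=1.0):
    """Rankine-Hugoniot momentum balance P = rho0 * Us * up with a linear
    shock-velocity fit Us = c0 + s*up. rho0 in kg/m^3, velocities in m/s;
    c0 and s are illustrative placeholders, not fitted constants.
    Returns pressure in GPa."""
    us = c0 + s * up
    return rho0 * us * up / 1e9

for up in (100.0, 1000.0, 2000.0):   # piston velocities from the study
    print(f"up = {up:6.0f} m/s -> P ~ {shock_pressure(up):5.1f} GPa")
```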
Procedia PDF Downloads 222
165 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes
Authors: Iris Vural Gursel, Andrea Ramirez
Abstract:
Effective utilization of lignin is an important means of developing economically profitable biorefineries. The current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will therefore be needed to carry lignin transformation well beyond combustion to produce energy, towards high-value products such as chemicals and transportation fuels. In recent years, significant progress in catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes, and a comparison with the more established lignin pyrolysis route, were made. The aim is to provide insights into the potential performance and potential hotspots in order to guide the experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were: (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. The products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a base design followed by process modelling in Aspen was done using experimental yields. A design capacity of 200 kt/year lignin feed was chosen, which is equivalent to a 1 Mt/y scale lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the product streams defined. For determining the external utility requirement, heat integration was considered, and when possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators. The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found to be especially demanding in the hydrothermal upgrading process, due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Due to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were found to be the highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process is found to be feasible, with a positive net present value; however, it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a certain process should be given further consideration. Among the three processes investigated here, the direct HDO process was seen to be the most promising.
Keywords: biorefinery, economic assessment, lignin conversion, process design
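The two indicators are simple to compute. The sketch below uses common undiscounted definitions with made-up illustrative numbers; the study's own cost data are not reproduced, and its exact ROI/PBP definitions may differ (discounting, taxes, or depreciation would change the relationship between the two).

```python
def roi_and_payback(capex, opex, revenue):
    """Simple undiscounted indicators used in early-stage TEA.
    ROI = annual profit / capital investment; PBP = capex / annual profit.
    All inputs in the same currency unit (e.g., M EUR, M EUR/year)."""
    profit = revenue - opex
    return profit / capex, capex / profit

# Illustrative numbers only -- not the study's actual cost data.
roi, pbp = roi_and_payback(capex=100.0, opex=40.0, revenue=52.0)
print(f"ROI = {roi:.0%}, payback = {pbp:.1f} years")
```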
Procedia PDF Downloads 261
164 Green Production of Chitosan Nanoparticles and their Potential as Antimicrobial Agents
Authors: L. P. Gomes, G. F. Araújo, Y. M. L. Cordeiro, C. T. Andrade, E. M. Del Aguila, V. M. F. Paschoalin
Abstract:
The application of nanoscale materials and nanostructures is an emerging area, since these materials may provide solutions to technological and environmental challenges while preserving the environment and natural resources. To reach this goal, the increasing demand must be accompanied by 'green' synthesis methods. Chitosan is a natural, nontoxic biopolymer derived by the deacetylation of chitin and has great potential for a wide range of applications in the biological and biomedical areas, due to its biodegradability, biocompatibility, non-toxicity and versatile chemical and physical properties. Chitosan also presents high antimicrobial activity against a wide variety of pathogenic and spoilage microorganisms. Ultrasonication is a common tool for the preparation and processing of polymer nanoparticles. It is particularly effective in breaking up aggregates and in reducing the size and polydispersity of nanoparticles. High-intensity ultrasonication has the potential to modify chitosan's molecular weight and thus alter or improve its functional properties. The aim of this study was to evaluate the influence of sonication intensity and time on the characteristics of commercial chitosan, such as molecular weight, and on its potential antibacterial activity against Gram-negative bacteria. The nanoparticles (NPs) were produced from two commercial chitosans from Sigma-Aldrich®: medium molecular weight (CS-MMW) and low molecular weight (CS-LMW). These samples (2%) were solubilized in 100 mM sodium acetate, pH 4.0, placed on ice and irradiated with a SONIC ultrasonic probe (model 750 W) equipped with a 1/2" microtip for 30 min at 4 °C, using a constant duty cycle and 40% amplitude with 1/1 s intervals. The ultrasonic degradation of CS-MMW and CS-LMW was followed by means of ζ-potential (Brookhaven Instruments, model 90Plus) and dynamic light scattering (DLS) measurements. After sonication, the concentrated samples were diluted 100 times and placed in fluorescence quartz cuvettes (Hellma 111-QS, 10 mm light path). The distributions of the colloidal particles were calculated from the DLS and ζ-potential measurements taken for the CS-MMW and CS-LMW solutions before and after sonication for 30 min (CS-MMW30 and CS-LMW30). Regarding the results for the chitosan samples, the major peaks, centered at the hydrodynamic radius (Rh), showed different distributions for CS-MMW (Rh = 690.0 nm, ζ = 26.52 ± 2.4), CS-LMW (Rh = 607.4 and 2805.4 nm, ζ = 24.51 ± 1.29), CS-MMW30 (Rh = 201.5 and 1064.1 nm, ζ = 24.78 ± 2.4) and CS-LMW30 (Rh = 492.5 nm, ζ = 26.12 ± 0.85). The minimal inhibitory concentration (MIC) was determined using different concentrations of the chitosan samples. MIC values were determined against E. coli (10⁶ cells) harvested from an LB medium (Luria-Bertani, BD™) after 18 h of growth at 37 ºC. Subsequently, the cell suspension was serially diluted in saline solution (0.8% NaCl) and plated on solid LB at 37 °C for 18 h, and colony-forming units were counted. The samples showed different MICs against E. coli: CS-LMW (1.5 mg/mL), CS-MMW30 (1.5 mg/mL) and CS-LMW30 (1.0 mg/mL). The results demonstrate that the production of nanoparticles with modified molecular weight by ultrasonication is simple to perform and dispenses with acid solvent addition. The molecular weight modifications are enough to provoke changes in the antimicrobial potential of the nanoparticles produced in this way.
Keywords: antimicrobial agent, chitosan, green production, nanoparticles
Procedia PDF Downloads 327
163 Soybean Oil Based Phase Change Material for Thermal Energy Storage
Authors: Emre Basturk, Memet Vezir Kahraman
Abstract:
In many developing countries, with rapid economic improvements, energy shortages and environmental issues have become a serious problem. Therefore, improving energy usage efficiency while protecting the environment has become a critical issue. A thermal energy storage system is an essential approach to matching thermal energy demand and supply. Thermal energy can be stored by heating, cooling or melting a material, the energy then becoming available when the procedure is reversed. Thermal energy storage techniques are generally sorted into latent heat and sensible heat technology segments. Among these methods, latent heat storage is the most effective method of collecting thermal energy. Latent heat thermal energy storage depends on the storage material emitting or discharging heat as it undergoes a solid-to-liquid, solid-to-solid or liquid-to-gas phase change, or vice versa. Phase change materials (PCMs) are promising materials for latent heat storage applications due to their capacity to store high latent heat per unit volume by changing phase at an almost constant temperature. PCMs absorb, collect and discharge thermal energy during the cycle of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. Organic PCMs are rather expensive, have average latent heat storage per unit volume and have low density. Most organic PCMs are combustible in nature and have a wide range of melting points. Organic PCMs can be categorized into two major categories: non-paraffinic and paraffin materials. Paraffin materials have been extensively used due to their high latent heat and favourable thermal characteristics, such as minimal supercooling, a range of phase change temperatures, low vapor pressure while melting, good chemical and thermal stability, and self-nucleating behavior. Ultraviolet (UV) curing technology has been widely used because it has many advantages, such as low energy consumption, high speed, high chemical stability, room-temperature operation, low processing costs and environmental friendliness. For many years, PCMs have been used in heating and cooling industrial applications, including textiles, refrigerators, construction, transportation packaging for temperature-sensitive products, a few solar-energy-based systems, and biomedical and electronic materials. In this study, UV-curable, fatty alcohol-containing, soybean oil-based phase change materials (PCMs) were obtained and characterized. The phase transition behaviors and thermal stability of the prepared UV-cured biobased PCMs were analyzed by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The heating-process phase change enthalpy is measured between 30 and 68 J/g, and the freezing-process phase change enthalpy is found between 18 and 70 J/g. The decomposition of the UV-cured PCMs started at 260 ºC and reached a maximum at 430 ºC.
Keywords: fatty alcohol, phase change material, thermal energy storage, UV curing
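How a measured phase-change enthalpy translates into storage capacity is a one-line energy balance: sensible heat below and above the melting point plus the latent heat of fusion. In the sketch below, only dh_melt = 68 J/g comes from the abstract; the cp values and temperatures are placeholder assumptions for illustration.

```python
def stored_heat(mass_g, cp_solid=2.0, cp_liquid=2.0, dh_melt=68.0,
                t_start=20.0, t_melt=40.0, t_end=60.0):
    """Total heat absorbed by a PCM charged across its melting point (J):
    Q = m * (cp_s*(T_melt - T_start) + dH_melt + cp_l*(T_end - T_melt)).
    cp values (J/g K) and temperatures (C) are assumed placeholders;
    dh_melt is the upper enthalpy reported in the abstract."""
    return mass_g * (cp_solid * (t_melt - t_start)
                     + dh_melt
                     + cp_liquid * (t_end - t_melt))

print(stored_heat(100.0), "J")  # heat stored by 100 g of PCM over 20-60 C
```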
Procedia PDF Downloads 383
162 The Effects of Goal Setting and Feedback on Inhibitory Performance
Authors: Mami Miyasaka, Kaichi Yanaoka
Abstract:
Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcements (i.e., reward or punishment) help improve the performance of children with such difficulties. However, in order to optimize impact, reward and punishment must be presented immediately following the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performances, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performances. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition. The go/no-go task is a cognitive task to measure inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors obtained in response to a no-go stimulus indicated inhibitory impairment. To examine the effect of goal-setting on inhibitory control, 37 children (Mage = 9.49 ± 0.51) were required to set a performance goal, and 34 children (Mage = 9.44 ± 0.50) were not. Further, to manipulate the presence of feedback, in one go/no-go task, no information about children’s scores was provided; however, scores were revealed for the other type of go/no-go tasks. The results revealed a significant interaction between goal setting and feedback. However, three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicated that goal setting was effective for improving the performance of the go/no-go task only with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them unexpectedly more impulsive. Taken together, giving feedback was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective for improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback. 
This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation. Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control
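To make the scoring of the go/no-go task concrete, here is a minimal Python sketch of how inhibitory performance is typically quantified. This is an editorial illustration, not the authors' experimental software; the trial format and measure names are generic assumptions.

```python
# Minimal sketch of go/no-go scoring: a commission error is a key press
# on a no-go trial, the standard index of inhibitory impairment.

def score_go_nogo(trials):
    """trials: list of (stimulus, responded) pairs, stimulus in {'go', 'nogo'}."""
    nogo = [t for t in trials if t[0] == 'nogo']
    go = [t for t in trials if t[0] == 'go']
    commission_errors = sum(1 for _, responded in nogo if responded)
    omission_errors = sum(1 for _, responded in go if not responded)
    return {
        'commission_rate': commission_errors / len(nogo),  # failed inhibition
        'omission_rate': omission_errors / len(go),        # missed go responses
    }

# Example: 3 go trials (all answered) and 2 no-go trials (one failed inhibition)
trials = [('go', True), ('go', True), ('go', True), ('nogo', True), ('nogo', False)]
print(score_go_nogo(trials))  # commission_rate = 0.5, omission_rate = 0.0
```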
Procedia PDF Downloads 104
161 From Victim to Ethical Agent: Oscar Wilde's The Ballad of Reading Gaol as Post-Traumatic Writing
Authors: Mona Salah El-Din Hassanein
Abstract:
Faced with a sudden, unexpected, and overwhelming event, the individual's normal cognitive processing may cease to function, trapping the psyche in "speechless terror", while images, feelings, and sensations are experienced with emotional intensity. Unable to master such a situation, the individual becomes a trauma victim who will be susceptible to traumatic recollections such as intrusive thoughts, flashbacks, and repetitive re-living of the primal event in a way that blurs the distinction between past and present and forecloses the future. Trauma is timeless, repetitious, and contagious; a trauma observer could fall prey to "secondary victimhood". Central to the process of healing the psychic wounds in the aftermath of trauma is verbalizing the traumatic experience (i.e., putting it into words) – an act which provides a chance for assimilation, testimony, and reevaluation. In light of this paradigm, this paper proposes a reading of Oscar Wilde's The Ballad of Reading Gaol, written shortly after his release from prison, as a post-traumatic text that traces the disruptive effects of the traumatic experience of Wilde's imprisonment for homosexual offences and the ensuing reversal of fortune he endured. Post-traumatic writing demonstrates the process of "working through" a trauma, which may lead to the possibility of ethical agency in the form of a "survivor mission". This paper draws on fundamental concepts and key insights in literary trauma theory, which is characterized by interdisciplinarity, combining the perspectives of fields such as critical theory, psychology, psychiatry, psychoanalysis, history, and social studies. Of particular relevance to this paper are the concepts of "vicarious traumatization" and "survivor mission", as The Ballad of Reading Gaol was written in response to Wilde's own prison trauma and the indirect traumatization he experienced as a result of witnessing the execution of a fellow prisoner, whose story forms the narrative base of the poem. The Ballad displays Wilde's sense of mission, which leads him to recognize the social as well as ethical implications of personal tragedy. Through a close textual analysis of The Ballad of Reading Gaol within the framework of literary trauma theory, the paper aims to: (a) demonstrate how the poem's thematic concerns, structure, and rhetorical figures reflect the structure of trauma; (b) highlight Wilde's attempts to come to terms with the effects of the cataclysmic experience which transformed him into a social outcast; and (c) show how Wilde manages to transcend the victim status and assume the role of ethical agent to voice a critique of the Victorian penal system and the standards of morality underlying the cruelties practiced against wrongdoers, and to solicit social action. Keywords: the ballad of reading gaol, post-traumatic writing, trauma theory, Wilde
Procedia PDF Downloads 186
160 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System
Authors: Masoud Mirzaee, Ghobad Behzadi Pour
Abstract:
An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. For most aircraft categories, tire assemblies are filled with compressed nitrogen, which supports the aircraft's weight on the ground and provides a mechanism for controlling the aircraft during taxi, takeoff, and landing, as well as traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. When the ambient temperature differs between the origin and destination airports, tire pressure should be adjusted so that the tire is inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal over-inflation limit of 5 percent applied at constant ambient temperature, is required so that the inflation pressure continues to support the load of the specified aircraft configuration. Without it, a tire assembly would be significantly under- or over-inflated at the destination. Human errors in the aviation industry impose exorbitant costs on airlines for consumable parts such as aircraft tires. An intelligent system that adjusts aircraft tire pressure based on weight, load, temperature, and weather conditions at the origin and destination airports could therefore significantly reduce aircraft maintenance costs and fuel consumption and mitigate the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle pressurized to 1,800 psi, and distribution lines. The nitrogen bottle's inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main and nose wheel assemblies. Control and monitoring of the nitrogen are performed by the computer, which adjusts the pressure according to calculations based on the received parameters, including the temperatures at the origin and destination airports, the weight of cargo and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, material consumption, and stresses imposed on the aircraft body. Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure
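The temperature correction at the heart of such a system follows directly from the gas law: at constant volume, absolute pressure scales with absolute temperature. The sketch below shows this calculation in Python; it is a minimal illustration under assumed values, not the actual ITPRS control algorithm, and all function and parameter names are hypothetical.

```python
# Gay-Lussac's law at constant tire volume: P1/T1 = P2/T2 (absolute units).
# Sketch of the kind of correction an ITPRS computer could apply.

def origin_inflation_psi(p_rated_gauge, t_origin_c, t_dest_c, p_atm=14.7):
    """Gauge pressure to set at the origin airport so the tire sits at its
    rated gauge pressure once it cools/warms to the destination temperature."""
    t_o, t_d = t_origin_c + 273.15, t_dest_c + 273.15   # Kelvin
    p_abs_dest = p_rated_gauge + p_atm                  # required absolute pressure
    p_abs_origin = p_abs_dest * t_o / t_d               # scale back to origin temp
    return p_abs_origin - p_atm

p_set = origin_inflation_psi(200.0, t_origin_c=30.0, t_dest_c=-10.0)
print(f"set {p_set:.1f} psi at origin ({p_set / 200.0 - 1.0:.1%} above rated)")
# ~232.6 psi, i.e. ~16% above rated: well beyond the normal 5% over-inflation
# margin, which is why the abstract treats this as a special adjustment.
```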
Procedia PDF Downloads 249
159 Understanding Patterns of Hard Coral Demographics in Kenyan Reefs to Inform Restoration
Authors: Swaleh Aboud, Mishal Gudka, David Obura
Abstract:
Background: Coral reefs are becoming increasingly vulnerable to threats ranging from climate change to overfishing. This has resulted in increased management and conservation efforts to protect reefs from degradation and facilitate recovery. Recruitment of new individuals is important in the recovery process and critical for the persistence of coral reef ecosystems. Local coral community structure can be influenced by successful recruit settlement, survival, and growth. Understanding coral recruitment patterns can help quantify reef resilience and connectivity, establish baselines and track changes, and evaluate the effectiveness of reef restoration and conservation efforts. This study examines the abundance and spatial pattern of coral recruits and how this relates to adult community structure, including the distribution of thermally resistant and thermally sensitive genera across different management regimes. Methods: Coral recruit and demography surveys were conducted from 2020 to 2022, covering 35 sites in 19 coral reef locations along the Kenyan coast. These included marine parks, reserves, community conservation areas (CMAs), and open access areas from the north (Marereni) to the south (Kisite) coast of Kenya and across different reef habitats. The data were collected through the underwater visual census (UVC) technique. We counted adult corals (>10 cm diameter) of 23 selected genera using belt transects (25 by 1 m) and sampled 1 m² quadrats (at intervals of 5 m) for all colonies less than 10 cm in diameter. Benthic cover was recorded using photo quadrats. The surveys were conducted only during the northeast monsoon season. The data were analyzed in R to examine distribution patterns, using the Kruskal-Wallis test to assess whether differences were significant. Spearman correlation was applied to assess the relationship between the distribution of coral genera in recruits and adults. Results: A total of 44 different coral genera were recorded for recruits, ranging from 3 at Marereni to 30 at Watamu Marine Reserve. Recruit densities ranged from 1.2 ± 1.5 recruits m⁻² (mean ± SD) at Likoni to 10.3 ± 8.4 recruits m⁻² at Kisite Marine Park. The overall density of recruits differed significantly between reef locations, with Kisite Marine Park and Reserve and Likoni differing most strongly from all the other locations, while Vuma, Watamu, Malindi, and Kilifi showed smaller differences. The recruit genera densities along the Kenyan coast fell into two clusters, one of which included only sites in Kisite Marine Park. Adult colonies were dominated by massive Porites, Acropora, Platygyra, and Favites, whereas recruits were dominated by branching Porites, massive Porites, Galaxea, and Acropora. However, correlation analysis revealed a statistically significant positive correlation (r = 0.81, p < 0.05) between recruit and adult coral densities across the 23 coral genera. Marereni, which had the lowest density of recruits, has only thermally resistant coral genera, while Kisite Marine Park, with the highest recruit densities, has over 90% thermally sensitive coral genera. A weak positive correlation was found between recruit density and coralline algae, dead standing corals, and turf algae, whereas a weak negative correlation was found between recruit density and bare substrate and macroalgae.
Between management regimes, marine reserves were found to have more recruits than no-take zones (marine parks and CMAs) and open access areas, although the difference was not significant. Conclusion: There was a statistically significant difference in the density of recruits between different reef locations along the Kenyan coast. Although the dominant genera of adults and recruits differed, there was a strong positive correlation between their coral communities, which could indicate self-recruitment processes or consistent distant seeding (of the same recruit genera). Sites such as Kisite Marine Park, with high recruit densities but dominated by thermally sensitive genera, will, on the other hand, be adversely affected by future thermal stress. This implies that reducing threats to coral reefs, such as overfishing, could allow for their natural regeneration and recovery. Keywords: coral recruits, coral adult size-class, coral demography, resilience
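A minimal sketch of the two tests named above, written in Python with scipy rather than R (the analysis environment used in the study); the density values below are placeholders, not the survey data.

```python
# Sketch of the reported analysis: Kruskal-Wallis across reef locations and
# Spearman correlation between recruit and adult densities per genus.
import numpy as np
from scipy.stats import kruskal, spearmanr

# Placeholder recruit densities (recruits per m^2) for three locations.
kisite = np.array([10.1, 9.5, 11.2, 8.7])
likoni = np.array([1.0, 1.5, 0.8, 1.6])
watamu = np.array([4.2, 3.9, 5.1, 4.4])

h, p = kruskal(kisite, likoni, watamu)
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")  # do locations differ?

# Placeholder per-genus densities for recruits vs adults (23 genera in the study).
rng = np.random.default_rng(0)
adult = rng.gamma(2.0, 1.0, size=23)
recruit = 0.8 * adult + rng.normal(0, 0.3, size=23)
rho, p = spearmanr(recruit, adult)
print(f"Spearman: rho={rho:.2f}, p={p:.4f}")  # study reports r = 0.81, p < 0.05
```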
Procedia PDF Downloads 124
158 Achieving Sustainable Agriculture with Treated Municipal Wastewater
Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi
Abstract:
Fresh water is a scarce resource that is essential for humans and ecosystems, but its distribution is uneven. Agricultural production accounts for about 70% of all freshwater withdrawals. It is projected that, against an expansion of the area equipped for irrigation of 0.6% per year, global potential irrigation water demand will rise by 9.5% during 2021-25. This demand would, on one hand, have to compete with sharply rising urban water demand; on the other, it faces the threat of climate change, as temperatures rise and crop yields could drop by 10-30% in many large areas. The huge demand for irrigation, combined with fresh water scarcity, encourages exploration of wastewater reuse as a resource. However, the use of such wastewater is often linked to safety issues when applied non-judiciously or with poor safeguards while irrigating food crops. Paddy is one of the major crops globally and among the most important in South Asia and Africa. In many parts of the world, the use of municipal wastewater has been promoted as a viable option in this regard. In developing and fast-growing countries like India, steadily increasing wastewater generation rates may allow this option to be considered quite seriously. In view of this, a pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant situated in Haridwar town, Uttarakhand state, India. The objectives of the present study were to examine the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121, and PB-1509) and on the emission of greenhouse gases (CO2, CH4, and N2O), compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which enumerates the various types of water footprints of an agricultural entity from its production to its processing stages. Paddy, the most water-demanding staple crop of Uttarakhand state, displayed a high green water footprint value of 2966.538 m3/ton. Most of the wastewater-irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed reductions in production (6% and 3%, respectively) due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/ml nitrate), phosphorus (54.24 mg/ml), and potassium (9.78 mg/ml), thus rejuvenating the soil quality and requiring no external nutritional supplements. The percentage increases in greenhouse gas emissions under irrigation with treated municipal wastewater, compared to control plots, were 0.4%-8.6% (CH4), 1.1%-9.2% (CO2), and 0.07%-5.8% (N2O). The Sharbati variety displayed the maximum production (5.5 ton/ha) and emerged as the most resistant variety against pests and insects. The emission values of CH4, CO2, and N2O were 729.31 mg/m2/d, 322.10 mg/m2/d, and 400.21 mg/m2/d, respectively, under stagnant water conditions. This study highlighted the successful possibility of wastewater reuse for non-potable purposes, offering the potential to exploit this resource to replace or reduce the existing use of fresh water sources in the agricultural sector. Keywords: greenhouse gases, nutrients, water footprint, wastewater irrigation
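The green water footprint value quoted above combines crop water use and yield. Below is a minimal Python sketch of the standard calculation, following the usual water footprint methodology; the input numbers are back-calculated illustrations chosen to reproduce the order of the reported value, not the study's field data.

```python
# Green water footprint of a crop, in m^3 per ton:
#   CWU (m^3/ha) = 10 * ET_green (mm over the growing period)
#   WF_green (m^3/ton) = CWU / yield (ton/ha)
# The factor 10 converts mm of water depth to m^3 per hectare.

def green_water_footprint(et_green_mm, yield_ton_per_ha):
    cwu = 10.0 * et_green_mm          # crop water use, m^3/ha
    return cwu / yield_ton_per_ha     # m^3/ton

# Illustrative inputs: ~1630 mm green water use, 5.5 ton/ha (Sharbati's yield)
print(green_water_footprint(1630, 5.5))  # ~2963 m^3/ton, same order as reported
```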
Procedia PDF Downloads 321
157 Modeling Driving Distraction Considering Psychological-Physical Constraints
Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang
Abstract:
Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, model accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying the distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP), utilizing the queuing structure of the model to perform task invocation and switching for the operation and control of the vehicle under driver distraction. Under the QN-MHP assumption about the cognitive sub-network, server F is a structural bottleneck: later information must wait for earlier information to leave server F before it can be processed there, so the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task must pass through the visual perception sub-network, and their stimuli are asynchronous, a condition called stimulus onset asynchrony (SOA), which must therefore be considered when calculating the task-switching waiting time. In the case of auditory distraction, the auditory distraction task and the driving task do not compete for the server resources of the perceptual sub-network, and their stimuli can be treated as synchronized without considering the time difference in receiving them. Following the Theory of Planned Behavior (TPB) for drivers, this study uses risk entropy as the decision criterion for driver task switching. A logistic regression model with risk entropy as the independent variable determines whether the driver performs a distraction task, explaining the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then executes the classical Intelligent Driver Model. The proposed driving distraction model integrates the psychological cognitive process of a driver with physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study data (SH-NDS) to classify patterns of distracted behavior on different road facilities, obtaining three distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and that its performance is better than the traditional IDM model with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of an individual.
Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios. Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints
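The task-switching decision described above (a logistic regression on risk entropy) can be sketched compactly in Python; the coefficients below are hypothetical placeholders, not values calibrated on the SH-NDS data.

```python
# Sketch of the driver's distraction-engagement decision: a logistic model
# maps perceived risk entropy to the probability of performing the
# distraction task; lower perceived risk -> higher engagement probability.
import math

B0, B1 = 2.0, -1.5  # hypothetical coefficients (would be calibrated on SH-NDS)

def p_engage_distraction(risk_entropy):
    return 1.0 / (1.0 + math.exp(-(B0 + B1 * risk_entropy)))

def driver_switches_task(risk_entropy, threshold=0.5):
    return p_engage_distraction(risk_entropy) > threshold

for h in (0.5, 1.5, 3.0):
    print(h, round(p_engage_distraction(h), 2), driver_switches_task(h))
# As risk entropy rises, the probability of engaging the distraction task falls,
# so the simulated driver stays on the driving task in risky situations.
```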
Procedia PDF Downloads 91
156 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity; they exhibit strong fluctuations at all time scales and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals: it decomposes a signal in a self-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions) and thereby acts as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled the "Empirical Wavelet Transform" (EWT), which builds a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer wavelets. The heart of the method lies in segmenting the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore overcomes the mode-mixing problem. On the other hand, while the EWT technique is able to detect the frequencies involved in the original time series fluctuations, it does not allow the detected frequencies to be associated with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques: the spectral content of the IMFs is used to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by EMD, EWT, and EAWD on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences. Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
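To make the EAWD coupling concrete, the sketch below uses EMD (via the third-party PyEMD package, an assumed dependency; the authors' implementation is not specified) to obtain IMFs, estimates each IMF's dominant frequency from its Fourier spectrum, and derives segment boundaries for an EWT-style filter bank as midpoints between those frequencies. This is a simplified illustration of the idea, not the EAWD algorithm itself.

```python
# Sketch of the EAWD idea: let the IMFs' spectral content drive the
# segmentation of the Fourier spectrum that EWT needs.
import numpy as np
from PyEMD import EMD  # pip install EMD-signal (assumed dependency)

fs = 100.0                            # sampling frequency, Hz
t = np.arange(0, 20, 1 / fs)
# Toy non-stationary signal: two oscillating components plus a slow trend.
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 7.0 * t) + 0.1 * t

imfs = EMD().emd(x)                   # self-adaptive decomposition into IMFs

def dominant_freq(imf, fs):
    spec = np.abs(np.fft.rfft(imf))
    freqs = np.fft.rfftfreq(len(imf), 1 / fs)
    return freqs[np.argmax(spec)]

peaks = sorted(dominant_freq(imf, fs) for imf in imfs)
# EWT-style boundaries: midpoints between successive IMF dominant frequencies.
boundaries = [(a + b) / 2 for a, b in zip(peaks, peaks[1:])]
print("IMF dominant frequencies (Hz):", [round(p, 2) for p in peaks])
print("Filter-bank boundaries (Hz):", [round(b, 2) for b in boundaries])
```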
Procedia PDF Downloads 137
155 Technology for Biogas Upgrading with Immobilized Algae Biomass
Authors: Marcin Debowski, Marcin Zielinski, Miroslaw Krzemieniewski, Agata Glowacka-Gil, Paulina Rusanowska, Magdalena Zielinska, Agnieszka Cydzik-Kwiatkowska
Abstract:
Technologies for biogas upgrading are now perceived as a competitive alternative to the combustion of biogas for the production of electricity and heat. Biomethane allows broader application as an energy carrier than raw biogas: it can be used as fuel in internal combustion engines or introduced into the natural gas transmission network. There is therefore a need to search for innovative, economically and technically justified methods of biogas enrichment. The aim of this paper is to present a technological solution for biogas upgrading with immobilized algae biomass. A reactor for biogas upgrading with immobilized algae biomass can be used to remove CO₂ from biogas, flue gases, and waste gases, especially those coming from different industry sectors, e.g., the yeast production process in the food industry, biogas production systems, liquid and gaseous fuel combustion systems, and hydrocarbon processing technology. The technological assumptions of the presented technology were based on laboratory work and analyses that tested technological variants of biogas upgrading. Enrichment of biogas to a methane content of 90-97% informed the technological assumptions for an installation on a technical scale. The reactor for biogas upgrading with algae biomass has a significantly smaller volume than currently used CO₂ removal solutions. By its structure, the invention achieves a very high concentration of algae biomass through immobilization in capsules. This eliminates the lowering of the pH value, i.e., acidification of the environment in which the algae grow, that would otherwise result from introducing waste gases with a high CO₂ concentration. The system for introducing light into the algae capsules achieves a higher degree of light utilization, owing to lower losses from the absorption of light energy by water. Light from the light source is continuously supplied to the biomass of algae or cyanobacteria in the capsules by light tubes. The light source may be sunlight or a light generator of any wavelength from 300 nm to 800 nm. A portion of CO₂-containing gas, accumulated in the tank and conveyed by the pump, is periodically introduced into the housing of the photobioreactor tank. As the CO₂-containing gas is conveyed, it penetrates the algal biomass in the capsules through the outer envelope, displacing gaseous metabolic products from the algal biomass, which are discharged through the gas outlet duct. This helps eliminate the negative impact of these products on CO₂ binding processes. As a result of the cyclic dosing of CO₂-containing gases, the gaseous metabolic products of the algae are displaced and removed from the technological system. The technology for biogas upgrading with immobilized algae biomass is suitable for small biogas plants. Its advantages are high efficiency as well as a useful algae biomass, which can be used mainly as animal feed, as fertilizer, and in the power industry. The construction of the device allows the effective removal of carbon dioxide from gases with a high CO₂ concentration. Keywords: biogas, carbon dioxide, immobilised biomass, microalgae, upgrading
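The enrichment figures quoted above follow from a simple mass balance: removing CO₂ concentrates the remaining methane. A minimal Python sketch, with an assumed raw-biogas composition (not measurements from the installation):

```python
# Mass balance for biogas upgrading: removing a fraction of the CO2
# raises the CH4 share of the remaining gas.

def upgraded_ch4_fraction(ch4_in, co2_in, co2_removal):
    """Mole fractions in; CO2 capture efficiency in [0, 1]."""
    co2_out = co2_in * (1.0 - co2_removal)
    return ch4_in / (ch4_in + co2_out)

# Assumed raw biogas: 60% CH4, 40% CO2 (trace gases neglected).
for eff in (0.80, 0.90, 0.96):
    print(f"CO2 removal {eff:.0%} -> CH4 {upgraded_ch4_fraction(0.60, 0.40, eff):.1%}")
# ~88%, ~94%, ~97% CH4: capture efficiencies in this range reproduce the
# 90-97% methane content reported for the laboratory tests.
```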
Procedia PDF Downloads 157
154 Analytical, Numerical, and Experimental Research Approaches to Influence of Vibrations on Hydroelastic Processes in Centrifugal Pumps
Authors: Dinara F. Gaynutdinova, Vladimir Ya Modorsky, Nikolay A. Shevelev
Abstract:
The problem under research is that of unpredictable modes occurring in a two-stage centrifugal hydraulic pump as a result of hydraulic processes caused by vibrations of structural components. Numerical, analytical, and experimental approaches are considered. A hypothesis was developed that the unpredictable pressure decrease at the second stage of centrifugal pumps is caused by cavitation effects occurring upon vibration. To date, the problem has been studied both experimentally and theoretically. The theoretical study was conducted numerically and analytically. Hydroelastic processes in the dynamic "liquid – deformed structure" system were numerically modelled and analysed. Using the ANSYS CFX engineering analysis package and the computing capacity of a supercomputer, the cavitation parameters were shown to depend on the vibration parameters, and the domain of influence of vibration amplitudes and frequencies on the concentration of cavitation bubbles was formulated. The obtained numerical solution was verified using the CFM program package developed at PNRPU. The package is based on a system of partial differential equations of hyperbolic and elliptic type, solved by one of the finite-difference options – the particle-in-cell method – which defines the problem solution algorithm. The numerical solution was also verified analytically by model problem calculations using known analytical solutions for in-pipe piston movement and cantilever rod end-face impact. An infrastructure consisting of an experimental installation for research into fast hydrodynamic processes and a supercomputer, connected by a high-speed network, was created to verify the obtained numerical solutions. Physical experiments included the measurement, recording, processing, and analysis of data using a National Instruments signal measurement system and LabVIEW software. During the physical experiments, the end face of the model chamber oscillated and thus loaded the hydraulic volume. The loading frequency varied from 0 to 5 kHz. The length of the operating chamber varied from 0.4 to 1.0 m. Additional loads weighed from 2 to 10 kg. The liquid column was 0.4 to 1 m high. The liquid pressure history was recorded. The experiment showed the dependence of the forced system oscillation amplitude on the loading frequency at various values of the operating chamber's geometrical dimensions, liquid column height, and structure weight. Maximum pressure oscillation amplitudes (in the basic variant) were discovered at loading frequencies of approximately 1.5 kHz. These results match the analytical and numerical solutions in ANSYS and CFM. Keywords: computing experiment, hydroelasticity, physical experiment, vibration
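A minimal sketch of the kind of signal processing applied to the recorded pressure histories: locating the forced-oscillation resonance from an amplitude spectrum. The synthetic signal below stands in for the measured data; this is not the authors' LabVIEW acquisition code.

```python
# Sketch: find the dominant resonance in a recorded pressure history via FFT.
import numpy as np

fs = 20_000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic pressure trace: resonance near 1.5 kHz plus broadband noise,
# standing in for the experimentally measured signal.
rng = np.random.default_rng(1)
p = 2.0 * np.sin(2 * np.pi * 1500 * t) + 0.3 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(p)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
print(f"dominant oscillation at {peak:.0f} Hz")  # ~1500 Hz, as in the experiment
```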
Procedia PDF Downloads 244
153 The Quantitative SWOT-Analysis of Service Blood Activity of Kazakhstan
Authors: Alua Massalimova
Abstract:
A situation analysis of the Blood Service revealed that its strengths dominated over its weaknesses by 1.4 times, and its opportunities dominated over its threats by 1.1 times. It follows that by making timely use of its opportunities, the Service can reinforce its strengths and avoid threats. Priority directions arising from the analysis include the use of subjective factors, such as the personal management capacity of Blood Center managers within the legal scope of administrative decisions, and the mobilization of a stable staff under general market conditions. Indicators of the Blood Service of Kazakhstan were studied retrospectively for the period 2011-2015. Strengths of the Blood Service of RK (Ps = 4.5): 1) donation rates per 1,000 people are higher than in some CIS countries (Russia: 14; Kazakhstan: 17); 2) a functioning scientific center of transfusiology; 3) the legal possibility of additional financing of blood centers in the form of paid services; 4) the absence of competitors; 5) training in the specialty of transfusiology; 6) stable management staff of blood centers with a high level of competence; 7) an increase in the incidence of conditions requiring transfusion therapy (oncohematology); 8) equipment upgrades; 9) the opening of a reference laboratory; 10) growth in the proportion of issued high-quality blood components; 11) the governmental organization 'Drop of Life'; 12) a functioning bone marrow register; 13) an HLA laboratory equipped with modern equipment; 14) high categorization of mid-level medical workers; 15) availability of its own specialized scientific journal; 16) a vivarium. Weaknesses (Ps = 3.5): 1) incomplete equipping of blood centers and blood transfusion cabinets according to standards; 2) a low share of paid services; 3) low categorization of doctors; 4) high staff turnover; 5) low scientific potential of industrial and clinical transfusiology; 6) low wages; 7) slight growth in harvested donor blood; 8) weak continuity with blood transfusion offices; 9) lack of promotional work; 10) a formally functioning Transfusion Association; 11) the absence of scientific laboratories; 12) a high standard deviation from the national average for donations. Opportunities (Ps = 2.7): 1) international grants; 2) organization of international seminars on clinical transfusiology; 3) cross-sectoral cooperation; 4) increased scientific research in clinical transfusiology; 5) reduction of the share of donations unsuitable for transfusion and processing; 6) stronger marketing management in the development of fee-based services; 7) advertising of paid services; 8) expanded publishing of teaching aids; 9) staff team-building. Threats (Ps = 2.1): 1) an increase in staff turnover; 2) the risk of litigation; 3) reduced use of blood products driven by evidence-based medicine; 4) regression of scientific capacity; 5) weak organization of marketing; 6) weak transfusiology marketing; 7) a reduction in the quality of the evidence base for transfusions. Keywords: blood service, healthcare, Kazakhstan, quantitative SWOT analysis
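The dominance ratios cited at the start of the abstract come from comparing aggregate factor scores. A minimal sketch of that arithmetic using the Ps values reported above; note that a simple ratio of these headline scores gives about 1.3 in both cases, so the paper's 1.4 and 1.1 presumably derive from a different aggregation of the underlying item-level scores.

```python
# Quantitative SWOT: compare aggregate scores of internal factors
# (strengths vs weaknesses) and external factors (opportunities vs threats).
ps = {"strengths": 4.5, "weaknesses": 3.5, "opportunities": 2.7, "threats": 2.1}

internal = ps["strengths"] / ps["weaknesses"]    # S/W dominance
external = ps["opportunities"] / ps["threats"]   # O/T dominance
print(f"S/W dominance: {internal:.2f}, O/T dominance: {external:.2f}")
# Both ratios exceed 1, i.e., strengths and opportunities dominate, which is
# the basis for the strategy of using opportunities to reinforce strengths.
```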
Procedia PDF Downloads 228
152 Long Non-Coding RNAs Mediated Regulation of Diabetes in Humanized Mouse
Authors: Md. M. Hossain, Regan Roat, Jenica Christopherson, Colette Free, Zhiguang Guo
Abstract:
Long noncoding RNA (lncRNA)-mediated post-transcriptional gene regulation and the associated epigenetic landscapes have been shown to be involved in many human diseases. However, their role in diabetes, through governing islet β-cell function and survival, remains to be elucidated. Due to technical and ethical constraints, it is difficult to study their role in β-cell function and survival in humans under in vivo conditions. In this study, humanized mice were developed by transplanting human pancreatic islets under the kidney capsule of NOD.SCID mice, and β-cell death leading to a diabetic condition was induced to study lncRNA-mediated regulation. For this, human islets from 3 donors (3,000 IEQ, purity > 80%) were transplanted under the kidney capsule of STZ-induced diabetic NOD.SCID mice. After at least 2 weeks of normoglycemia, lymphocytes from diabetic NOD mice were adoptively transferred, and islet grafts were collected once blood glucose reached > 200 mg/dl. RNA from human donor islets and from islet grafts of humanized mice with either adoptive lymphocyte transfer (ALT) or PBS control (CTL) was ribodepleted; barcoded fragment libraries were constructed and sequenced on the Ion Proton sequencer. lncRNA expression in isolated human islets and in islet grafts from humanized mice with and without induced β-cell death was investigated, along with its regulation of human islet function in vitro under glucose challenge, cytokine-mediated inflammation, and induced apoptotic conditions. Out of 3,155 detected lncRNAs, 299 that were highly expressed in islets were significantly downregulated and 224 upregulated in ALT compared to CTL. Most of these were found to be collocated within 5 kb upstream and 1 kb downstream of 788 up- and 624 down-regulated mRNAs. Genomic Regions Enrichment of Annotations analysis revealed that the deregulated and collocated genes are related to pancreatic endocrine development; insulin synthesis, processing, and secretion; pancreatitis; and diabetes. Many of them, located within enhancer domains for islet-specific gene activity, are associated with the deregulation of known islet/β-cell-specific transcription factors and genes that are important for β-cell differentiation, identity, and function. RNA sequencing analysis thus revealed aberrant lncRNA expression associated with deregulated mRNAs involved in β-cell function as well as in molecular pathways related to diabetes. A distinct set of candidate lncRNA isoforms was identified as highly enriched in and specific to human islets; these are deregulated in human islets from donors with different BMIs and with type 2 diabetes. These RNAs show an interesting regulation in cultured human islets under glucose stimulation and with cytokine-induced β-cell death. Aberrant expression of these lncRNAs was also detected in exosomes from the media of islets cultured with cytokines. The results of this study suggest that islet-specific lncRNAs are deregulated in human islets undergoing β-cell death and are hence important in diabetes. These lncRNAs might be important for human β-cell function and survival and could thus serve as biomarkers and novel therapeutic targets for diabetes. Keywords: β-cell, humanized mouse, pancreatic islet, lncRNAs
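The collocation rule used above (lncRNAs within 5 kb upstream and 1 kb downstream of a deregulated mRNA) can be sketched as a simple strand-aware interval check; the coordinates below are invented placeholders, not positions from the sequencing data.

```python
# Sketch of the collocation test: does a lncRNA locus fall within 5 kb
# upstream and 1 kb downstream of a (strand-aware) mRNA gene body?
UP, DOWN = 5_000, 1_000

def collocated(lnc_start, lnc_end, gene_start, gene_end, strand):
    if strand == "+":
        lo, hi = gene_start - UP, gene_end + DOWN
    else:  # on the minus strand, upstream lies at higher coordinates
        lo, hi = gene_start - DOWN, gene_end + UP
    return lnc_start <= hi and lnc_end >= lo  # interval overlap

# Hypothetical loci on the same chromosome.
print(collocated(11_500, 12_200, 15_000, 20_000, "+"))  # True: ~2.8 kb upstream
print(collocated(22_000, 23_000, 15_000, 20_000, "+"))  # False: >1 kb downstream
```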
Procedia PDF Downloads 164
151 The Future Control Rooms for Sustainable Power Systems: Current Landscape and Operational Challenges
Authors: Signe Svensson, Remy Rey, Anna-Lisa Osvalder, Henrik Artman, Lars Nordström
Abstract:
The electric power system is undergoing significant changes. Its operation and control are becoming partly modified, more multifaceted, and more automated, so supplementary operator skills might be required. This paper discusses the operational challenges developing in future power system control rooms, posed by the evolving landscape of sustainable power systems, driven in turn by the shift towards electrification and renewable energy sources. Based on a literature review, followed by interviews and a comparison with related domains that share similar characteristics, a descriptive analysis was performed from a human factors perspective. The analysis is meant to identify trends, relationships, and challenges. A power control domain taxonomy includes a temporal domain (planning and real-time operation) and three operational domains within the power system (generation, switching, and balancing). Within each operational domain, there are different control actions, either in the planning stage or in real-time operation, that affect the overall operation of the power system. In addition to the temporal dimension, the control domains are divided in space between a multitude of different actors distributed across many different locations. A control room is a central location where different types of information are monitored and controlled, alarms are responded to, and deviations are handled by the control room operators. The operators' competencies, teamwork skills, and team shift patterns, as well as control system designs, are all important factors in ensuring efficient and safe electricity grid management. As the power system evolves with sustainable energy technologies, challenges arise. Questions are raised as to whether today's operators' tacit knowledge, experience, and operational skills are sufficient to make constructive decisions in modified and new control tasks, especially during disturbed operations or abnormalities. Which new skills need to be developed in planning and real-time operation to provide efficient generation and delivery of energy through the system? How should user interfaces be developed to assist operators in processing the increasing amount of information? Are some skills at risk of being lost when the systems change? How should the physical environment and the collaborations between different stakeholders within and outside the control room develop to support operator control? To conclude, the system change will provide many benefits related to electrification and renewable energy sources, but it is important to address the operators' challenges as complexity increases. Control tasks will be modified, and additional operator skills will be needed to perform efficient and safe operations. The whole human-technology-organization system also needs to be considered, including the physical environment, the technical aids and information systems, the operators' physical and mental well-being, and the social and organizational systems. Keywords: operator, process control, energy system, sustainability, future control room, skill
Procedia PDF Downloads 95
150 Characterization of Surface Microstructures on Bio-Based PLA Fabricated with Nano-Imprint Lithography
Authors: D. Bikiaris, M. Nerantzaki, I. Koliakou, A. Francone, N. Kehagias
Abstract:
In the present study, the formation of structures in poly(lactic acid) (PLA) has been investigated with respect to producing areas of regular surface features with dimensions comparable to those of cells or biological macromolecules. Nanoimprint lithography, a method of pattern replication in polymers, has been used to produce features ranging from tens of micrometers, covering areas up to 1 cm², down to hundreds of nanometers. Both micro- and nanostructures were faithfully replicated. Potentially, PLA has wide uses within biomedical fields, from implantable medical devices, including screws and pins, to membrane applications such as wound covers, and even as an injectable polymer for, for example, lipoatrophy. The possibility of fabricating structured PLA surfaces, with structures of the dimensions associated with cells or biological macromolecules, is of interest in fields such as cellular engineering. Imprint-based technologies have demonstrated the ability to selectively imprint polymer films over large areas, resulting in 3D imprints over flat, curved, or pre-patterned surfaces. Here, we compare non-patterned PLA film with PLA film nano-patterned by nanoimprint lithography (NIL). A nanostructured silicon stamp (provided by the Nanotypos company) having positive and negative protrusions was used to pattern PLA films by means of thermal NIL. The polymer film was heated to 40-60°C above its Tg and embossed at a pressure of 60 bar for 3 min. The stamp and substrate were demolded at room temperature. Scanning electron microscope (SEM) images showed good replication fidelity of the Si stamp. Contact-angle measurements suggested that positive microstructuring of the polymer (where features protrude from the polymer surface) produced a more hydrophilic surface than negative microstructuring. The ability to structure the surface of poly(lactic acid) is allied to the polymer's post-processing transparency and proven biocompatibility. Films produced in this way were also shown to enhance the aligned attachment and proliferation of Wharton's Jelly mesenchymal stem cells, leading to the observed contact guidance of growth. The attachment patterns of some bacteria highlighted that the nano-patterned PLA structure can reduce the propensity of bacteria to attach to the surface, with greater bactericidal activity demonstrated against Staphylococcus aureus cells. These biocompatible, micro- and nano-patterned PLA surfaces could be useful for polymer–cell interaction experiments at dimensions at, or below, that of individual cells. Indeed, post-fabrication modification of the microstructured PLA surface with materials such as collagen (which can further reduce the hydrophobicity of the surface) will extend the range of applications, possibly through the use of PLA's inherent biodegradability. Further study is being undertaken to examine whether these structures promote cell growth on the polymer surface. Keywords: poly(lactic acid), nano-imprint lithography, anti-bacterial properties, PLA
Procedia PDF Downloads 330
149 Development of Alternative Fuels Technologies for Transportation
Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej
Abstract:
Currently, automotive transport is powered almost exclusively by hydrocarbon-based fuels. As hydrocarbon fuel consumption increases, quality parameters are being tightened for the sake of a clean environment, and at the same time efforts are being undertaken to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are: to increase vehicle efficiency, to reduce environmental impact, to cut greenhouse gas emissions, and to save limited oil resources. Significant progress has been made in developing alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME), and biodiesel. In addition, the biggest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol make excellent fuels for spark-ignition engines. Their advantages are a high antiknock value, which determines their application as an additive (10%) to unleaded petrol, and the relative purity of the exhaust gases produced. Ethanol is produced by the distillation of plant products whose use as fuel rather than food can be irrational. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass, and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plants releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such arrangements have been proposed in the literature; going through intermediate stages is still inevitable. The most popular route is conversion to methanol, which can be processed further to dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol production uses natural gas as a raw material but requires expensive and advanced production processes. In relation to pollutant emissions, an optimal vehicle fuel is LPG, which is used in many countries as an engine fuel. LPG production is inextricably linked with the production and processing of oil and gas, of which it represents a small percentage; its potential as an alternative to traditional fuels is therefore proportionately limited. Biogas may also be an excellent engine fuel; however, it is subject to the same limitations as ethanol, since similar production processes and raw materials are involved. The most essential fuel in the campaign to protect the environment against pollution is natural gas, which may be used either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production by steam reforming. Hydrogen can be used as a basic feedstock for the chemical industry, an important raw material in refinery processes, as well as a fuel for vehicle transportation. Natural gas used as CNG represents an excellent compromise: the technology is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative energy sources that are harmless to the environment. For these reasons, CNG as a fuel attracts considerable interest worldwide. Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)
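For reference, the hydrogen route mentioned above proceeds by the standard steam-reforming and water-gas-shift reactions; this is textbook stoichiometry, not detail taken from the paper itself.

```latex
% Steam methane reforming (strongly endothermic), then the water-gas shift:
\begin{align}
\mathrm{CH_4 + H_2O} &\rightarrow \mathrm{CO + 3\,H_2} \\
\mathrm{CO + H_2O} &\rightarrow \mathrm{CO_2 + H_2}
\end{align}
% Net: one mole of methane yields up to four moles of hydrogen.
```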
Procedia PDF Downloads 181