Search results for: mechatronic engineering
82 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport
Authors: Aamir Shahzad, Mao-Gang He
Abstract:
Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers, and for several reasons very few results are available for this significant property. The lack of information on the thermal conductivity of dense and complex liquids at parameters relevant to industrial developments is a major barrier to quantitative knowledge of heat flux flow from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable knowledge of transport data is also important for an optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices), and, in particular, the provision of precise data for the parameters of heat, mass, and momentum transport is required. One of the promising computational techniques, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is overviewed here with special emphasis on its application to transport problems of complex liquids. The proposed work is particularly motivated by the aim of modifying, for the first time, the heat conduction equations that lead to polynomial velocity and temperature profiles into an algorithm for investigating transport properties and their nonlinear behavior in NICDPLs. The aim of the proposed work is to implement a nonequilibrium molecular dynamics simulation (NEMDS) algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble ≡ NVT). The output steps will be developed between 3.0×10⁵/ωₚ and 1.5×10⁵/ωₚ simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters and that the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier known simulation data and generally differ from the earlier plasma λ₀ values by 2%–20%, depending on Γ and κ. It has been shown that the obtained results at the normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results with fast convergence and small size effects over a wide range of plasma states.
Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow
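A note on the method: the screened-Coulomb (Yukawa) interaction and the isokinetic (Gaussian-thermostat) constraint mentioned above can be illustrated with a minimal sketch in reduced units. This is not the authors' HNEMD code; the functions, unit conventions, and sample values below are assumptions for illustration only.

```python
import numpy as np

def yukawa_pair(r, kappa):
    """Reduced Yukawa (screened Coulomb) pair potential and force magnitude.

    r     : interparticle distance in units of the Wigner-Seitz radius a
    kappa : screening parameter (a divided by the Debye length)
    Energy is in units of Q^2 / (4*pi*eps0*a); the force follows from -d(phi)/dr.
    """
    phi = np.exp(-kappa * r) / r
    force = np.exp(-kappa * r) * (1.0 + kappa * r) / r**2
    return phi, force

def gaussian_thermostat_rescale(velocities, target_T):
    """Isokinetic velocity rescaling toward a target reduced temperature,
    standing in for the Gaussian thermostat constraint of the NVT ensemble."""
    kinetic_per_particle = 0.5 * np.sum(velocities**2) / len(velocities)
    current_T = (2.0 / 3.0) * kinetic_per_particle        # 3D, unit mass, kB = 1
    return velocities * np.sqrt(target_T / current_T)

# Illustration: pair interaction at r = 1.2 a for kappa = 2, and one thermostat step
print(yukawa_pair(1.2, 2.0))
v = np.random.default_rng(0).normal(size=(100, 3))
print(gaussian_thermostat_rescale(v, target_T=1.0 / 150.0).shape)  # Gamma = 150 -> T* = 1/150
```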
Procedia PDF Downloads 274
81 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials
Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina
Abstract:
The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behavior obeys elasticity. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate- and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed; for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface. Thus, unlike classical yield surface elastoplasticity, plastic states are not restricted only to those lying on a surface. Elastoplastic bounding surface models have been improved; however, there is still a need to improve their capability to simulate the response of anisotropically consolidated cohesive soils, especially the response in extension tests. Thus, in this work an improved constitutive model that can more accurately predict the diverse stress-strain phenomena exhibited by cohesive soils was developed; in particular, an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy employing a non-associative flow rule. The numerical implementation of the model in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. After testing the model by simulating the response of cohesive soils through extensive comparisons of model simulations to experimental data, it has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils, for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials
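As an illustration of the integration scheme mentioned above, the sketch below shows a generic one-step trapezoidal stress update. It is not the authors' implementation: the tangent-stiffness callable and the toy elastic matrix are placeholders chosen only to show the call pattern.

```python
import numpy as np

def trapezoidal_stress_update(stress, strain_inc, tangent_stiffness):
    """One-step trapezoidal stress update (illustrative only).

    stress            : current stress vector (Voigt notation)
    strain_inc        : strain increment vector
    tangent_stiffness : callable sigma -> tangent stiffness matrix D(sigma)
    """
    D_start = tangent_stiffness(stress)                  # tangent at the start of the step
    stress_trial = stress + D_start @ strain_inc         # forward-Euler predictor
    D_end = tangent_stiffness(stress_trial)               # tangent at the trial end state
    return stress + 0.5 * (D_start + D_end) @ strain_inc  # averaged (trapezoidal) update

# Toy example with a constant (elastic) tangent, just to show the call pattern
D_elastic = np.eye(6) * 1.0e4   # placeholder stiffness, kPa
sigma = np.zeros(6)
d_eps = np.array([1e-4, 0, 0, 0, 0, 0])
print(trapezoidal_stress_update(sigma, d_eps, lambda s: D_elastic))
```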
Procedia PDF Downloads 315
80 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods, the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, that is, a residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus designers often apply the adopted 7 mm rule also for parametric fire exposure. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Further on, material non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to occur at a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and also experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
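For readers unfamiliar with the reduced cross-section method discussed above, the sketch below shows the usual form of the effective-depth calculation (notional charring depth plus the zero strength layer). The default charring rate and the 7 mm layer reflect the standard-fire values in EN 1995-1-2, which are exactly the quantities the study argues need revisiting for parametric fires; the numbers in the example are illustrative assumptions.

```python
def effective_depth_reduction(t_min, beta_n=0.8, d0=7.0):
    """Effective cross-section reduction per the reduced cross-section method.

    t_min  : fire exposure time in minutes
    beta_n : notional charring rate in mm/min (0.8 mm/min is a common value for
             solid softwood under standard fire; parametric fires differ)
    d0     : zero strength layer thickness in mm (7 mm in EN 1995-1-2 for
             standard fire; this study argues the value needs revisiting)
    """
    d_char = beta_n * t_min              # notional charring depth
    k0 = min(t_min / 20.0, 1.0)          # zero strength layer builds up over the first 20 min
    return d_char + k0 * d0              # depth to strip from each exposed face

# Example: residual width of a 200 mm wide beam exposed on two sides for 30 minutes
b = 200.0
print(b - 2 * effective_depth_reduction(30.0))
```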
Procedia PDF Downloads 168
79 Ultrasound Disintegration as a Potential Method for the Pre-Treatment of Virginia Fanpetals (Sida hermaphrodita) Biomass before Methane Fermentation Process
Authors: Marcin Dębowski, Marcin Zieliński, Mirosław Krzemieniewski
Abstract:
As methane fermentation is a complex series of successive biochemical transformations, its subsequent stages are determined, to a varying extent, by physical and chemical factors. A specific state of equilibrium settles in the functioning fermentation system between the environmental conditions and the rate of the biochemical reactions and the products of successive transformations. Among the physical factors that influence the effectiveness of methane fermentation transformations, the key significance is ascribed to temperature and the intensity of biomass agitation. Among the chemical factors, the significant ones are the pH value, the type and availability of the culture medium (to put it simply, the C/N ratio), as well as the presence of toxic substances. One of the important elements which influence the effectiveness of methane fermentation is the pre-treatment of organic substrates and the mode in which the organic matter is made available to anaerobes. Out of all known and described methods for organic substrate pre-treatment before the methane fermentation process, ultrasound disintegration is one of the most interesting technologies. Investigations of the ultrasound field and the use of installations operating in existing systems result principally from the very wide and universal technological possibilities offered by the sonication process. This physical factor may induce deep physicochemical changes in ultrasonicated substrates that are highly beneficial from the viewpoint of methane fermentation processes. In this case, a special role is ascribed to the disintegration of biomass that is subsequently subjected to methane fermentation. Once cell walls are damaged, cytoplasm and cellular enzymes are released. The released substances – either in dissolved or colloidal form – are immediately available to anaerobic bacteria for biodegradation. To ensure the maximal release of organic matter from dead biomass cells, disintegration processes aim to achieve a particle size below 50 μm. It has been demonstrated in many research works, and in systems operating at the technical scale, that immediately after substrate ultrasonication the content of organic matter (characterized by the COD, BOD₅ and TOC indices) increases in the dissolved phase of the sedimentation water. This phenomenon points to the immediate sonolysis of solid substances contained in the biomass and to the release of cell material, and consequently to the intensification of the hydrolytic phase of fermentation. It results in a significant reduction of the fermentation time and an increased effectiveness of production of the gaseous metabolites of anaerobic bacteria. Because the disintegration of Virginia fanpetals biomass via ultrasound, applied in order to intensify its conversion, is a novel technique, it is often underestimated by operators of agricultural biogas plants. It has, however, many advantages that have a direct impact on its technological and economic superiority over the biomass conversion methods applied thus far. As of now, ultrasound disintegrators for biomass conversion are not mass-produced but are made by specialized groups in scientific or R&D centers. Therefore, their quality and effectiveness are to a large extent determined by their manufacturers’ knowledge and skills in the fields of acoustics and electronic engineering.
Keywords: ultrasound disintegration, biomass, methane fermentation, biogas, Virginia fanpetals
Procedia PDF Downloads 367
78 The Influence of Human Movement on the Formation of Adaptive Architecture
Authors: Rania Raouf Sedky
Abstract:
Adaptive architecture relates to buildings specifically designed to adapt to their residents and their environments. To design a biologically adaptive system, observing how living creatures in nature constantly adapt to different external and internal stimuli can be a great inspiration. The issue is not just how to create a system that is capable of change but also how to find the quality of change and determine the incentive to adapt. The research examines the possibilities of transforming spaces using the human body as an active tool. The research also aims to design and build an effective dynamic structural system that can be applied on an architectural scale and to integrate these elements into the creation of a new adaptive system that allows us to conceive a new way to design, build and experience architecture in a dynamic manner. The main objective was to address the possibility of a reciprocal transformation between the user and the architectural element, so that the architecture can adapt to the user as the user adapts to the architecture. The motivation is the desire to address the psychological benefits of an environment that can respond to, and thus empathize with, human emotions through its ability to adapt to the user. Adaptive affiliations of kinematic structures have been discussed in architectural research for more than a decade, and these issues have proven their effectiveness in developing kinematic structures, responsive and adaptive, and their contribution to 'smart architecture'. A wide range of strategies has been used in building complex kinetic and robotic system mechanisms to achieve convertibility and adaptability in engineering and architecture. One of the main contributions of this research is to explore how the physical environment can change its shape to accommodate different spatial displays based on the movement of the user’s body. The main focus is on the relationship between materials, shape, and interactive control systems. The intention is to develop a scenario where the user can move and the structure interacts without any physical contact. This soft language of shifting form and interactive control technology will provide new possibilities for enriching human-environment interactions. How can we imagine a space that can construe and understand its users through physical gestures and visual expressions, and respond accordingly? How can we imagine a space whose interaction depends not only on preprogrammed operations but on real-time feedback from its users? The research also raises some important questions for the future. What would be the appropriate structure to show physical interaction with the dynamic world? This study concludes with a strong belief in the future of responsive motor structures. We envision that they will develop the current structure and radically change the way spaces are tested. These structures have obvious advantages in terms of energy performance and the ability to adapt to the needs of users. The research highlights the interface between remote sensing and a responsive environment to explore the possibility of an interactive architecture that adapts to and responds to user movements.
Keywords: adaptive architecture, interactive architecture, responsive architecture, tensegrity
Procedia PDF Downloads 156
77 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How
Authors: Rachel Barker
Abstract:
The exponential velocity of the truly knowledge-intensive world today has increasingly bombarded organizations with unfathomable challenges. Hence organizations are introduced to strange lexicons of descriptors belonging to a new paradigm of who, what and how knowledge at individual and organizational levels should be managed. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at the individual level could benefit knowledge use at the collective level to ensure added value. The research problem is that a lack of research exists on measuring knowledge sharing through a multi-layered structure of ideas with, at its foundation, philosophical assumptions to support presuppositions and commitment, which requires actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measuring knowledge sharing in emerging knowledge organizations. The research question is that, despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations due to a lack of knowledge on who, what and how it should be done. The main premise of this research is based on the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge and where learning becomes the norm. The theoretical constructs were derived from and based on the three components of knowledge management theory, namely the technical, communication and human components, where it is suggested that this knowledge infrastructure could ensure effective management. While it is realised that it might be a little problematic to implement and measure all relevant concepts, this paper presents the effect of eight critical success factors (CSFs), namely: organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures and innovation. These CSFs have been identified based on a comprehensive literature review of existing research and tested in a new framework adapted from the four perspectives of the balanced scorecard (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa which relies on knowledge sharing to ensure its competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument, improve the quality of items and correct the wording of issues. Through analysis of the surveys collected, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. Reliability of the instrument was calculated by Cronbach’s α for the two sections of the instrument, on organizational and individual levels. The construct validity was confirmed by using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management.
In addition, they realised the importance of consolidating their knowledge assets to create value that is sustainable over time.
Keywords: innovation, intellectual capital, knowledge sharing, performance measures
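To illustrate the reliability statistic mentioned above, the following sketch computes Cronbach's alpha from a respondents-by-items score matrix. The response values are invented for illustration and are not data from the study.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents x items score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items in the scale
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Example with made-up 5-point Likert responses (4 respondents, 3 items)
responses = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]
print(round(cronbach_alpha(responses), 3))
```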
Procedia PDF Downloads 195
76 Innovation in PhD Training in the Interdisciplinary Research Institute
Authors: B. Shaw, K. Doherty
Abstract:
The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines it can seem as though there are enormous differences of research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within, often unacknowledged, histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding of research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multi-disciplinary context. Instead of presenting results at a conference, research students were tasked to articulate their method of inquiry. A working party of students from across disciplines had to design a conference call, visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings or only described method briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. Each session involved presentations by visual artists, communications students and computing researchers with inter-disciplinary dialogue, facilitated by alumni Chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by communications and art and design students, and art and design students gained understanding from the greater ‘distance’ and emphasis on application that computing students applied to their subjects.
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.
Keywords: interdisciplinary, method, research student, training
Procedia PDF Downloads 206
75 Investigation of Resilient Circles in Local Community and Industry: Waju-Traditional Culture in Japan and Modern Technology Application
Authors: R. Ueda
Abstract:
Today, global society is seeking resilient partnerships among local organizations and individuals, which realize multi-stakeholder relationships. Although this is proposed by the modern global framework of sustainable development, it is conceivable that such affiliation can be found in the traditional local community in Japan, and that this traditional spirit is tacitly sustained in the modern context of disaster mitigation in society and the economy. This research therefore aims to clarify and analyze the implications for the global world through actual case studies. Regional and urban resilience is the ability of multi-stakeholders to cooperate flexibly and to adapt in response to changes in the circumstances caused by disasters, but there are various conflicts affecting the coordination of disaster relief measures. These conflicts arise not only from a lack of communication and an insufficient network, but also from the difficulty of jointly drawing a common context from fragmented information. This is because of the weakness of our modern engineering, which focuses on the maintenance and restoration of individual systems. Here, local ‘circles’ holistically include the local community and interact periodically. Focusing on examples of resilient organizations and wisdom created in communities, what can be seen throughout history is a virtuous cycle where the information and the knowledge are structured, the context to be adapted becomes clear, and an adaptation at a higher level is made possible, by which the collaboration between organizations is deepened and expanded. The wisdom of solid and autonomous disaster prevention formed by the historical community called ‘Waju’ – an area surrounded by a circle embankment to protect the settlement from floods – lives on in the government efforts of the coastal industrial island of today. Industrial companies there collaborate to create a circle including a common evacuation space, road access improvement and infrastructure recovery. These days, people here adopt new interface technology. Large-scale AR (augmented reality) for more than a hundred people expresses detailed tsunami and liquefaction hazards. Common experiences of the major disaster space and circles of mutual discussion are reinforcing resilience. The spirit of collaboration lies at the center of the circle. A consistent key point is this virtuous cycle, by which the collaboration between organizations is deepened and expanded. This writer believes that both self-governing human organizations and the societal implementation of technical systems are necessary. Infrastructure should be autonomously instituted by associations of companies and other entities in industrial areas working closely with local governments. To develop advanced disaster prevention and multi-stakeholder collaboration, partnerships among industry, government, academia and citizens are important.
Keywords: industrial recovery, multi-stakeholders, traditional culture, user experience, Waju
Procedia PDF Downloads 113
74 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis
Authors: Iman Farasat, Howard M. Salis
Abstract:
Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior towards a desired behavior. While such models assume that each circuit component’s function is modular and independent, even small changes in a circuit (e.g. a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit’s function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model’s accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). Model predictions correctly accounted for how these 8 factors control a promoter’s transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter’s output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement
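The statistical-thermodynamic framing described above can be illustrated with a minimal equilibrium-occupancy sketch for a promoter with one competing repressor. This is not the authors' model or the definition of the Pt number; the copy numbers, free energies, and background-site count are assumed values for illustration only.

```python
import math

def promoter_activity(n_rnap, n_tf, dg_rnap, dg_tf, n_ns=4.6e6, kT=0.593):
    """Equilibrium probability that RNA polymerase occupies a promoter in a
    simple thermodynamic model with one competing (mutually exclusive) repressor.

    n_rnap, n_tf   : copy numbers of RNAP and the transcription factor
    dg_rnap, dg_tf : binding free energies to the promoter/operator (kcal/mol)
    n_ns           : number of non-specific genomic background sites
    kT             : thermal energy near room temperature, in kcal/mol
    """
    w_rnap = (n_rnap / n_ns) * math.exp(-dg_rnap / kT)   # Boltzmann weight, RNAP bound
    w_tf = (n_tf / n_ns) * math.exp(-dg_tf / kT)          # Boltzmann weight, repressor bound
    return w_rnap / (1.0 + w_rnap + w_tf)

# Example: fold-change in promoter activity when repressor copy number rises 10x
low = promoter_activity(2000, 50, -5.0, -14.0)
high = promoter_activity(2000, 500, -5.0, -14.0)
print(low / high)
```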
Procedia PDF Downloads 473
73 Rainwater Management: A Case Study of Residential Reconstruction of Cultural Heritage Buildings in Russia
Authors: V. Vsevolozhskaia
Abstract:
Since 1990, energy-efficient development concepts have constituted both a turning point in civil engineering and a challenge for an environmentally friendly future. Energy and water currently play an essential role in the sustainable economic growth of the world in general and Russia in particular: the efficiency of the water supply system is the second most important parameter for energy consumption according to the British assessment method, while the water-energy nexus has been identified as a focus for accelerating sustainable growth and developing effective, innovative solutions. The activities considered in this study were aimed at organizing and executing the renovation of the property in residential buildings located in St. Petersburg, specifically buildings with local or federal historical heritage status under the control of the St. Petersburg Committee for the State Inspection and Protection of Historic and Cultural Monuments (KGIOP) and UNESCO. Even after reconstruction, these buildings still fall into energy efficiency class D. Russian Government Resolution No. 87 on the structure and required content of project documentation contains a section entitled ‘Measures to ensure compliance with energy efficiency and equipment requirements for buildings, structures, and constructions with energy metering devices’. Mention is made of the need to install collectors and meters, which only calculate energy, neglecting the main purpose: to make buildings more energy-efficient, potentially even energy efficiency class A. The least-explored aspects of energy-efficient technology in the Russian Federation remain the water balance and the possibility of implementing rain and meltwater collection systems. These modern technologies are used exclusively for new buildings due to the lack of a government directive to create project documentation during the planning of major renovations and reconstruction that would include the collection and reuse of rainwater. Energy-efficient technology for rain and meltwater collection is currently applied only to new buildings, even though research has proved that using rainwater is safe and offers a huge step forward in terms of eco-efficiency analysis and water innovation. Where conservation is mandatory, making changes to protected sites is prohibited. In most cases, the protected site is the cultural heritage building itself, including the main walls and roof. However, the installation of a second water supply system and collection of rainwater would not affect the protected building itself. Water efficiency in St. Petersburg is currently considered only from the point of view of the installation that regulates the flow of the pipeline shutoff valves. The development of technical guidelines for the use of grey- and/or rainwater to meet the needs of residential buildings during reconstruction or renovation is not yet complete. The ideas for water treatment, collection and distribution systems presented in this study should be taken into consideration during the reconstruction or renovation of residential cultural heritage buildings under the protection of KGIOP and UNESCO. The methodology applied also has the potential to be extended to other cultural heritage sites in northern countries and lands with an average annual rainfall of over 600 mm to cover average toilet-flush needs.
Keywords: cultural heritage, energy efficiency, renovation, rainwater collection, reconstruction, water management, water supply
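A simple supply-versus-demand check underlies the 600 mm rainfall threshold mentioned above. The sketch below shows that kind of calculation; the roof area, runoff coefficient, occupancy, and flush volumes are assumed values, not figures from the study.

```python
def annual_rainwater_yield_litres(roof_area_m2, rainfall_mm, runoff_coeff=0.8):
    """Annual roof catchment yield: area (m2) x rainfall (mm) x runoff coefficient.
    One millimetre of rain on one square metre equals one litre."""
    return roof_area_m2 * rainfall_mm * runoff_coeff

def annual_flush_demand_litres(occupants, flushes_per_day=4, litres_per_flush=6):
    """Annual toilet-flushing demand for a residential building."""
    return occupants * flushes_per_day * litres_per_flush * 365

# Example with assumed values: 400 m2 roof, 650 mm/yr rainfall, 15 residents
supply = annual_rainwater_yield_litres(400, 650)
demand = annual_flush_demand_litres(15)
print(supply, demand, supply >= demand)
```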
Procedia PDF Downloads 92
72 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine
Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski
Abstract:
The research in the field of aircraft internal combustion engines is currently driven by the need to decrease fuel consumption and CO2 emissions while maintaining the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural and recreational aviation. Technically, they mostly remain at a pre-war level of knowledge of the theory of operation, design and manufacturing technology, especially when compared to the high level of development of automotive engines. Typically, these engines are fed by carburetors of quite primitive construction. At present, due to environmental requirements and dealing with climate change, it is beneficial to develop aircraft piston engines and adopt the achievements of automotive engineering such as computer-controlled low-pressure injection, electronic ignition control and biofuels. The paper describes simulation research on the innovative power and control systems for a high-power radial aircraft engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovative idea of this solution. Consequently, the required level of safety and better functionality as compared to today’s plug system can be guaranteed. In this framework, this research work focuses on describing a methodology for optimizing the electronically controlled ignition system. This approach can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and the engine's capability of efficient combustion of ecological fuels. New, redundant elements of the control system can improve the safety of the aircraft. The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities measured by the measurement systems) to the determination of the optimal ignition angle (the angle of maximum torque at a given operating point). The described results covered: a) research in steady states; b) speeds ranging from 1500 to 2200 rpm (every 100 rpm); c) loads ranging from propeller power to maximum power; d) altitudes ranging, according to the International Standard Atmosphere, from 0 to 8000 m (every 1000 m); e) fuel: automotive gasoline ES95. Three models of different types of ignition coil (different discharge energies) were studied. The analysis aimed at the optimization of the design of the innovative ignition system for an aircraft engine. The optimization involved: a) the optimization of the measurement systems; b) the optimization of the actuator systems. The studies enabled research on the sensitivity of the signals used for the control of the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve its optimal performance. The results confirmed limited benefits in terms of fuel consumption. Thus, including spark management in the optimization is mandatory to significantly decrease the fuel consumption. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization
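The 'angle of maximum torque at a given operating point' can be illustrated with a minimal grid-search sketch. The torque-versus-advance characteristic used here is a made-up placeholder, not the engine model from the study.

```python
import numpy as np

def mbt_spark_advance(torque_curve, advance_min=0.0, advance_max=40.0, step=0.5):
    """Grid search for the spark advance giving maximum torque (MBT)
    at one fixed operating point (speed, load, altitude)."""
    angles = np.arange(advance_min, advance_max + step, step)
    torques = np.array([torque_curve(a) for a in angles])
    best = np.argmax(torques)
    return angles[best], torques[best]

# Placeholder torque-vs-advance characteristic (parabolic around an assumed 24 deg optimum)
toy_curve = lambda adv: 1200.0 - 0.9 * (adv - 24.0) ** 2
print(mbt_spark_advance(toy_curve))
```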
Procedia PDF Downloads 386
71 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience
Authors: Amanda Kavner, Richard Lamb
Abstract:
Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classrooms. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional near-infrared spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of scholars of STEM education from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft were exported as an Excel file, with 80 items each for the 2D wedge-and-dash models (dash) and the 3D stick-and-ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a gradient boosted trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN predictions were accurate. The ANN determined the biggest predictors of a successful mental rotation to be the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements with an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, therefore improving science literacy.
Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience
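The gradient boosted trees model described above was built in RapidMiner Studio; the sketch below reproduces the general workflow with scikit-learn on synthetic stand-in data. The feature layout, labels, and all hyperparameters other than the 140 trees and depth of 7 are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the merged fNIR + task-complexity spreadsheet:
# columns might be [problem_number, response_time_s, optode_1 ... optode_16]
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 18))
y = (X[:, 1] + X[:, 17] + rng.normal(scale=0.5, size=400) > 0).astype(int)  # 1 = correct rotation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
model.fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))
print("top feature indices:", np.argsort(model.feature_importances_)[::-1][:3])
```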
Procedia PDF Downloads 119
70 Increased Stability of Rubber-Modified Asphalt Mixtures to Swelling, Expansion and Rebound Effect during Post-Compaction
Authors: Fernando Martinez Soto, Gaetano Di Mino
Abstract:
The application of rubber in bituminous mixtures requires attention and care during mixing and compaction. Rubber modifies the properties because it reacts within the internal structure of the bitumen at high temperatures, changing the performance of the mixture (the interaction process of solvents with the binder-rubber aggregate). The main change is the increase in the viscosity and elasticity of the binder due to the larger sizes of the rubber particles in the dry process, but this positive effect is counteracted by short mixing times, compared to the wet technology, and by the transport processes, curing time and post-compaction of the mixtures. Therefore, negative effects such as swelling of the rubber particles, the rebound effect of the specimens and thermal changes caused by the different expansion of the structure inside the mixtures can change the mechanical properties of the rubberized blends. Based on the dry technology, different asphalt-rubber binders using devulcanized or natural rubber (truck and bus tread rubber) have served to demonstrate these effects, and how to solve them, in two dense-gap-graded rubber-modified asphalt concrete mixes (RUMAC) to enhance the stability, workability and durability of the compacted samples by the Superpave gyratory compactor method. This paper specifies the procedures developed in the Department of Civil Engineering of the University of Palermo from September 2016 to March 2017 for characterizing the post-compaction and mix stability of one conventional mixture (hot mix asphalt without rubber) and two gap-graded rubberized asphalt mixes, according to the granulometry for rail sub-ballast layers with a nominal aggregate size of Ø22.4 mm according to the European standard. Thus, the main purpose of this laboratory research is the application of ambient ground rubber from scrap tires, processed at conventional temperature (20ºC), inside hot bituminous mixtures (160-220ºC) as a substitute for 1.5%, 2% and 3% by weight of the total aggregates (3.2%, 4.2% and 6.2%, respectively, by volumetric part of the limestone aggregates of bulk density equal to 2.81 g/cm³), considered not as a part of the asphalt binder. The reference bituminous mixture was designed with 4% of binder and ±3% of air voids, manufactured with a conventional bitumen B50/70 at 160ºC-145ºC mixing-compaction temperatures to guarantee the workability of the mixes. The proportions of rubber proposed are 60-40% for the mixtures with 1.5 to 2% of rubber and 20-80% for the mixture with 3% of rubber (for example, 60% of Ø0.4-2 mm and 40% of Ø2-4 mm). The temperature of the asphalt cement is between 160-180ºC for mixing and 145-160ºC for compaction, according to the optimal viscosity values obtained using a Brookfield viscometer and ring-and-ball and penetration tests. These crumb rubber particles act as a rubber aggregate in the mixture, with sizes varying between 0.4 mm and 2 mm in the first fraction and 2-4 mm in the second. Ambient ground rubber with a density of 1.154 g/cm³ is used. The rubber is free of loose fabric, wire, and other contaminants. Optimal results were found in real beams and cylindrical specimens for each HMA mixture, reducing the swelling effect. Different factors that affect the interaction process, such as temperature, rubber particle sizes, number of compaction cycles and compaction pressures, are explained.
Keywords: crumb-rubber, gyratory compactor, rebounding effect, superpave mix-design, swelling, sub-ballast railway
Procedia PDF Downloads 243
69 Designing Next Generation Platforms for Recombinant Protein Production by Genome Engineering of Escherichia coli
Authors: Priyanka Jain, Ashish K. Sharma, Esha Shukla, K. J. Mukherjee
Abstract:
We propose a paradigm shift in our approach to designing improved platforms for recombinant protein production by addressing system-level issues rather than the individual steps associated with recombinant protein synthesis like transcription, translation, etc. We demonstrate that by controlling and modulating the cellular stress response (CSR), which is responsible for feedback control of protein synthesis, we can generate hyper-producing strains. We carried out transcriptomic profiling of post-induction cultures expressing different types of protein to analyze the nature of this cellular stress response. We found significant down-regulation of substrate utilization, translation, and energy metabolism genes due to the generation of the CSR inside the host cell. However, transcription profiling has also shown that many genes are up-regulated post induction, and their role in modulating the CSR is unclear. We hypothesized that these up-regulated genes trigger signaling pathways, generating the CSR and concomitantly reducing the recombinant protein yield. To test this hypothesis, we knocked out the up-regulated genes which did not have any downstream regulatees and analyzed their impact on cellular health and recombinant protein expression. Two model proteins, i.e., GFP and L-asparaginase, were chosen for this analysis. We observed a significant improvement in expression levels, with some knock-outs showing more than 7-fold higher expression compared to the control. The 10 best single knock-outs were chosen to make 45 combinations of all possible double knock-outs. A further increase in expression was observed in some of these double knock-outs, with GFP levels being highest in the double knock-out ΔyhbC + ΔelaA. However, for L-asparaginase, which is a secretory protein, the best results were obtained using a combination of ΔelaA + ΔcysW knock-outs. We then tested all the knock-outs for their ability to enhance the expression of a 'difficult-to-express' protein. The Rubella virus E1 protein was chosen and tagged with sfGFP at the C-terminus using a linker peptide for easy online monitoring of expression of this fusion protein. Interestingly, the highest increase in Rubella-sfGFP levels was obtained in the same double knock-out ΔelaA + ΔcysW (5.6-fold increase in expression yield compared to the control) which gave the highest expression for L-asparaginase. However, for sfGFP alone, the ΔyhbC + ΔmarR knock-out gave the highest level of expression. These results indicate that there is a fair degree of commonality in the nature of the CSR generated by the induction of different proteins. Transcriptomic profiling of the double knock-out showed that many genes associated with the translational machinery and energy biosynthesis did not get down-regulated post induction, unlike in the control where these genes were significantly down-regulated. This confirmed our hypothesis that these genes play an important role in the generation of the CSR and allowed us to design a strategy for making better expression hosts by simply knocking out key genes. This strategy is radically superior to the previous approach of individually up-regulating critical genes, since it blocks the mounting of the CSR, thus preventing the down-regulation of a very large number of genes responsible for sustaining the flux through the recombinant protein production pathway.
Keywords: cellular stress response, GFP, knock-outs, up-regulated genes
Procedia PDF Downloads 227
68 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan
Authors: Hsien-Te Lin
Abstract:
The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), which is an LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating the carbon footprint throughout all stages in the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards of PAS 2050 or ISO 14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period based on purchasing lists and construction logs. This verification method is very cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials are meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is not designed based on raw materials but rather on building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database has been compiled from existing carbon footprint databases for raw materials and architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in the building carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification, and the assessment has become mandatory in some cities.
Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy
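The dynamic EUI method is described above as simple multiplications and additions; the sketch below shows that kind of calculation. The end-use categories, baseline EUI values, and efficiency factors are assumed for illustration and are not the LCBA's published figures.

```python
def dynamic_eui_estimate(floor_area_m2, baseline_eui, efficiency_factor):
    """Annual energy use from EUI baselines (kWh/m2/yr) scaled by the designed
    efficiency levels, summed over end uses and multiplied by the floor area."""
    total_eui = sum(baseline_eui[use] * efficiency_factor.get(use, 1.0)
                    for use in baseline_eui)
    return total_eui * floor_area_m2   # kWh per year

# Assumed baseline EUIs and design adjustment factors (illustrative only)
baseline = {"air_conditioning": 45.0, "lighting": 20.0, "equipment": 25.0}
factors = {"air_conditioning": 0.85, "lighting": 0.70, "equipment": 1.0}
print(dynamic_eui_estimate(12000, baseline, factors))  # kWh/yr for a 12,000 m2 building
```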
Procedia PDF Downloads 139
67 University Climate and Psychological Adjustment: African American Women’s Experiences at Predominantly White Institutions in the United States
Authors: Faheemah N. Mustafaa, Tamarie Macon, Tabbye Chavous
Abstract:
A major concern of university leaders worldwide is how to create environments where students from diverse racial/ethnic, national, and cultural backgrounds can thrive. Over the past decade or so in the United States, African American women have done exceedingly well in terms of college enrollment, academic performance, and completion. However, the relative academic successes of African American women in higher education have in some ways overshadowed the social challenges many Black women continue to encounter on college campuses in the United States. Within predominantly White institutions (PWIs) in particular, there is consistent evidence that many Black students experience racially hostile climates. However, research studies on racial climates within PWIs have mostly focused on cross-sectional comparisons of minority and majority group experiences, and few studies have examined campus racial climate in relation to short- and longer-term well-being. One longitudinal study reported that African American women’s psychological well-being was positively related to their comfort in cross-racial interactions (a concept closely related to campus climate). Thus, our primary research question was: Do African American women’s perceptions of campus climate (tension and positive association) during their freshman year predict their reports of psychological distress and well-being (self-acceptance) during their sophomore year? Participants were part of a longitudinal survey examining African American college students’ academic identity development, particularly in Science, Technology, Engineering, and Mathematics (STEM) fields. The final subsample included 134 self-identified African American/Black women enrolled in PWIs. Accounting for background characteristics (mother’s education, family income, interracial contact, and prior levels of outcomes), we employed hierarchical regression to examine relationships between campus racial climate during the freshman year and psychological adjustment one year later. Both regression models significantly predicted African American women’s psychological outcomes (for distress, F(7,91) = 4.34, p < .001; and for self-acceptance, F(7,90) = 4.92, p < .001). Although none of the controls were significant predictors, perceptions of racial tension on campus were associated with both distress and self-acceptance. More perceptions of tension were related to African American women’s greater psychological distress the following year (B = 0.22, p = .01). Additionally, racial tension predicted later self-acceptance in the expected direction: higher first-year reports of racial tension were related to less positive attitudes toward the self during the sophomore year (B = -0.16, p = .04). However, perceptions that it was normative for Black and White students to socialize on campus (or positive association scores) were unrelated to psychological distress or self-acceptance. Findings highlight the relevance of examining multiple facets of campus racial climate in relation to psychological adjustment, with possible emphasis on the import of racial tension for African American women’s psychological adjustment. Results suggest that negative dimensions of campus racial climate may have lingering effects on psychological well-being, over and above more positive aspects of climate. Thus, programs targeted toward improving student relations on campus should consider addressing cross-racial tensions.
Keywords: higher education, psychological adjustment, university climate, university students
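The hierarchical (blockwise) regression described above can be sketched as two nested models compared on synthetic stand-in data; the variable names, coefficients, and data below are illustrative and are not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; variable names are illustrative, not the study's measures
rng = np.random.default_rng(1)
n = 134
df = pd.DataFrame({
    "mother_edu": rng.integers(1, 6, n),
    "family_income": rng.integers(1, 8, n),
    "interracial_contact": rng.normal(0, 1, n),
    "prior_distress": rng.normal(0, 1, n),
    "racial_tension_y1": rng.normal(0, 1, n),
})
df["distress_y2"] = 0.3 * df["prior_distress"] + 0.22 * df["racial_tension_y1"] + rng.normal(0, 1, n)

# Block 1: background controls only; Block 2: add the climate predictor
step1 = smf.ols("distress_y2 ~ mother_edu + family_income + interracial_contact + prior_distress", df).fit()
step2 = smf.ols("distress_y2 ~ mother_edu + family_income + interracial_contact + prior_distress + racial_tension_y1", df).fit()
print(step1.rsquared, step2.rsquared)        # R-squared change across blocks
print(step2.params["racial_tension_y1"])      # B for racial tension in the full model
```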
Procedia PDF Downloads 38566 Fibroblast Compatibility of Core-Shell Coaxially Electrospun Hybrid Poly(ε-Caprolactone)/Chitosan Scaffolds
Authors: Hilal Turkoglu Sasmazel, Ozan Ozkan, Seda Surucu
Abstract:
Tissue engineering is the field of treating defects caused by injuries, trauma or acute/chronic diseases by using artificial scaffolds that mimic the extracellular matrix (ECM), the natural biological support for the tissues and cells within the body. The main aspects of a successful artificial scaffold are (i) a large surface area in order to provide multiple anchorage points for cells to attach, (ii) suitable porosity in order to achieve three-dimensional growth of the cells within the scaffold as well as proper transport of nutrition, biosignals and waste, and (iii) physical, chemical and biological compatibility of the material in order to obtain viability throughout the healing process. In hybrid scaffolds, where two or more different materials are combined into complex structures by advanced fabrication techniques, it is possible to combine the advantages of individual materials into one single structure while eliminating the disadvantages of each. Combined with the complex structure provided by advanced fabrication techniques, this makes it possible to obtain the desired aspects of a successful artificial tissue scaffold. In this study, the fibroblast compatibility of poly(ε-caprolactone) (PCL)/chitosan core-shell electrospun hybrid scaffolds with proper mechanical, chemical and physical properties, successfully developed in our previous study, was investigated. Standard 7-day cell culture was carried out with the L929 fibroblast cell line. The viability of the cells cultured with the scaffolds was monitored with the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) viability assay every 48 h, starting 24 h after the initial seeding. In this assay, blank commercial tissue culture polystyrene (TCPS) Petri dishes, single electrospun PCL and single electrospun chitosan mats were used as controls in order to compare and contrast the performance of the hybrid scaffolds. The adhesion, proliferation, spread and growth of the cells on/within the scaffolds were observed visually on the 3rd and the 7th days of the culture period with confocal laser scanning microscopy (CSLM) and scanning electron microscopy (SEM). The viability assay showed that the hybrid scaffolds caused no toxicity for fibroblast cells and provided a steady increase in cell viability, effectively doubling the cell density every 48 h over the course of 7 days, as compared to TCPS, single electrospun PCL or chitosan mats. The cell viability on the hybrid scaffold was ~2 fold better compared to TCPS because of its 3D ECM-like structure compared to the 2D flat surface of commercially cell compatible TCPS, and the performance was ~2 fold and ~10 fold better compared to single PCL and single chitosan mats, respectively, even though both were fabricated similarly with electrospinning as non-woven fibrous structures, because single PCL and chitosan mats were either too hydrophobic or too hydrophilic to maintain cell attachment points. The viability results were verified with visual images obtained with CSLM and SEM, in which cells were found to achieve the characteristic spindle-like fibroblast shape and to spread on the surface as well as within the pores successfully at high densities.Keywords: chitosan, core-shell, fibroblast, electrospinning, PCL
Procedia PDF Downloads 17665 Medical Examiner Collection of Comprehensive, Objective Medical Evidence for Conducted Electrical Weapons and Their Temporal Relationship to Sudden Arrest
Authors: Michael Brave, Mark Kroll, Steven Karch, Charles Wetli, Michael Graham, Sebastian Kunz, Dorin Panescu
Abstract:
Background: Conducted electrical weapons (CEW) are now used in 107 countries and are a common less-lethal force option for law enforcement in the United Kingdom (UK), United States of America (USA), Canada, Australia, New Zealand, and others. Use of these devices is rarely temporally associated with the occurrence of sudden arrest-related deaths (ARD). Because such deaths are uncommon, few Medical Examiners (MEs) ever encounter one, and even fewer offices have established comprehensive investigative protocols. Without sufficient scientific data, the role, if any, played by a CEW in a given case is largely supplanted by conjecture, often defaulting to a CEW-induced fatal cardiac arrhythmia. In addition to the difficulty in investigating individual deaths, the lack of information also detrimentally affects being able to define and evaluate the ARD cohort generally. More comprehensive, better information leads to better interpretation in individual cases and also to better research. The purpose of this presentation is to provide MEs with a comprehensive evidence-based checklist to assist in the assessment of CEW-ARD cases. Methods: PUBMED and Sociology/Criminology databases were queried to find all medical, scientific, electrical, modeling, engineering, and sociology/criminology peer-reviewed literature for mentions of CEW or synonymous terms. Each paper was then individually reviewed to identify those that discussed possible bioelectrical mechanisms relating CEW to ARD. A Naranjo-type pharmacovigilance algorithm was also employed, when relevant, to identify and quantify possible direct CEW electrical myocardial stimulation. Additionally, CEW operational manuals and training materials were reviewed to allow incorporation of CEW-specific technical parameters. Results: Total relevant PUBMED citations of CEWs were less than 250, and reports of death extremely rare. Much relevant information was available from Sociology/Criminology databases. Once the relevant published papers were identified and reviewed, we compiled an annotated checklist of data that we consider critical to a thorough CEW-involved ARD investigation. Conclusion: We have developed an evidence-based checklist that can be used by MEs and their staffs to assist them in identifying, collecting, documenting, maintaining, and objectively analyzing the role, if any, played by a CEW in any specific case of sudden death temporally associated with the use of a CEW. Even in cases where the collected information is deemed by the ME as insufficient for formulating an opinion or diagnosis to a reasonable degree of medical certainty, information collected as per the checklist will often be adequate for other stakeholders to use as a basis for informed decisions. Having reviewed the appropriate materials, in a significant number of cases careful examination of the heart and brain is likely adequate. Channelopathy testing should be considered in some cases; however, it may be cost-prohibitive (approx. $3000). Law enforcement agencies may want to consider establishing a reserve fund to help manage such rare cases. The expense may forestall the enormous costs associated with incident-precipitated litigation.Keywords: ARD, CEW, police, TASER
Procedia PDF Downloads 34664 Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals
Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova
Abstract:
Obninsk is situated in the Kaluga region 100 km southwest of Moscow on the left bank of the Protva River. Several enterprises utilizing nuclear energy are operating in the town. In this region, where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva River, and groundwater pollution. Municipal intakes involve 34 wells arranged in a north-south sequence, 15 km apart, along the foot of the left slope of the Protva River valley. Northern and southern water intakes are upstream and downstream of the town, respectively. They belong to river valley intakes with mixed feeding, i.e., precipitation infiltration is responsible for a smaller part of the groundwater, and a greater amount is formed by inflow from the Protva. Water intakes are maintained by the Protva River runoff, the volume of which depends on precipitation and watershed area. Groundwater contamination with tritium was first detected in a sanitary-protective zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers while implementing the “Program of radiological monitoring in the territory of nuclear industry enterprises”. A comprehensive survey of the SRC-IPPE’s industrial site and adjacent territories has revealed that research nuclear reactors and accelerators where tritium targets are applied, as well as radioactive waste storage facilities, could be considered potential sources of technogenic tritium. All the above sources are located within the sanitary controlled area of the intakes. Tritium activity in the water of springs and wells near the SRC-IPPE is about 17.4 – 3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). The risk has been assessed to estimate the possible effect of the observed tritium concentrations on human health. Data on tritium concentrations in pipe-line drinking water were used for the calculations. The activity of 3H amounted to 10.6 Bq/l and corresponded to a risk from such water consumption of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for the population living near an NPP – 1.6·10⁻⁸ year⁻¹ – and at the same time corresponds to the level of tolerable risk (10⁻⁶) and falls within “risk optimization”, i.e., the sphere for planning economically sound measures on exposure risk reduction. To estimate the chemical risk, physical and chemical analysis was made of waters from all springs and wells near the SRC-IPPE. Chemical risk from groundwater contamination was estimated according to the US EPA guidance. The risk of carcinogenic diseases from drinking water consumption amounts to 5·10⁻⁵. According to the accepted classification, the health risk in the case of spring water consumption is inadmissible. The compared assessments of risk associated with tritium exposure, on the one hand, and the dangerous chemical (e.g. heavy metals) contamination of Obninsk drinking water, on the other hand, have confirmed that it is these chemical pollutants that are responsible for the health risk.Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk
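For readers unfamiliar with the generic ingestion-risk chain (activity, intake, dose, risk), the following minimal sketch uses illustrative ICRP-style coefficients for tritiated drinking water; these are assumptions, not necessarily the coefficients used in the study, so the result is not expected to reproduce the figure quoted above.

```python
# Hedged sketch of a generic ingestion risk chain for tritiated drinking water.
# Dose and risk coefficients are illustrative ICRP-style values, not necessarily
# those used by the authors; the result need not match the abstract's figure.
activity = 10.6          # Bq/l, tritium in pipe-line drinking water (from the abstract)
consumption = 730.0      # l/year, assumed annual drinking-water intake (2 l/day)
dose_coeff = 1.8e-11     # Sv/Bq, ingestion dose coefficient for tritiated water (HTO)
risk_coeff = 5.5e-2      # 1/Sv, nominal lifetime risk coefficient

annual_intake = activity * consumption            # Bq/year
annual_dose = annual_intake * dose_coeff          # Sv/year
annual_risk = annual_dose * risk_coeff            # 1/year

print(f"Intake: {annual_intake:.0f} Bq/a, dose: {annual_dose:.2e} Sv/a, risk: {annual_risk:.2e} 1/a")
```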
Procedia PDF Downloads 24063 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability to each intensity level in a time interval (e.g., by means of return periods) to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to the wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions rather than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires some logical combination rules that define the building’s damage state given the damage state of each component and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
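A minimal sketch of the component-based approach outlined above is shown below: lognormal fragility curves for the four components, a simple additive consequence model, and integration with an illustrative wind hazard curve. All numerical parameters are assumptions for illustration, not values from ERMESS.

```python
# Hedged sketch: component fragilities + consequence model + hazard integration.
# All parameters are illustrative assumptions, not ERMESS values.
import numpy as np
from scipy.stats import lognorm

v = np.linspace(10.0, 90.0, 161)  # gust wind speed grid [m/s]

# P(damage | v) for each component: lognormal(median m [m/s], log-std b).
components = {
    "roof_covering":  (35.0, 0.25),
    "roof_structure": (55.0, 0.20),
    "envelope_walls": (60.0, 0.20),
    "openings":       (45.0, 0.30),
}
loss_ratio = {"roof_covering": 0.15, "roof_structure": 0.40,
              "envelope_walls": 0.30, "openings": 0.15}

# Simplifying assumption: component damages are independent, and the building
# loss is the sum of component loss ratios weighted by their damage probabilities.
vuln = sum(loss_ratio[c] * lognorm(s=b, scale=m).cdf(v)
           for c, (m, b) in components.items())

# Illustrative hazard: annual exceedance rate of gust speed v.
rate = 0.5 * np.exp(-(v - 20.0) / 6.0)
annual_rate_density = -np.gradient(rate, v)          # occurrence density
expected_annual_loss = np.trapz(vuln * annual_rate_density, v)
print(f"Expected annual loss ratio: {expected_annual_loss:.4f}")
```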
Procedia PDF Downloads 18162 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals
Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen
Abstract:
Background: Computer Science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres have a growing number and complexity of technological devices, and the population consumes and demands services derived from technology. Also, nursing education plans have included competencies related to new technologies, and courses about them are even offered to health professionals. However, nurses still limit their performance to the use and evaluation of products previously built. Objective: To develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended learning teaching with a group of graduate nurses through official training within a Master's Degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers the period from 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. The assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard Specification document, where the subject chosen by the student had to be a problem in the area of care. Results: The sample is made up of 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one had a degree in Computer Engineering. Two students had obtained a nursing specialty degree through residency and two through equivalent recognition by an exceptional route. Except for the engineer, no subject had previously received training in this regard. All students enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area and attended at least one tutorial. The maximum number of tutorials was three, with one hour in total. Among the material available for consultation was an example of a document drawn up based on the IEEE Standard with an issue not related to care. The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the characteristics of the document and the degree of comprehension in the elaboration of the problem and solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality and correctness of the proposed solution, and the validity of the application for clinical practice in care. The average grade was 8.1 out of 10 points, with a range between 6 and 10. The selected topics rarely coincided among the students. Examples of care areas selected are care plans, family and community health, delivery care, administration and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence and the duty to participate in the process of construction of technological tools that are going to impact the care of people, families and communities.Keywords: care, learning, nursing, technology
Procedia PDF Downloads 13661 Analysis of Elastic-Plastic Deformation of Reinforced Concrete Shear-Wall Structures under Earthquake Excitations
Authors: Oleg Kabantsev, Karomatullo Umarov
Abstract:
The engineering analysis of earthquake consequences demonstrates a significantly different level of damage to load-bearing systems of different types. Buildings with reinforced concrete columns and separate shear walls receive the highest level of damage. Traditional methods for predicting damage under earthquake excitations do not answer the question of why reinforced concrete frames with shear-wall bearing systems are more vulnerable. Thus, the study of the formation and accumulation of damage in reinforced concrete frame structures with shear walls requires new methods of assessing the stress-strain state, as well as new approaches to calculating the distribution of forces and stresses in the load-bearing system that take into account the various mechanisms of elastic-plastic deformation of reinforced concrete columns and walls. The results of research into the processes of non-linear deformation of structures up to destruction (collapse) will make it possible to substantiate the characteristics of the limit states of the various structures forming an earthquake-resistant load-bearing system. The research into elastic-plastic deformation processes of reinforced concrete frame structures with shear walls is carried out on the basis of experimentally established parameters of the limit deformations of concrete and reinforcement under dynamic excitations. Limit values of deformations are defined for the conditions under which local damage of the maximum permissible level forms in structures. The research is performed by numerical methods using ETABS software. The results indicate that under earthquake excitations, plastic deformations of various levels are formed in different groups of elements of the frame with the shear-wall load-bearing system. During the main period of seismic effects, the shear-wall elements of the load-bearing system exhibit insignificant volumes of plastic deformation, significantly lower than the permissible level. At the same time, plastic deformations are formed in the columns and do not exceed the permissible value. At the final stage of seismic excitation, the level of plastic deformation in the shear walls reaches values corresponding to the plasticity coefficient of concrete, which is less than the maximum permissible value. Such a volume of plastic deformation leads to an increase in the general deformations of the bearing system. With the specified parameters of shear-wall deformation, plastic deformations exceeding the limiting values develop in the concrete columns, which leads to the collapse of such columns. Based on the results presented in this study, it can be concluded that a seismic-force-reduction factor applied uniformly to the whole load-bearing system does not correspond to the real conditions of damage formation and accumulation in the elements of the load-bearing system. Using a single seismic-force-reduction factor leads to errors in predicting the seismic resistance of reinforced concrete load-bearing systems. In order to provide the required level of seismic resistance for buildings with reinforced concrete columns and separate shear walls, it is necessary to use seismic-force-reduction factor values differentiated by structural group.Keywords: reinforced concrete structures, earthquake excitation, plasticity coefficients, seismic-force-reduction factor, nonlinear dynamic analysis
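Purely as an illustration of group-differentiated reduction factors, the following sketch applies the classical Newmark-Hall R-mu relations separately to shear walls and columns; the ductility capacities and periods are hypothetical, and this is not the authors' procedure.

```python
# Hedged sketch: group-wise seismic-force-reduction factors via the classical
# Newmark-Hall R-mu relations. Ductility capacities and periods are hypothetical
# placeholders; only the idea of differentiated (not system-wide) factors is shown.
import math

def reduction_factor(mu: float, period: float, t_c: float = 0.5) -> float:
    """Newmark-Hall type R-mu relation: equal-energy rule for short periods,
    equal-displacement rule for moderate/long periods."""
    if period < t_c:
        return math.sqrt(2.0 * mu - 1.0)
    return mu

groups = {
    # group: (assumed available ductility, representative period [s])
    "shear_walls": (2.0, 0.4),
    "columns":     (3.5, 0.8),
}

for name, (mu, t) in groups.items():
    print(f"{name}: R = {reduction_factor(mu, t):.2f}")
```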
Procedia PDF Downloads 20560 Treatment of Neuronal Defects by Bone Marrow Stem Cells Differentiation to Neuronal Cells Cultured on Gelatin-PLGA Scaffolds Coated with Nano-Particles
Authors: Alireza Shams, Ali Zamanian, Atefehe Shamosi, Farnaz Ghorbani
Abstract:
Introduction: Although the application of a new strategy remains a remarkable challenge for the treatment of disabilities due to neuronal defects, progress in nanomedicine and tissue engineering suggests new medical methods. One of the promising strategies for the reconstruction and regeneration of nervous tissue is the replacement of lost or damaged cells by specific scaffolds after compressive, ischemic and traumatic injuries of the central nervous system. Furthermore, the ultrastructure, composition and arrangement of tissue scaffolds affect cell grafts. We followed the implantation and differentiation of mesenchymal stem cells to neural cells on gelatin/poly(lactic-co-glycolic acid) (PLGA) scaffolds coated with iron nanoparticles. The aim of the proposed work was to evaluate the capability of stem cells to differentiate into motor neuron-like cells under topographical cues and morphogenic factors. Methods and Materials: Bone marrow mesenchymal stem cells (BMMSCs) were obtained by primary cell culture of adult rat bone marrow harvested from the femur by the flushing method. BMMSCs were incubated with DMEM/F12 (Gibco), 15% FBS and 100 U/ml pen/strep as media. Then, BMMSCs were seeded on Gel/PLGA scaffolds and tissue culture polystyrene (TCP) embedded and incorporated with Fe nanoparticles (FeNPs) (Fe3O4 oxide, Mw = 270.30 g/mol). For neuronal differentiation, 2×10⁵ BMMSCs were seeded on Gel/PLGA/FeNPs scaffolds and cultured for 7 days, and 0.5 µmol retinoic acid, 100 µmol ascorbic acid, 10 ng/ml basic fibroblast growth factor (Sigma, USA), 250 μM isobutylmethylxanthine, 100 μM 2-mercaptoethanol, and 0.2% B27 (Invitrogen, USA) were added to the media. Proliferation of BMMSCs was assessed by using the MTT assay for cell survival. The morphology of BMMSCs and scaffolds was investigated by scanning electron microscopy analysis. Expression of neuron-specific markers was studied by immunohistochemistry. Data were analyzed by analysis of variance, and statistical significance was determined by Tukey’s test. Results: Our results revealed that the differentiation and survival of BMMSCs into motor neuron-like cells on Gel/PLGA/FeNPs, as biocompatible and biodegradable scaffolds, were better than on Gel/PLGA in the absence of FeNPs and on TCP scaffolds. FeNPs increased the physical strength of the scaffolds but decreased their absorption capacity. Well-defined oriented pores in the scaffolds due to FeNPs may activate differentiation and synchronize cells as a mechanoreceptor. The inductive effects of magnetic FeNPs, through one-way flow along the channels in the scaffolds, help to guide the cells and can facilitate the direction of their growth processes. Discussion: The progression of the biological properties of BMMSCs and the effects of FeNPs spreading under a magnetic field were evaluated in this investigation. The in vitro study showed that the Gel/PLGA/FeNPs scaffold provided a suitable structure for the differentiation of motor neuron-like cells. This could be a promising candidate for enhancing repair and regeneration in neural defects. Dynamic and static magnetic fields for the induction and construction of cells could provide better results in further experimental studies.Keywords: differentiation, mesenchymal stem cells, nano particles, neuronal defects, Scaffolds
Procedia PDF Downloads 16659 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, aerospace, etc., along with the emerging demand of Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare the predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related tasks. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly. The latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) what specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights. We show the tools we use, including data ingestion, streaming data processing, machine learning model training, and the tool that coordinates/schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
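A minimal sketch of the batch failure-prediction step described above is given below: remaining-useful-life (RUL) labeling of a run-to-failure dataset and a classifier for "failure within the next 30 cycles", with an engine-wise split to handle inter-dependent records. The column names and file name are assumptions for a C-MAPSS-style dataset, not the study's actual pipeline.

```python
# Hedged sketch of RUL labeling and failure classification on a run-to-failure
# dataset. Column names ('unit', 'cycle', sensor columns s1..s21) and the file
# name are assumed; an engine-wise split avoids leakage between inter-dependent
# records of the same unit.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import roc_auc_score

df = pd.read_csv("train_FD001.csv")  # hypothetical: columns unit, cycle, s1..s21

# Remaining useful life = last observed cycle of the unit minus current cycle.
df["rul"] = df.groupby("unit")["cycle"].transform("max") - df["cycle"]
df["label"] = (df["rul"] <= 30).astype(int)   # fails within 30 cycles

features = [c for c in df.columns if c.startswith("s")]
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["unit"]))

model = GradientBoostingClassifier().fit(df.loc[train_idx, features],
                                         df.loc[train_idx, "label"])
pred = model.predict_proba(df.loc[test_idx, features])[:, 1]
print("AUC:", roc_auc_score(df.loc[test_idx, "label"], pred))
```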
Procedia PDF Downloads 38658 Regional Hydrological Extremes Frequency Analysis Based on Statistical and Hydrological Models
Authors: Hadush Kidane Meresa
Abstract:
The hydrological extremes frequency analysis is the foundation for hydraulic engineering design, flood protection, drought management and water resources management and planning to utilize the available water resource to meet the desired objectives of different organizations and sectors in a country. The spatial variation of the statistical characteristics of extreme flood and drought events is a key element of regional flood and drought analysis and mitigation management. For regions with different hydro-climates, where the data set is short, scarce, of poor quality or insufficient, regionalization methods are applied to transfer at-site data to a region. This study aims at regional high- and low-flow frequency analysis for Polish river basins. The frequent occurrence of hydrological extremes in the region and rapid water resources development in these basins have caused serious concerns over the flood and drought magnitudes and frequencies of the rivers in Poland. The magnitude and frequency of high and low flows in the basin are needed for flood and drought planning, management and protection at present and in the future. Hydrologically homogeneous high- and low-flow regions are formed by cluster analysis of site characteristics, using hierarchical and C-means clustering and the PCA method. Statistical tests for regional homogeneity are utilized, namely the discordancy and heterogeneity measure tests. In compliance with the results of the tests, the river basins have been divided into ten homogeneous regions. In this study, frequency analysis of high and low flows is conducted using annual maximum (AM) series for high flows and 7-day minimum series for low flows, with six statistical distributions. The use of the L-moment and LL-moment methods showed a homogeneous region over the entire province, with Generalized Logistic (GLOG), Generalized Extreme Value (GEV), Pearson type III (P-III), Generalized Pareto (GPAR), Weibull (WEI) and Power (PR) distributions as the regional drought and flood frequency distributions. The 95th percentile and flow duration curves of 1, 7, 10 and 30 days have been plotted for 10 stations. However, the cluster analysis produced two regions in the west and east of the province, where the L-moment and LL-moment methods demonstrated the homogeneity of the regions, with GLOG and Pearson Type III (P-III) distributions as the regional frequency distributions for each region, respectively. The 10 best catchments from the whole region were selected for the analysis of the spatial variation and regional frequency distribution of flood and drought characteristics, and besides the main variable (streamflow: high and low) we used variables more related to physiographic and drainage characteristics to identify and delineate homogeneous pools and to derive the best regression models for ungauged sites. Those are mean annual rainfall, seasonal flow, average slope, NDVI, aspect, flow length, flow direction, maximum soil moisture, elevation, and drainage order. The regional high-flow or low-flow relationship between one streamflow characteristic (AM or 7-day mean annual low flows) and some basin characteristics is developed using Generalized Linear Mixed Model (GLMM) and Generalized Least Squares (GLS) regression models, providing a simple and effective method for estimating floods and droughts of desired return periods for ungauged catchments.Keywords: flood, drought, frequency, magnitude, regionalization, stochastic, ungauged, Poland
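As a simplified stand-in for the regional L-moment procedure described above, the following sketch fits a GEV distribution to a synthetic annual-maximum series via sample L-moments (Hosking-style approximations) and evaluates quantiles for selected return periods; the data and the at-site scope are illustrative assumptions.

```python
# Hedged sketch of at-site GEV frequency analysis by L-moments, as a simplified
# stand-in for the regional procedure described above. The annual-maximum
# series below is synthetic.
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2   # mean, L-scale, L-skewness

def gev_from_lmoments(l1, l2, t3):
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2                 # shape (Hosking approximation)
    a = l2 * k / ((1 - 2 ** (-k)) * gamma(1 + k))    # scale
    xi = l1 - a * (1 - gamma(1 + k)) / k             # location
    return xi, a, k

def gev_quantile(xi, a, k, return_period):
    f = 1.0 - 1.0 / return_period                    # non-exceedance probability
    return xi + a / k * (1 - (-log(f)) ** k)

rng = np.random.default_rng(1)
am_flows = 120 + 40 * rng.gumbel(size=40)            # synthetic annual maxima [m3/s]
xi, a, k = gev_from_lmoments(*sample_lmoments(am_flows))
for T in (10, 50, 100):
    print(f"T = {T:>3} yr: Q = {gev_quantile(xi, a, k, T):.1f} m3/s")
```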
Procedia PDF Downloads 60157 Challenges in Employment and Adjustment of Academic Expatriates Based in Higher Education Institutions in the KwaZulu-Natal Province, South Africa
Authors: Thulile Ndou
Abstract:
The purpose of this study was to examine the challenges encountered in the mediation of attracting and recruiting academic expatriates, who in turn encounter their own obstacles in adjusting to and settling in their host country, host academic institutions and host communities. The non-existence of literature on the attraction, placement and management of academic expatriates in the South African context has been acknowledged. Moreover, Higher Education Institutions in South Africa have voiced concerns relating to delayed and prolonged recruitment and selection processes experienced in the employment process of academic expatriates. Once employed, academic expatriates should be supported and acquainted with the surroundings and the local communities, as well as be assisted to establish working relations with colleagues in order to facilitate their adjustment and integration process. Hence, an employer should play a critical role in facilitating the adjustment of academic expatriates. This mixed-methods study was located in four Higher Education Institutions based in the KwaZulu-Natal province, in South Africa. The explanatory sequential design approach was deployed in the study. The merits of this approach were chiefly that it employed both quantitative and qualitative techniques of inquiry. Therefore, the study examined and interrogated its subject from a multiplicity of quantitative and qualitative vantage points, yielding a much more enriched and enriching illumination. Mixing the strengths of both the quantitative and the qualitative techniques delivered a much more durable articulation and understanding of the subject. A 5-point Likert scale questionnaire was used to collect quantitative data relating to interaction adjustment, general adjustment and work adjustment from academic expatriates. One hundred and forty-two (142) academic expatriates participated in the quantitative study. Qualitative data relating to the employment process and support offered to academic expatriates were collected through a structured questionnaire and semi-structured interviews. A total of 48 respondents, including line managers, human resources practitioners and academic expatriates, participated in the qualitative study. The independent t-test, ANOVA and descriptive statistics were performed to analyse, interpret and make meaning of the quantitative data, and thematic analysis was used to analyse the qualitative data. The qualitative results revealed that academic talent is sourced from outside the borders of the country because of the academic skills shortage in almost all academic disciplines, especially those associated with Science, Engineering and Accounting. However, delays in the work permit application process made it difficult to finalise the recruitment and selection process on time. Furthermore, the quantitative results revealed that academic expatriates experience general and interaction adjustment challenges associated with the use of the local language and understanding of the local culture. However, female academic expatriates were found to be better adjusted in the two areas as compared to male academic expatriates. Moreover, significant mean differences were found between institutions, suggesting that academic expatriates based in rural areas experienced adjustment challenges differently from academic expatriates based in urban areas.
The study pointed to the need for policy revisions in the areas of immigration, human resources and academic administration.Keywords: academic expatriates, recruitment and selection, interaction and general adjustment, work adjustment
Procedia PDF Downloads 30656 Active Learning Methods in Mathematics
Authors: Daniela Velichová
Abstract:
Plenty of ideas on how to adopt active learning methods in education are available nowadays. Mathematics is a subject where the active involvement of students is required in particular in order to achieve desirable results regarding sustainable knowledge and deep understanding. The present article is based on the outcomes of the Erasmus+ project DrIVE-MATH, which was aimed at developing a novel and integrated framework to teach maths classes in engineering courses at the university level. It is fundamental for students from the early years of their academic life to have agile minds. They must be prepared to adapt to their future working environments, where enterprises’ views are always evolving, where all collaborate in teams, and relations between peers are shaped for the well-being of the whole - workers and company profit. This reality imposes new requirements on higher education in terms of adaptation of different pedagogical methods, such as project-based and active-learning methods used within the course curricula. Active learning methodologies are regarded as an effective way to prepare students to meet the challenges posed by enterprises and to help them in building critical thinking, analytic reasoning, and insight into complex problems from different perspectives. Fostering learning-by-doing activities in the pedagogical process can help students to achieve learning independence, as they could acquire deeper conceptual understanding by experimenting with the abstract concept in a more interesting, useful, and meaningful way. Clear information about learning outcomes and goals might help students to take more responsibility for their learning results. Active learning methods implemented by the project team members in their teaching practice, eduScrum and Jigsaw in particular, proved to provide better scientific and soft skills support to students than classical teaching methods. The eduScrum method enables teachers to generate a working environment that stimulates students' working habits and self-initiative as they become aware of their responsibilities within the team, their own acquired knowledge, and their abilities to solve problems independently, though in collaboration with other team members. This method enhances collaborative learning, as students work in teams towards a common goal - knowledge acquisition - while they interact with each other and are evaluated individually. Teams consisting of 4-5 students work together on a list of problems - a sprint; each member is responsible for solving one of them, while the group leader – the master – is responsible for the whole team. A similar principle is behind the Jigsaw technique, where the classroom activity makes students dependent on each other to succeed. Students are divided into groups, and assignments are split into pieces, which need to be assembled by the whole group to complete the (Jigsaw) puzzle. In this paper, an analysis of students’ perceptions concerning the achievement of deeper conceptual understanding in mathematics and the development of soft skills, such as self-motivation, critical thinking, flexibility, leadership, responsibility, teamwork, negotiation, and conflict management, is presented. Some new challenges brought by introducing active learning methods in basic mathematics courses are also discussed.
In addition, a few examples of sprints developed and used in teaching basic maths courses at technical universities are presented.Keywords: active learning methods, collaborative learning, conceptual understanding, eduScrum, Jigsaw, soft skills
Procedia PDF Downloads 5455 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique is investigated for this type of flow, at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is very difficult to find in the literature. Besides, most of the situations where the Reynolds number effect is evaluated in separated flows are in numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out, for the measured variables, using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were relatively low, in general. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved to be able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data, to obtain low noise levels, thus decreasing the uncertainty.Keywords: ADV, experimental data, multiple Reynolds number, post-processing
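The following is a minimal sketch of the moving block bootstrap used for the uncertainty analysis, applied here to a synthetic autocorrelated velocity record; the block length, record length, and AR(1) stand-in signal are illustrative assumptions rather than the study's actual settings.

```python
# Hedged sketch of a moving block bootstrap for the uncertainty of the mean
# streamwise velocity. The record below is a synthetic AR(1) stand-in for a
# despiked ADV time series; block length and sample size are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n, mean_u, phi = 5000, 0.25, 0.9
u = np.empty(n)
u[0] = mean_u
for i in range(1, n):
    u[i] = mean_u + phi * (u[i - 1] - mean_u) + 0.01 * rng.standard_normal()

def moving_block_bootstrap(x, block_len, n_boot=1000, rng=rng):
    n = len(x)
    starts_max = n - block_len + 1          # allowable block start positions
    n_blocks = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, starts_max, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        means[b] = sample.mean()
    return means

boot_means = moving_block_bootstrap(u, block_len=50)
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean u = {u.mean():.4f} m/s, 95% CI = [{lo:.4f}, {hi:.4f}]")
```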
Procedia PDF Downloads 14754 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics
Authors: Varun Kumar, Chandra Shakher
Abstract:
Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), beam homogenization for high-power lasers, critical components in Shack-Hartmann sensors, fiber optic coupling, and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high-quality standards in the manufacturing of micro-optical components is a precondition for being competitive in worldwide markets. Therefore, high demands are put on quality assurance. For quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. The technique should provide non-contact, non-invasive and full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques, such as holographic interferometry or Mach-Zehnder interferometry, are available for the characterization of micro-lenses. However, these techniques need more experimental effort and are also time-consuming. Digital holography (DH) overcomes the above-described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. Also, one can reconstruct the complex object wavefront at different depths due to numerical reconstruction. Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a digital hologram is first recorded in the absence of the sample (microlens array) as a reference hologram. A second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides the information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported.
The sag of the microlenses agrees well, within experimental limits, with the specification provided by the manufacturer.Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy
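A minimal sketch of the final evaluation step, converting an unwrapped phase-difference map into a thickness (sag) profile and a radius of curvature, is given below; the synthetic phase map, wavelength, and refractive index are illustrative assumptions, not the experimental data.

```python
# Hedged sketch: phase difference -> thickness (sag) -> radius of curvature.
# The phase map is synthetic; wavelength and refractive index are assumed.
import numpy as np

wavelength = 632.8e-9      # m, He-Ne illumination (assumed)
n_lens, n_air = 1.46, 1.0  # refractive indices (assumed fused-silica lens, air)

# Synthetic unwrapped phase-difference map of a single plano-convex microlens.
N, pitch = 256, 0.5e-6                    # pixels, pixel size [m]
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)
r_lens, sag_true = 50e-6, 2.0e-6
t = np.clip(sag_true * (1 - (X**2 + Y**2) / r_lens**2), 0, None)  # thickness [m]
dphi = 2 * np.pi / wavelength * (n_lens - n_air) * t              # phase [rad]

# Invert the phase-thickness relation to recover the surface profile.
t_rec = dphi * wavelength / (2 * np.pi * (n_lens - n_air))
sag = t_rec.max()
roc = (r_lens**2 + sag**2) / (2 * sag)    # spherical-cap approximation
print(f"sag = {sag*1e6:.2f} um, radius of curvature = {roc*1e6:.1f} um")
```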
Procedia PDF Downloads 49853 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing
Authors: Leonie Bradfield, Stephen Fityus, John Simmons
Abstract:
The shearing behavior of current and planned coal mine spoil dumps up to 400 m in height is studied using large-sample, high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights (> 350 m) are exceeding the scale (≤ 120 m) for which reliable design information exists, and because modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based either on infrequent, very small-scale tests where oversize particles are scalped to comply with device specimen size capacity, such that the influence of prototype-sized particles on shear strength is not captured; or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by the slope performance of dumps up to 120 m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720 mm x 720 mm x 600 mm) and stress (σn up to 4.6 MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill. The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlight several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicate that shear strength envelopes for coal-mine spoils abundant with rock fragments are not in fact curved and that the shape of the failure envelope is ultimately determined by the strength of rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump
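To illustrate the linear versus curvilinear envelope comparison discussed above, the following sketch fits a Mohr-Coulomb line and a power-law (concave-down) envelope to synthetic direct shear data; the data points and the extrapolation stress are assumptions, not LDSM results.

```python
# Hedged sketch: fitting linear (Mohr-Coulomb) and curvilinear (power-law)
# shear strength envelopes to direct shear data. The (normal stress, shear
# strength) pairs are synthetic, not LDSM measurements.
import numpy as np
from scipy.optimize import curve_fit

sigma_n = np.array([0.1, 0.3, 0.6, 1.2, 2.3, 4.6])        # MPa
tau     = np.array([0.09, 0.24, 0.44, 0.80, 1.40, 2.55])  # MPa (synthetic)

# Linear Mohr-Coulomb: tau = c + sigma_n * tan(phi)
(p_lin, _) = curve_fit(lambda s, c, tanphi: c + s * tanphi, sigma_n, tau)
c, tanphi = p_lin

# Curvilinear power-law (De Mello-style) envelope: tau = A * sigma_n ** b
(p_pow, _) = curve_fit(lambda s, A, b: A * s ** b, sigma_n, tau, p0=(1.0, 0.8))
A, b = p_pow

print(f"Mohr-Coulomb: c = {c:.3f} MPa, phi = {np.degrees(np.arctan(tanphi)):.1f} deg")
print(f"Power law:    tau = {A:.3f} * sigma_n^{b:.3f} (MPa)")

# Divergence at high stress (assumed sigma_n ~ 8 MPa for a very high dump):
s_hi = 8.0
print(f"At {s_hi} MPa: linear {c + s_hi*tanphi:.2f} MPa vs power {A*s_hi**b:.2f} MPa")
```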
Procedia PDF Downloads 161