Search results for: selection standards
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4098

258 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius, volume, and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, the accuracy limit of +/- 0.03 is achieved in all modes. In sum, 70% of all cases stay below +/- 0.03, which is sufficient for climate change studies.
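The truncated-SVD idea can be sketched in a few lines. The example below is a hedged illustration, not the lidar software itself: the smoothing kernel, noise level, and truncation index are invented for demonstration, and a single truncation index k stands in for the paper's triple of regularization parameters.

```python
import numpy as np

# Generic ill-posed inversion g = K f + noise, regularized by keeping only
# the largest singular values of K (truncated SVD). All numbers illustrative.
rng = np.random.default_rng(0)

n = 50
x = np.linspace(0.0, 1.0, n)
# Gaussian smoothing kernel: strongly smoothing, hence severely ill-conditioned.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
K /= K.sum(axis=1, keepdims=True)

f_true = np.exp(-((x - 0.4) ** 2) / (2 * 0.08 ** 2))  # mono-modal "PSD"
g = K @ f_true + 1e-3 * rng.standard_normal(n)        # noisy "optical data"

U, s, Vt = np.linalg.svd(K)

def tsvd_solve(k):
    """Invert keeping only the k largest singular values (k = reg. parameter)."""
    return Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])

f_naive = tsvd_solve(n)  # no truncation: small data errors amplified hugely
f_reg = tsvd_solve(8)    # truncated: stable reconstruction

err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
err_reg = np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true)
```

Choosing the truncation index is exactly the regularization-parameter problem described above: too large and the amplified noise dominates, too small and the retrieved distribution is over-smoothed.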

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 324
257 Higher Education in India: Strengths, Weaknesses, Opportunities and Threats

Authors: Renu Satish Nair

Abstract:

The Indian higher education system is the third largest in the world, after those of the United States and China. India is experiencing rapid growth in higher education in terms of student enrollment as well as the establishment of new universities, colleges and institutes of national importance. Presently, about 22 million students are enrolled in higher education, and more than 46 thousand institutions are functioning as centers of higher education. The Indian government plays a 'command and control' role in higher education. The main governing body is the University Grants Commission, which enforces its standards, advises the government, and helps coordinate between the centre and the states. Accreditation of higher learning is overseen by 12 autonomous institutions established by the University Grants Commission. The present paper is an effort to analyze the strengths, weaknesses, opportunities and threats (SWOT analysis) of the Indian higher education system. Higher education in India is progressing by virtue of its strengths, which are being recognized at the global level. Several institutions of India, such as the Indian Institutes of Technology (IITs), Indian Institutes of Management (IIMs) and National Institutes of Technology (NITs), have been globally acclaimed for their standard of education. Three Indian universities were listed in the Times Higher Education list of the world’s top 200 universities in 2005 and 2006: the Indian Institutes of Technology, the Indian Institutes of Management and Jawaharlal Nehru University. Six Indian Institutes of Technology and the Birla Institute of Technology and Science, Pilani were listed among the top 20 science and technology schools in Asia by Asiaweek. The Indian School of Business, situated in Hyderabad, was ranked number 12 in the Global MBA Rankings by the Financial Times of London in 2010, while the All India Institute of Medical Sciences has been recognized as a global leader in medical research and treatment.
At the same time, because of its vast expansion, the system bears several weaknesses. The Indian higher education system in many parts of the country is in a state of disrepair. In almost half the districts in the country, higher education enrollment is very low. Almost two-thirds of all universities and 90% of colleges are rated below average on quality parameters. This can be attributed to underprepared faculty, unwieldy governance and other obstacles to innovation and improvement that could prohibit India from meeting its national education goals. The opportunities in the Indian higher education system are wide-ranging. The national institutions are training their graduates to compete at the global level and making them capable of seizing opportunities worldwide. The state universities and colleges, with their limited resources, are producing graduates who are capable enough to secure career opportunities and hold responsible positions in various government and private sectors within the country. This further creates opportunities for the weaker sections of society to join the mainstream. There are several factors which can be defined as threats to the Indian higher education system; these are a matter of great concern and need proper attention. Some important factors are: a conservative society, particularly regarding women's education; a lack of transparency; and the treatment of higher education as a business.

Keywords: Indian higher education system, SWOT analysis, university grants commission, Indian institutes of technology

Procedia PDF Downloads 861
256 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)

Authors: Ahmad Kayvani Fard, Yehia Manawi

Abstract:

Qatar’s primary source of fresh water is seawater desalination. Amongst the major processes that are commercially available on the market, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive due to high energy input requirements and high operating costs associated with maintenance and the stress induced on the systems in harsh alkaline media. Besides cost, the environmental footprint of these desalination techniques is significant: damage to marine ecosystems, large land use, and the discharge of tons of greenhouse gases, giving a huge carbon footprint. A less energy-consuming technique based on membrane separation, sought to reduce both the carbon footprint and operating costs, is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted more attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED), and especially RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate due to the transfer of only water vapor, the utilization of low-grade or waste heat from the oil and gas industries to heat the feed up to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating costs. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested.
The objective of this study is to analyze the characteristics and morphology of the membrane suitable for DCMD, through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data are used to compare the DCMD distillate quality with that of other desalination techniques and standards. SEM analysis showed that the PTFE membrane used for the study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore-size PP membrane. The study of the effect of feed salinity and temperature on distillate water quality, based on ICP and IC analysis, showed that for any salinity and different feed temperatures (up to 70°C), the electric conductivity of the distillate is less than 5 μS/cm with 99.99% salt rejection. DCMD proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high-salinity feed solutions (i.e., 100,000 mg/L TDS), with a substantial quality advantage over other desalination methods such as RO and MSF.
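The trans-membrane vapor pressure difference that drives MD can be estimated with the Antoine equation. The sketch below is illustrative only: the Antoine constants are the standard ones for water, but the feed temperature pairing and the brine mole fraction are assumptions for demonstration, not values measured in the study.

```python
def p_sat_water(t_celsius):
    """Saturation vapor pressure of water (Pa) via the Antoine equation
    (constants valid roughly 1-100 degC, P in mmHg before conversion)."""
    A, B, C = 8.07131, 1730.63, 233.426
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg * 133.322  # mmHg -> Pa

# Hot saline feed at 70 degC vs. cold distillate side at 20 degC.
# Salinity lowers the feed-side vapor pressure roughly via Raoult's law.
x_water = 0.94  # assumed water mole fraction for a ~100 g/L brine (illustrative)
p_feed = x_water * p_sat_water(70.0)
p_perm = p_sat_water(20.0)
driving_force = p_feed - p_perm  # Pa, trans-membrane vapor pressure difference
```

Because only water vapor crosses the hydrophobic pores, this pressure difference, rather than a hydraulic trans-membrane pressure, is what moves mass — which is why module stress and fouling impact are lower than in RO.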

Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation

Procedia PDF Downloads 207
255 Ultrafiltration Process Intensification for Municipal Wastewater Reuse: Water Quality, Optimization of Operating Conditions and Fouling Management

Authors: J. Yang, M. Monnot, T. Eljaddi, L. Simonian, L. Ercolei, P. Moulin

Abstract:

The application of membrane technology to wastewater treatment has expanded rapidly under increasingly stringent legislation and environmental protection requirements. At the same time, water resources are becoming precious, and water reuse has gained popularity. In particular, ultrafiltration (UF) is a very promising technology for water reuse, as it can retain organic matter, suspended solids, colloids, and microorganisms. Nevertheless, few studies dealing with the operating optimization of UF as a tertiary treatment for water reuse at semi-industrial scale appear in the literature. Therefore, this study aims to explore the permeate water quality and to optimize the operating parameters (maximizing productivity and minimizing irreversible fouling) through the operation of a UF pilot plant under real conditions. A fully automatic semi-industrial UF pilot plant with periodic classic backwashes (CB) and air backwashes (AB) was set up to filter the secondary effluent of an urban wastewater treatment plant (WWTP) in France. In this plant, the secondary treatment consists of a conventional activated sludge process followed by a sedimentation tank. The UF process was thus defined as a tertiary treatment and was operated at constant flux. It is important to note that a combination of CBs and chlorinated ABs was used for better fouling management. A 200 kDa hollow-fiber membrane was used in the UF module, with an initial permeability (for WWTP outlet water) of 600 L·m⁻²·h⁻¹·bar⁻¹ and a total filtration surface of 9 m². Fifteen filtration conditions with different fluxes, filtration times, and air backwash frequencies were each operated for more than 40 hours to observe their hydraulic filtration performance. Through comparison, the best sustainable condition was a flux of 60 L·h⁻¹·m⁻², a filtration time of 60 min, and a backwash frequency of 1 AB every 3 CBs.
The optimized condition stands out from the others with a > 92% water recovery rate, better irreversible fouling control, stable permeability variation, efficient backwash reversibility (80% for CB and 150% for AB), and no chemical washing required over 40 h of filtration. For all tested conditions, the permeate water quality met the water reuse guidelines of the World Health Organization (WHO), French standards, and the regulation of the European Parliament adopted in May 2020 setting minimum requirements for water reuse in agriculture. In the permeate, the total suspended solids, biochemical oxygen demand, and turbidity were decreased to < 2 mg·L⁻¹, ≤ 10 mg·L⁻¹, and < 0.5 NTU, respectively; Escherichia coli and Enterococci showed > 5 log removal, and the other required microorganism analyses were below the detection limits. Additionally, because of the COVID-19 pandemic, coronavirus SARS-CoV-2 was measured in the raw wastewater of the WWTP, the UF feed, and the UF permeate in November 2020. The raw wastewater tested positive, above the detection limit but below the quantification limit. Interestingly, the UF feed and UF permeate tested negative for SARS-CoV-2 by PCR assay. In summary, this work confirms the great interest of UF as an intensified tertiary treatment for water reuse and gives operational indications for future industrial-scale production of reclaimed water.
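As a rough illustration of the reported recovery figure, the fragment below estimates the net water recovery for one filtration cycle at the optimized condition. The flux, module area, and cycle length come from the text; the backwash volume is an assumed number chosen only to show the arithmetic, not a value from the pilot plant.

```python
# Illustrative water-recovery estimate for one UF filtration cycle.
flux = 60.0       # L/h/m^2, permeate flux at constant-flux operation (from text)
area = 9.0        # m^2, module filtration surface (from text)
cycle_min = 60.0  # min of filtration between backwashes (from text)

permeate_per_cycle = flux * area * cycle_min / 60.0  # L produced per cycle

backwash_volume = 40.0  # L consumed per backwash (assumed, for illustration)

# Net recovery = water kept / water produced during the cycle
recovery = (permeate_per_cycle - backwash_volume) / permeate_per_cycle
```

With these assumed backwash losses the estimate lands just above 92%, consistent with the recovery rate reported for the optimized condition; in practice the balance also depends on CB/AB frequencies and rinse volumes.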

Keywords: semi-industrial UF pilot plant, water reuse, fouling management, coronavirus

Procedia PDF Downloads 91
254 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic System

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. 
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
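A minimal sketch of the genetic-algorithm stage described above is given below. It is not the authors' implementation: the random-forest surrogate of the emitter's spectral response is replaced by a hypothetical quadratic figure of merit, and the "optimal" SiC/W/SiO2/W layer thicknesses are invented, so the code only illustrates the selection, crossover, and mutation loop.

```python
import random

random.seed(42)

# Assumed optimal layer thicknesses (um) for the four-layer stack (invented).
TARGET = [0.40, 0.10, 0.25, 0.08]

def figure_of_merit(layers):
    """Stand-in for the RF-predicted spectral selectivity (higher is better)."""
    return -sum((t - o) ** 2 for t, o in zip(layers, TARGET))

def evolve(pop_size=30, generations=60, mutation=0.05):
    # Random initial population of thickness vectors in a plausible range.
    pop = [[random.uniform(0.01, 1.0) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=figure_of_merit, reverse=True)
        parents = pop[: pop_size // 2]       # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 4)     # one-point crossover
            child = a[:cut] + b[cut:]
            # Gaussian mutation, clamped to keep thicknesses positive.
            child = [max(0.01, t + random.gauss(0.0, mutation)) for t in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=figure_of_merit)

best = evolve()
```

In the study's workflow, `figure_of_merit` would be the trained random-forest surrogate evaluated on the candidate stack, which is what makes the GA search cheap compared with running full electromagnetic simulations for every candidate.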

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 30
253 The Recommended Summary Plan for Emergency Care and Treatment (ReSPECT) Process: An Audit of Its Utilisation on a UK Tertiary Specialist Intensive Care Unit

Authors: Gokulan Vethanayakam, Daniel Aston

Abstract:

Introduction: The ReSPECT process supports healthcare professionals when making patient-centred decisions in the event of an emergency. It has been widely adopted by the NHS in England and allows patients to express thoughts and wishes about treatments and outcomes that they consider acceptable. It includes (but is not limited to) cardiopulmonary resuscitation decisions. ReSPECT conversations should ideally occur prior to ICU admission and should be documented in the eight sections of the nationally standardised ReSPECT form. This audit evaluated the use of ReSPECT on a busy cardiothoracic ICU in an NHS Trust where established policies advocating its use exist. Methods: This audit was a retrospective review of ReSPECT forms for a sample of high-risk patients admitted to the ICU at the Royal Papworth Hospital between January 2021 and March 2022. Patients all received one of the following interventions: Veno-Venous Extra-Corporeal Membrane Oxygenation (VV-ECMO) for severe respiratory failure (retrieved via the national ECMO service); cardiac or pulmonary transplantation-related surgical procedures (including organ transplants and Ventricular Assist Device (VAD) implantation); or elective non-transplant cardiac surgery. The quality of documentation on ReSPECT forms was evaluated against national standards and with a graded ranking tool devised by the authors to assess narrative aspects of the forms. Quality was ranked from A (excellent) to D (poor). Results: Of 230 patients (74 VV-ECMO, 104 transplant, 52 elective non-transplant surgery), 43 (18.7%) had a ReSPECT form, and only one (0.43%) patient had a ReSPECT form completed prior to ICU admission. Of the 43 forms completed, 38 (88.4%) were completed due to the commencement of End of Life (EoL) care. No non-transplant surgical patients included in the audit had a ReSPECT form.
There was documentation of balance of care (section 4a), CPR status (section 4c), capacity assessment (section 5), and patient involvement in completing the form (section 6a) on all 43 forms. Of the 34 patients assessed as lacking capacity to make decisions, only 22 (64.7%) had the reasons documented. Other sections were variably completed: 29 (67.4%) forms had relevant background information included to a good standard (section 2a). Clinical guidance for the patient (section 4b) was given in 25 (58.1%), of which 11 stated the rationale that underpinned it. Seven forms (16.3%) contained information in an inappropriate section. In a comparison of ReSPECT forms completed ahead of an EoL trigger with those completed when EoL care began, there was a higher number of entries in section 3 (considering the patient’s values/fears) assessed at grades A-B in the former group (p = 0.014), suggesting higher quality. Similarly, forms from the transplant group contained higher-quality information in section 3 than those from the VV-ECMO group (p = 0.0005). Conclusions: Utilisation of the ReSPECT process in high-risk patients has yet to be widely adopted in this Trust. Teams who meet patients before hospital admission for transplant or high-risk surgery should be encouraged to engage with the ReSPECT process at this point in the patient's journey. VV-ECMO retrieval teams should consider ReSPECT conversations with patients’ relatives at the time of retrieval.

Keywords: audit, critical care, end of life, ICU, ReSPECT, resuscitation

Procedia PDF Downloads 52
252 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic

Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink

Abstract:

Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge’s vibrations as realistically as possible. On the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure's vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow for applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which is intended to account for the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations.
However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured bridge accelerations, while in other cases, they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure considering the vehicle-bridge interaction when using the MLM are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two additional recently published alternative formulations derived from analytical approaches. For a catalogue of 65 existing bridges in Austria in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow an assessment of the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results can reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than by using the normative specifications.
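The moving load model lends itself to a compact numerical sketch. The fragment below integrates the first bending mode of a simply supported beam crossed by a train of constant axle loads at constant speed; every parameter (span, mass, stiffness, damping, axle load, spacing, speed) is an illustrative assumption, not a bridge or train from the study's catalogue.

```python
import math

# Moving load model (MLM): constant axle loads traverse the span at speed v;
# the bridge response is reduced to its first bending mode (an SDOF system).
L = 20.0        # span (m), assumed
m = 10000.0     # mass per unit length (kg/m), assumed
EI = 5.0e9      # bending stiffness (N m^2), assumed
zeta = 0.01     # modal damping ratio, assumed
P = 150e3       # axle load (N), assumed
spacing = 18.0  # axle spacing (m), assumed
n_axles = 8
v = 60.0        # train speed (m/s), assumed

M1 = m * L / 2.0                              # modal mass, mode 1
w1 = (math.pi / L) ** 2 * math.sqrt(EI / m)   # circular frequency, mode 1

def modal_force(t):
    """Sum of P*sin(pi x/L) over all axles currently on the span."""
    f = 0.0
    for k in range(n_axles):
        x = v * t - k * spacing
        if 0.0 <= x <= L:
            f += P * math.sin(math.pi * x / L)
    return f

# Semi-implicit Euler stepping of q'' + 2*zeta*w1*q' + w1^2*q = F(t)/M1
dt = 1e-4
q = qd = 0.0
t = 0.0
t_end = (n_axles * spacing + L) / v   # until the last axle leaves the span
a_max = 0.0
while t < t_end:
    qdd = modal_force(t) / M1 - 2 * zeta * w1 * qd - w1 ** 2 * q
    qd += qdd * dt
    q += qd * dt
    a_max = max(a_max, abs(qdd))  # mid-span acceleration (mode shape = 1 there)
    t += dt
```

Because the loads carry no mass, spring, or damper of their own, nothing in this model dissipates energy on the vehicle side, which is precisely why the MLM overestimates accelerations near resonance and why an additional damping factor is applied to the structure instead.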

Keywords: additional damping method, bridge dynamics, high-speed railway traffic, vehicle-bridge interaction

Procedia PDF Downloads 146
251 Language Education Policy in Arab Schools in Israel

Authors: Fatin Mansour Daas

Abstract:

Language education responds to and is reflective of emerging social and political trends. Language policies and practices are shaped by political, economic, social and cultural considerations. Following this, Israeli language education policy as implemented in Arab schools in Israel is influenced by the particular political and social situation of Arab-Palestinian citizens of Israel. This national group remained in their homeland following the war in 1948 between Israel and its Arab neighbors and became Israeli citizens following the establishment of the State of Israel. This study examines language policy in Arab schools in Israel from 1948 until the present time in light of the unique experience of the Palestinian Arab homeland minority in Israel with a particular focus on questions of politics and identity. The establishment of the State of Israel triggered far-reaching political, social and educational transformations within Arab Palestinian society in Israel, including in the area of language and language studies. Since 1948, the linguistic repertoire of Palestinian Arabs in Israel has become more complex and diverse, while the place and status of different languages have changed. Following the establishment of the State of Israel, only Hebrew and Arabic were retained as the official languages, and Israeli policy reflected this in schools as well: with the advent of the Jewish state, Hebrew language education among Palestinians in Israel has increased. Similarly, in Arab Palestinian schools in Israel, English is taught as a third language, Hebrew as a second language, and Arabic as a first language – even though it has become less important to native Arabic speakers. This research focuses on language studies and language policy in the Arab school system in Israel from 1948 onwards. 
It will analyze the relative focus of language education between the different languages, the rationale of the various language education policies, and the pedagogic approach used to teach each language, as well as student achievements vis-à-vis language skills. This study seeks to understand the extent to which Arab schools in Israel are multilingual by examining successes, challenges and difficulties in acquiring the respective languages. This qualitative study will analyze five different components of language education policy: (1) curricula; (2) learning materials; (3) assessments; (4) interviews; and (5) archives. First, it examines the language education curricula, learning materials and assessments used in Arab schools in Israel from 1948 to 2018, including a selection of language textbooks for the compulsory years of study and the final matriculation (Bagrut) examinations. The findings will also be based on archival material which traces the evolution of language education policy in Arab schools in Israel over the years 1948-2018. This archival research will, furthermore, reveal power relations and general decision-making in the Arab education system in Israel. The research will also include interviews with Ministry of Education staff who provide instructional oversight of the three languages taught in the Arab education system in Israel. These interviews will shed light on the goals of language education as understood by those in charge of implementing policy.

Keywords: language education policy, languages, multilingualism, language education, educational policy, identity, Palestinian-Arabs, Arabs in Israel, educational school system

Procedia PDF Downloads 66
250 Lifting Body Concepts for Unmanned Fixed-Wing Transport Aircrafts

Authors: Anand R. Nair, Markus Trenker

Abstract:

Lifting body concepts were conceived as early as 1917 and patented by Roy Scroggs. The idea was to use the fuselage as a lift-producing body with no or small wings. Many of these designs were developed and even flight-tested between the 1920s and 1970s, but the concept was not pursued further for commercial flight because, at lower airspeeds, such a configuration was incapable of producing sufficient lift for the entire aircraft. The concept presented in this contribution combines the lifting body design with a fixed wing to maximise the lift produced by the aircraft. Conventional aircraft fuselages are designed to be aerodynamically efficient, which means minimising drag; however, these fuselages produce very little or negligible lift. For the design of an unmanned fixed-wing transport aircraft, many of the restrictions present in commercial aircraft fuselage design can be excluded, such as windows for the passengers/pilots, cabin-environment systems, emergency exits, and pressurization systems. This gives new flexibility to design fuselages that are unconventionally shaped to contribute to the lift of the aircraft. The two lifting body concepts presented in this contribution target different applications. For a fast cargo delivery drone, the fuselage is based on a scaled airfoil shape with a cargo capacity of 500 kg on Euro pallets. The aircraft has a span of 14 m and a range of 1500 km at a cruising speed of 90 m/s. The aircraft could also easily be adapted to accommodate a pilot and passengers with modifications to the internal structures, but pressurization is not included, as the service ceiling envisioned for this type of aircraft is limited to 10,000 ft. The next concept to be investigated, called a multi-purpose drone, incorporates a different type of lifting body and is a much more versatile aircraft, as it will have VTOL capability.
The aircraft will have a wingspan of approximately 6 m and a flight speed of 60 m/s within the same service ceiling as the fast cargo delivery drone. The multi-purpose drone can easily be adapted for various applications such as firefighting, agriculture, surveillance, and even passenger transport. Lifting body designs are not a new concept, but their effectiveness for cargo transportation has not been widely investigated. Due to their enhanced lift-producing capability, lifting body designs enable a reduction of the wing area and of the overall weight of the aircraft. This will, in turn, reduce the thrust requirement and ultimately the fuel consumption. The various designs proposed in this contribution are based on the general aviation category of aircraft and are focussed on unmanned operation. These unmanned fixed-wing transport drones will feature appropriate cargo loading/unloading concepts which can accommodate large cargo for efficient time management and ease of operation. The various designs will be compared in performance to their conventional counterparts to understand their benefits/shortcomings in terms of design, performance, complexity, and ease of operation. The majority of the performance analysis will be carried out using industry-relevant standards in computational fluid dynamics software packages.
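The wing-area argument can be made concrete with the standard lift equation L = ½·ρ·V²·S·C_L. The numbers below are illustrative assumptions: the lift coefficient, all-up weight, air density, and fuselage lift share are not from the study; only the 90 m/s cruise speed and 500 kg payload echo the text.

```python
# Back-of-the-envelope sketch of why a lifting fuselage lets the wing shrink:
# solve the lift equation for wing area, with an assumed fraction of total
# lift carried by the fuselage.
rho = 1.112        # air density (kg/m^3) at an assumed cruise altitude
V = 90.0           # cruise speed (m/s), from the cargo-drone concept
CL = 0.5           # assumed wing cruise lift coefficient
W = 2500.0 * 9.81  # assumed all-up weight (N), incl. the 500 kg payload

def wing_area(fuselage_lift_fraction):
    """Wing area needed when the fuselage carries the given share of lift."""
    wing_lift = W * (1.0 - fuselage_lift_fraction)
    return wing_lift / (0.5 * rho * V ** 2 * CL)

S_conventional = wing_area(0.0)  # fuselage lift negligible
S_lifting_body = wing_area(0.3)  # fuselage carries an assumed 30% of lift
```

Under these assumptions the wing shrinks in direct proportion to the fuselage's lift share, and the reduced wing area then cascades into lower structural weight, drag, and thrust requirement, which is the chain of benefits claimed above.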

Keywords: lifting body concept, computational fluid dynamics, unmanned fixed-wing aircraft, cargo drone

Procedia PDF Downloads 203
249 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand

Authors: Wen Liu, Errol Haarhoff, Lee Beattie

Abstract:

In 2010, New Zealand’s central government re-organised local government arrangements in Auckland by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years, as expressed in the first ever spatial plan in the region, the Auckland Plan (2012). The Auckland Plan supports implementing a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention of making Auckland a globally competitive city and achieving ‘the most liveable city in the world’. Turning that vision into reality is operationalised through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out about intensification: the ‘gap’ between strategic planning and what has been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is profoundly influential, yet has received limited empirical analysis. In Auckland, the establishment of the Auckland Plan set out strategies to deliver intensification across diverse arenas.
Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery depends on the Unitary Plan. However, questions have been asked as to whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher density development. It explores the process of plan development, plan making and the implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and considers whether these will facilitate decision-making processes to realize the anticipated intensive urban development.

Keywords: urban intensification, sustainable development, plan making, governance and implementation

Procedia PDF Downloads 529
248 Detection the Ice Formation Processes Using Multiple High Order Ultrasonic Guided Wave Modes

Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand

Abstract:

Icing causes significant damage to aviation and renewable energy installations. Air-conditioning and refrigeration equipment, wind turbine blades, and airplane and helicopter blades often suffer from icing phenomena, which cause severe energy losses and impair aerodynamic performance. The icing process is a complex phenomenon with many different causes and types. Icing mechanisms, distributions, and patterns remain relevant research topics. The adhesion strength between ice and surfaces differs in different icing environments, which makes the task of anti-icing very challenging. The techniques for the various icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, and reliable operation). Noticeably, most methods are oriented toward a particular sector, and adapting or recommending them for other areas is quite problematic. These methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but, at the same time, quite high energy consumption, and they require intervention in the structure’s design. The vast majority of these methods also require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic and superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming and depend on forecasting. They can be applied on small surfaces only for specific targets, and most are non-biodegradable (except for anti-freezing proteins). There is some quite promising information on ultrasonic ice mitigation methods that employ ultrasonic guided waves (UGW).
These methods have the advantages of low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of ultrasonic de-icing methodology is still limited. The objective of this work was to identify ice formation processes and their progress by employing an ultrasonic guided wave technique. Throughout this research, a universal set-up for acoustic measurement of ice formation under real conditions (temperature range from +24 °C to -23 °C) was developed. Ultrasonic measurements were performed using high-frequency 5 MHz transducers in a pitch-catch configuration. The wave modes suitable for detection of the ice formation phenomenon on a copper surface were selected, and the interaction between the selected wave modes and ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes. It was demonstrated that the proposed ultrasonic technique can successfully be used for the detection of ice layer formation on a metal surface.
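One common way to process such pitch-catch measurements in software is to cross-correlate the received waveform against an ice-free reference to track the time-of-flight shift as the ice layer grows. The sketch below uses synthetic signals and assumed parameters, not the measurement data of this study:

```python
# Illustrative sketch (synthetic signals, assumed parameters): ice growth
# on the propagation path delays and attenuates the guided-wave arrival.
# Cross-correlating the received waveform against a bare-surface reference
# estimates the time-of-flight shift.
import numpy as np

FS = 100e6          # sampling rate, Hz (assumed)
F0 = 5e6            # 5 MHz transducer centre frequency
t = np.arange(0, 20e-6, 1 / FS)

def toneburst(delay_s, amp=1.0):
    """Gaussian-windowed 5 MHz toneburst arriving after delay_s."""
    tau = t - delay_s
    return amp * np.exp(-((tau - 2e-6) / 0.5e-6) ** 2) * np.sin(2 * np.pi * F0 * tau)

reference = toneburst(delay_s=5.0e-6)        # bare copper surface
iced = toneburst(delay_s=5.3e-6, amp=0.6)    # assumed slower, attenuated arrival

# The cross-correlation lag at the maximum gives the time-of-flight shift.
xc = np.correlate(iced, reference, mode="full")
lag = np.argmax(xc) - (len(reference) - 1)
tof_shift_us = lag / FS * 1e6
print(f"estimated time-of-flight shift: {tof_shift_us:.2f} us")
```

In practice the shift (and the amplitude drop) would be tracked against temperature and time to flag the onset and progress of ice formation.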

Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing

Procedia PDF Downloads 40
247 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction

Authors: B. Godefroy, D. Beladjine, K. Beddiar

Abstract:

In France, construction BIM actors commonly cite the gains BIM offers for building operation (exploitation) through integration of the building's life cycle. Standardization at level 7 of development would achieve this stage of the digital model. Building owners include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing complex. In a context of financial constraint, BIM for exploitation aims to control costs, make long-term investment choices, renew the portfolio, and enable environmental standards to be met. It assumes a knowledge of the existing buildings, marked by their size and complexity. The information sought must be synthetic and structured; in general, it concerns a real estate complex. We conducted a study with professionals about their concerns and their ways of using BIM, in order to see how owners could benefit from this development. Bearing in mind the project management's recurring questions about operators' needs, we tested the following stages: 1) instil a minimal BIM culture in the operator's multidisciplinary teams, and then within each business line; 2) learn, through BIM tools, how each trade adapts in operations; 3) understand the place and creation of a graphic and technical database management system, and determine the components of its library according to needs; 4) identify the cross-functional interventions of its managers by business line (operations, technical, information systems, purchasing and legal aspects); 5) set an internal protocol and define the impact of BIM on their digital strategy.
In addition, continuity of management through the integration of construction models in the operation phase raises the question of interoperability: controlling the production of IFC files in the operator's proprietary format and the export and import processes, a solution rivalled by the traditional method of vectorization of paper plans. Companies that digitize housing complexes, and those in facilities management, produce IFC files directly according to their needs, without recourse to the construction model; they produce business models for exploitation. They standardize the components and equipment that are useful for coding. We observed the consequences of the use of BIM in the property industry and made the following observations: a) the value of the data prevails over the graphics; 3D is little used; b) the owner must, through his organization, promote the feedback of technical management information during the design phase; c) the operator's reflection on outsourcing concerns the acquisition of its information system and related services, weighing the risks and costs of internal versus external development. This study allows us to highlight: i) the need for an internal organization of operators prior to responding to construction management; ii) an evolution towards automated methods for creating models dedicated to exploitation, which would require specialization; iii) a review of project management communication, since management continuity does not revolve around the building model alone; it must take into account the operator's environment and reflect on its scope of action.

Keywords: information system, interoperability, models for exploitation, property industry

Procedia PDF Downloads 124
246 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment

Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Boubigot

Abstract:

Stringent fire safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might imply, so much so that the fire issue is considered right from the design of the aircraft structure. Due to the incorporation of an increasing amount of polymer matrix composites in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of materials is consequently of prime importance, as is the evaluation of their resistance to fire. Fire testing is mostly done using the so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in a fire zone (the ability to withstand fire for 5 min). The test consists in exposing a sample of at least 300 x 300 mm² to an 1100 °C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming and expensive, and gives access to limited information on the fire behavior of the materials (a pass-or-fail test). Consequently, it can barely be used for material development purposes. In this context, the laboratory UMET, in collaboration with industrial partners, has developed horizontal and vertical small-scale instrumented fire benches for the characterization of the fire behavior of composites. The benches use smaller samples (no more than 150 x 150 mm²), which cuts costs and hence increases sampling throughput. However, the main added value of our benches is the instrumentation used to collect useful information to understand the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical set-up, a complete characterization of the degradation process can be achieved via mass loss measurements and quantification of the gases released during the tests.
These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal set-up has been used, in particular, to study the performance and durability of a protective intumescent coating on 2 mm thick 2D laminates. The efficiency of this approach has been validated, and the optimized coating thickness has been determined, as well as the performance after aging. Reductions in performance after aging were attributed to the migration of some of the coating additives. The vertical set-up has enabled investigation of the degradation process of composites under fire. An isotropic and a unidirectional 4 mm thick laminate have been characterized using the bench and post-fire analyses. The mass loss measurements and the gas phase analyses of the two composites do not present significant differences, unlike the temperature profiles through the thickness of the samples. The differences have been attributed to differences in thermal conductivity, as well as to delamination, which is much more pronounced for the isotropic composite (observed on the IR images). This has been confirmed by X-ray microtomography. The developed benches have proven to be valuable tools to develop fire-safe composites.

Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography

Procedia PDF Downloads 248
245 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal

Abstract:

The automotive industry is one of the most important industries in the world, concerning not only the economy but also world culture. In the present financial and economic context, this field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time at a competitive price in order to achieve customer satisfaction. Two of the quality management techniques most strongly recommended by the specific standards of the automotive industry for product development are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure of products and processes, quantifying them by risk assessment, ranking the problems identified according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, built from the FMEA results, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation. In addition, Control Plans specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper, we propose a computer-aided solution based on Genetic Algorithms in order to reduce the effort of drafting the FMEA and Control Plan reports required for product launch, and to improve the development teams' knowledge for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they can be used as a means of finding solutions for multi-criteria optimization problems.
In our case, production cost is considered and reduced alongside the three specific FMEA risk factors. The analysis tool generates final reports for all FMEA processes. The data obtained in the FMEA reports are automatically integrated with the other entered parameters in the Control Plan. The solution is implemented as an application running on an intranet across two servers: one containing the analysis and plan-generation engine, and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting, and bending processes used to manufacture bus chassis. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose, implemented with open-source tools, is a cheap alternative to other solutions on the market.
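A minimal sketch of the kind of genetic algorithm described can look as follows; the chromosome encoding, the candidate actions, their RPN/cost figures, and the trade-off weights are all assumed for illustration, since the paper does not detail them here:

```python
# Minimal GA sketch (assumed encoding, not the paper's implementation):
# each chromosome picks one corrective action per failure mode, and the
# fitness trades off the resulting RPN against cost via a weighted sum.
import random

random.seed(42)

# Assumed data: per failure mode, candidate actions as (rpn_after, cost).
ACTIONS = [
    [(180, 0), (90, 40), (45, 120)],   # failure mode 1
    [(240, 0), (120, 60), (60, 150)],  # failure mode 2
    [(100, 0), (50, 30), (25, 90)],    # failure mode 3
]
W_RPN, W_COST = 1.0, 1.0               # assumed trade-off weights

def fitness(chromosome):
    """Lower is better: weighted total RPN plus weighted total cost."""
    total_rpn = sum(ACTIONS[i][g][0] for i, g in enumerate(chromosome))
    total_cost = sum(ACTIONS[i][g][1] for i, g in enumerate(chromosome))
    return W_RPN * total_rpn + W_COST * total_cost

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    pop = [[random.randrange(len(a)) for a in ACTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, len(ACTIONS))    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mutation_rate:        # point mutation
                i = random.randrange(len(ACTIONS))
                child[i] = random.randrange(len(ACTIONS[i]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("chosen action per failure mode:", best, "fitness:", fitness(best))
```

A real deployment would replace the weighted sum with the multi-criteria formulation the paper alludes to (e.g., Pareto ranking over RPN and cost), and draw the action data from the FMEA database rather than constants.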

Keywords: automotive industry, FMEA, control plan, automotive technology

Procedia PDF Downloads 385
244 Corporate Governance and Disclosure Practices of Listed Companies in the ASEAN: A Conceptual Overview

Authors: Chen Shuwen, Nunthapin Chantachaimongkol

Abstract:

Since the world has moved into a transitional period known as globalization, the business environment is now more complicated than ever before. Corporate information has become a matter of great importance for stakeholders seeking to understand the current situation. As a result, the concept of corporate governance has been broadly introduced to manage and control the affairs of corporations, while businesses are required to disclose both financial and non-financial information to the public via various communication channels such as the annual report, the financial report, the company’s website, etc. However, several other issues related to asymmetric information, such as moral hazard or adverse selection, still occur intensively in workplaces. To prevent such problems in business, an understanding is required of the factors that strengthen transparency, accountability, fairness, and responsibility. In light of these arguments, this paper aims to propose a conceptual framework that enables an investigation of how corporate governance mechanisms influence the disclosure efficiency of listed companies in the Association of Southeast Asian Nations (ASEAN), and of the factors that should be considered for further development of good behaviors, particularly with regard to voluntary disclosure practices. To achieve this purpose, an extensive literature review is applied as the research methodology. It is divided into three main steps. Firstly, the theories involved with both corporate governance and disclosure practices, such as agency theory, contract theory, signaling theory, moral hazard theory, and information asymmetry theory, are examined to provide theoretical background.
Secondly, the relevant literature is reviewed from multiple perspectives: corporate governance, its attributes and their roles in business processes; the influence of corporate governance mechanisms on business performance; and the factors determining corporate governance characteristics and capability. This outlines the parameters that should be included in the proposed model. Thirdly, the well-known OECD principles and previous empirical studies on corporate disclosure procedures are evaluated to identify similarities and differences with the disclosure patterns in the ASEAN. From this literature review, abundant factors and variables emerge. In addition, critical factors that also have an impact on disclosure behaviors are addressed in two groups: in the first group, factors linked to national characteristics, such as the quality of the national code, legal origin, culture, and the level of economic development; in the second group, findings that refer to the firm’s characteristics, such as ownership concentration, ownership rights, and the controlling group. However, because of research limitations, only selected literature is chosen and summarized to form part of the conceptual framework that explores the relationship between corporate governance and the disclosure practices of listed companies in the ASEAN.

Keywords: corporate governance, disclosure practice, ASEAN, listed company

Procedia PDF Downloads 176
243 Trafficking of Women and Children and Solutions to Combat It: The Case of Nigeria

Authors: Olatokunbo Yakeem

Abstract:

Human trafficking is a crime involving gross violations of human rights. Trafficking in persons is a severe socio-economic dilemma with both national and international dimensions. Human trafficking, or modern-day slavery, emanated from slavery, and it has been in existence since before the 6th century. Today, no country is exempt from the dehumanization of human beings, and as a result, it has become an international issue. The United Nations (UN) presented the international protocol to fight human trafficking worldwide, which brought about the international definition of human trafficking. The protocol aims to prevent, suppress, and punish trafficking in persons, especially women and children. The trafficking protocol is linked to transnational organised crime rather than migration. Over a hundred and fifty countries worldwide have enacted criminal and penal code trafficking legislation derived from the UN trafficking protocol. Sex trafficking is the most common type of exploitation of women and children. Other forms of this crime involve exploiting vulnerable victims through forced labour, child involvement in warfare, domestic servitude, debt bondage, and organ removal for transplantation. Trafficking of women and children into sexual exploitation represents a higher share of human trafficking than any other type of exploitation. Trafficking of women and children can happen either internally or across borders. It affects all kinds of people, regardless of race, social class, culture, religion, and education level; however, it is predominantly a gender-based issue against females. Furthermore, human trafficking can lead to life-threatening infections, mental disorders, lifetime trauma, and even the victim's death. The significance of this study is to explore why the root causes of the trafficking of women and children in Nigeria centre on poverty, the entrusting of children to relatives and friends, corruption, globalization, weak legislation, and ignorance.
The importance of this study is to establish how national, regional, and international organisations are using the 3Ps (Protection, Prevention, and Prosecution) to tackle human trafficking. The methodological approach for this study will be a qualitative paradigm. The rationale behind this selection is that the qualitative method will identify the phenomenon and interpret the findings comprehensively. Data collection will take the form of semi-structured in-depth interviews by telephone and email. The researcher will use descriptive thematic analysis to analyse the data through complete coding. In summary, this study aims to recommend that the Nigerian federal government include human trafficking as a subject in the educational curriculum, as an early intervention to prevent children from being coerced by criminal gangs. The research also aims to find the root causes of the trafficking of women and children, and to examine the effectiveness of the strategies in place to eradicate human trafficking globally. In the same vein, the research objective is to investigate how anti-trafficking bodies such as law enforcement agencies and NGOs collaborate to tackle the upsurge in human trafficking.

Keywords: children, Nigeria, trafficking, women

Procedia PDF Downloads 164
242 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous in view of the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additively prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts, or formwork bodies, form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is explored using a digital, cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit.
Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This process was repeated iteratively to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m x 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
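The resonance being tuned by the bottleneck and cavity parameters can be sketched with the textbook Helmholtz resonator formula; the dimensions and end-correction factor below are assumed for illustration and are not the study's optimized values:

```python
# Back-of-the-envelope sketch (assumed dimensions, not the study's values):
# a Helmholtz resonator with neck cross-section A, effective neck length
# L_eff and cavity volume V resonates at f0 = (c / 2*pi) * sqrt(A / (V * L_eff)).
# This is the quantity the geometric optimization shifts by varying the
# bottleneck and cavity dimensions.
import math

C_AIR = 343.0  # speed of sound in air, m/s (approx. 20 degC)

def helmholtz_f0(neck_radius_m, neck_length_m, cavity_volume_m3):
    area = math.pi * neck_radius_m ** 2
    # End correction: an unflanged neck behaves acoustically longer.
    l_eff = neck_length_m + 1.7 * neck_radius_m
    return C_AIR / (2 * math.pi) * math.sqrt(area / (cavity_volume_m3 * l_eff))

# Assumed example geometry: 10 mm neck radius, 30 mm neck, 1 litre cavity.
f0 = helmholtz_f0(0.010, 0.030, 1.0e-3)
print(f"resonance frequency: {f0:.0f} Hz")
```

Enlarging the cavity or lengthening the neck lowers the resonance, which is why an optimizer varying these dimensions can place absorption peaks where the room acoustics require them.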

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 136
241 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in the geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of the radioactive-bearing minerals at the thin-section scale remains a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is of interest for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed using Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm.
Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within those areas. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its daughter products in geomaterials by coupling it with scanning electron microscope characterization. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
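For concreteness, the quoted resolution relates the full width at half maximum to the peak energy; assuming a Gaussian peak shape (an assumption, since the actual peak shape is not stated), the corresponding absolute widths are:

```python
# Illustrative conversion (assumed Gaussian peak, not the measured data):
# a relative resolution of 17.2% (FWHM) at 4647 keV means FWHM / E0 = 0.172.
# For a Gaussian peak, FWHM = 2 * sqrt(2 * ln 2) * sigma ~= 2.355 * sigma.
import math

E0 = 4647.0          # keV, peak energy from the abstract
resolution = 0.172   # quoted relative FWHM

fwhm_kev = resolution * E0
sigma_kev = fwhm_kev / (2 * math.sqrt(2 * math.log(2)))
print(f"FWHM  = {fwhm_kev:.0f} keV")
print(f"sigma = {sigma_kev:.0f} keV")
```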

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 126
240 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity of, and controversy over, the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficients (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term, time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. NMF-based feature vectors are proposed to be used for genre classification together with these conventional basic long-term feature vectors. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. For NMF-BFV, however, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, the extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres. An NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features (statistical and modulation spectrum features), the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, whereas the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, while NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
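The training-stage factorization described above can be sketched with the standard multiplicative update rules for NMF; the matrix, rank, and iteration count below are illustrative and are not taken from the paper.

```python
# Minimal pure-Python NMF sketch (multiplicative updates): factor a
# non-negative matrix V (m x n) into W (m x r) and H (r x n). In the paper,
# basis vectors are trained per genre and test-time weights become features;
# here the data are a tiny illustrative matrix.
import random

def nmf(V, r, iters=200, eps=1e-9):
    m, n = len(V), len(V[0])
    rnd = random.Random(0)
    # Positive random initialization keeps all updates non-negative.
    W = [[rnd.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rnd.random() + 0.1 for _ in range(n)] for _ in range(r)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        # H <- H * (W^T V) / (W^T W H)
        WtV, WtWH = matmul(T(W), V), matmul(T(W), WH)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps)
              for j in range(n)] for i in range(r)]
        WH = matmul(W, H)
        # W <- W * (V H^T) / (W H H^T)
        VHt, WHHt = matmul(V, T(H)), matmul(WH, T(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps)
              for j in range(r)] for i in range(m)]
    return W, H

V = [[1.0, 0.5, 0.0], [0.5, 1.0, 0.5], [0.0, 0.5, 1.0]]
W, H = nmf(V, r=2)
WH = [[sum(W[i][k] * H[k][j] for k in range(2)) for j in range(3)]
      for i in range(3)]
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(3) for j in range(3))
```

In the test stage, fixing the trained basis W and solving only for the weights H of a new song yields the low-dimensional NMF feature vector described above.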

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 272
239 Performance Assessment of Ventilation Systems for Operating Theatres

Authors: Clemens Bulitta, Sasan Sadrizadeh, Sebastian Buhl

Abstract:

Introduction: Ventilation technology in operating theatres (OTs) is internationally regulated by different standards, which define basic specifications for technical equipment and often also the necessary operating and performance parameters. This confronts the operators of healthcare facilities with the question of finding the best ventilation and air conditioning system for the OT in order to achieve a large and robust surgical work zone with appropriate air quality and climate for patient safety and occupational health. Additionally, energy consumption and the potential need for clothing that limits transmission of bacteria must be considered, as well as the total life cycle cost. However, the methodology for evaluating ventilation systems in these respects is still a topic of discussion. To date, there are neither uniform standardized specifications nor common validation criteria. Thus, this study aimed to review data in the literature and add our own research results to compare and assess the performance of different ventilation systems regarding infection-preventive effects, energy efficiency, and staff comfort. Methods: We conducted a comprehensive literature review on OT ventilation-related topics to understand the strengths and limitations of different ventilation systems. Furthermore, data from experimental assessments of OT ventilation systems at the University of Amberg-Weiden in Germany were included to comparatively assess the performance of Laminar Airflow (LAF), Turbulent Mixing Airflow (TMA), and Temperature-controlled Airflow (TcAF) with regard to patient and occupational safety as well as staff comfort, including indoor climate. CFD simulations from the Royal Institute of Technology in Sweden (KTH) were also studied to visualize the differences between these three kinds of ventilation systems in terms of the size of the surgical work zone, resilience to obstacles in the airflow, and energy use.
Results: A variety of ventilation concepts are in use in the OT today. Each has its advantages and disadvantages, and thus one may be better suited than another depending on the built environment and clinical workflow. Moreover, the proper functioning of OT ventilation is also affected by multiple external and internal interfering factors. Based on the available data, TcAF and LAF seem to provide the greatest effects regarding infection control and minimizing airborne risks for surgical site infections without the need for very tight surgical clothing systems. Resilience to obstacles, staff comfort, and energy efficiency seem to be most favourable with TcAF. Conclusion: Based on literature data in current publications and our studies at the Technical University of Applied Sciences Amberg-Weiden and the Royal Institute of Technology, LAF and TcAF are more suitable for minimizing the risk of surgical site infections, leading to improved clinical outcomes. Nevertheless, regarding the best management of thermal loads, atmosphere, energy efficiency, and occupational safety, the overall results and data suggest that TcAF systems could provide the economically most efficient and clinically most effective solution under routine clinical conditions.

Keywords: ventilation systems, infection control, energy efficiency, operating theatre, airborne infection risks

Procedia PDF Downloads 79
238 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e. sensors, CPUs, modular / auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, the addition of statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
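As a rough illustration of the general idea (not the authors' actual model), a non-deterministic tradespace sweep can be sketched as enumerating candidate sensor-system designs, sampling noisy attribute performance, and ranking designs by expected multi-attribute utility. Every design variable, cost figure, and utility weight below is invented for the sketch.

```python
# Hypothetical non-deterministic tradespace exploration: enumerate all
# sensor/CPU combinations, Monte Carlo sample an uncertain accuracy
# attribute, and rank designs by mean multi-attribute utility.
import itertools
import random
import statistics

# (nominal accuracy, unit cost) -- illustrative values only.
SENSORS = {"lidar": (0.9, 120), "radar": (0.7, 60), "camera": (0.6, 20)}
CPUS = {"fast": (0.9, 80), "slow": (0.6, 30)}
WEIGHTS = {"accuracy": 0.7, "cost": 0.3}

def utility(accuracy, cost, budget=250):
    # Map each attribute to [0, 1] and take the weighted sum.
    u_cost = max(0.0, 1.0 - cost / budget)
    return WEIGHTS["accuracy"] * accuracy + WEIGHTS["cost"] * u_cost

def expected_utility(design, n=500, seed=0):
    (s_acc, s_cost), (c_acc, c_cost) = SENSORS[design[0]], CPUS[design[1]]
    rnd = random.Random(seed)
    samples = []
    for _ in range(n):
        # Non-deterministic step: perturb accuracy to model performance risk.
        acc = min(1.0, max(0.0, (s_acc * c_acc) ** 0.5 + rnd.gauss(0, 0.05)))
        samples.append(utility(acc, s_cost + c_cost))
    return statistics.mean(samples)

tradespace = sorted(
    ((expected_utility(d), d) for d in itertools.product(SENSORS, CPUS)),
    reverse=True)
best_u, best_design = tradespace[0]
```

A real MATE study would replace the toy attributes with validated performance models and elicited stakeholder utility curves; the structure of "enumerate, sample, rank" stays the same.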

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 159
237 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review

Authors: Hanan Algarni

Abstract:

Background: Impairment of walking and balance skills has a negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality (VR) technology is a newly emerging approach that offers patients feedback and open, random skills practice while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence on the effects of VR treadmill training on gait speed and balance primarily, and on functional independence and community participation secondarily, in stroke patients. Methods: A systematic review was conducted; the search strategy included the electronic databases MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, and Web of Science, as well as unpublished literature. Inclusion criteria were as follows. Participants: adults >18 years, with stroke, ambulatory, without severe visual or cognitive impairments. Intervention: VR treadmill training alone or with physiotherapy. Comparator: any other intervention. Outcomes: gait speed, balance, function, community participation. Characteristics of the included studies were extracted for analysis. Risk of bias assessment was performed using Cochrane's risk of bias (ROB) tool. A narrative synthesis of findings was undertaken, and a summary of findings for each outcome was reported using GRADEpro. Results: Four studies were included, involving 84 stroke participants with chronic hemiparesis. Intervention intensity ranged from 6 to 12 sessions of 20 minutes to 1 hour per session. Three studies investigated the effects on gait speed and balance, two studies investigated functional outcomes, and one study assessed community participation. The ROB assessment showed 50% unclear risk of selection bias and 25% unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post-training and follow-up.
Outcome measures, training intensity, and durations also varied across the studies. The grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation; however, it is important to note that grading was done on a small number of studies for each outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p<0.05) of VR treadmill training compared to other interventions on gait speed, dynamic balance skills, function, and participation directly after training. However, the effects were not sustained at follow-up (2 weeks to 1 month) in two studies, and the other studies did not perform follow-up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long-term effects of VR treadmill training on functional independence and community participation after stroke, in order to draw conclusions and produce stronger, more robust evidence.

Keywords: virtual reality, treadmill, stroke, gait rehabilitation

Procedia PDF Downloads 260
236 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction in energy consumption and greenhouse gas emissions. The paper presents a study aimed at providing a design methodology able to identify the best configuration of the building/plant system from a technical, economic, and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions and the subsequent selection of measures aimed at improving energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which makes it possible to simulate the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow effective building-HVAC system combinations to be identified.
The second step consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which makes it possible to compare different system configurations from the energy, environmental, and financial points of view, with an analysis of investment, operation, and maintenance costs, thus allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment of innovative, low-environmental-impact energy systems. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model RETScreen for different design options. For example, the analysis performed on the building taken as a case study found that the most suitable plant solution, taking into account technical, economic, and environmental aspects, is one based on a CCHP system (Combined Cooling, Heating, and Power) using an internal combustion engine.
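The kind of financial screening that RETScreen automates can be illustrated with a simple net-present-value comparison of plant options; the capital costs, annual savings, lifetime, and discount rate below are invented for illustration and are not the study's figures.

```python
# Sketch of a financial comparison of plant options via net present value
# (NPV) of annual energy-cost savings. All monetary inputs are hypothetical.
def npv(capital_cost, annual_saving, years=20, discount_rate=0.05):
    """NPV of a constant annual saving stream minus the upfront investment."""
    pv = sum(annual_saving / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - capital_cost

# Two illustrative options; the CCHP numbers are invented, chosen only to
# show how a higher-capital option can still win on lifecycle economics.
options = {
    "condensing boiler + chiller": npv(150_000, 12_000),
    "CCHP (internal combustion engine)": npv(400_000, 45_000),
}
best = max(options, key=options.get)
```

A RETScreen-style study layers fuel escalation, incentives, and emission costs on top of this basic discounted-cash-flow core.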

Keywords: energy, system, building, cooling, electrical

Procedia PDF Downloads 550
235 Investigation of Attitude of Production Workers towards Job Rotation in Automotive Industry against the Background of Demographic Change

Authors: Franciska Weise, Ralph Bruder

Abstract:

Due to demographic change in Germany, with its declining birth rate and increasing population age, the share of older people in society is rising. This development is also reflected in the workforce of German companies. Companies should therefore focus on improving ergonomics, especially in the area of age-related work design. The literature shows that studies on age-related work design have been carried out in the past, some of whose results have been put into practice. However, there is still a need for further research. One of the most important methods for taking into account the needs of an aging population is job rotation. This method aims at preventing or reducing health risks and inappropriate physical strain. It is conceived as a systematic change of workplaces within a group. The existing literature does not cover any methods for investigating the attitudes of employees towards job rotation. However, in order to evaluate job rotation, it is essential to know people's views on rotation. In addition to the investigation of attitudes, the design of rotation plays a crucial role. The sequence of activities and the rotation frequency influence both the worker and the work result. The evaluation of preliminary talks on the shop floor showed that team speakers and foremen share a common understanding of job rotation. In practice, different varieties of job rotation exist. One important aspect is the frequency of rotation: workers may rotate never, more than once per shift, at every break, or even more often than every break, depending on the opportunity to rotate whenever they want to. From the preliminary talks, some challenges can be derived. For example, rotation across the whole team is not possible if a team member still needs to be trained for a new task.
In order to determine the relation between the design of and the attitude towards job rotation, a questionnaire survey was carried out in vehicle manufacturing. The questionnaire is used to determine the different varieties of job rotation that exist in production, as well as the attitudes of workers towards those different frequencies of job rotation. In addition, younger and older employees are compared with regard to their rotation frequency and their attitudes towards rotation, using three age groups. Three questions are under examination. The first is whether older employees rotate less frequently than younger employees. The second is whether the frequency of job rotation and the attitude towards that frequency are interconnected. The third concerns how the attitudes of the different age groups towards the frequency of rotation differ. So far, 144 employees, all working in production, have taken part in the survey: 36.8% were younger than thirty, 37.5% were between thirty and forty-four, and 25.7% were forty-five or older. The data show no difference between the three age groups in the frequency of job rotation (N=139, median=4, Chi²=.859, df=2, p=.651). Most employees rotate between six and seven workplaces per day. In addition, there is a statistically significant correlation between the frequency of job rotation and the attitude towards that frequency (Spearman's rho=.223, two-sided p=.008); fewer than four workplaces per day are not enough for the employees. The third question, concerning the differences between older and younger people who rotate in different ways and with different attitudes towards job rotation, cannot yet be answered. So far, the data show that younger people would like to rotate very often; for older employees, no correlation with acceptable significance has been found.
The results of the survey will be used to improve the current practice of job rotation. In addition, the discussions during the survey are expected to help sensitize the employees to rotation issues and to contribute to optimizing rotation by means of qualification and an improved design of job rotation. Together with the employees, and based on the results of the survey, standards must be developed that show how to rotate in an ergonomic way while taking the attitude towards job rotation into account.
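The rank correlation reported above can be reproduced in principle with a small Spearman's rho computation using tie-averaged ranks; the survey responses below are invented for illustration and are not the actual questionnaire data.

```python
# Spearman rank correlation with tie-averaged ranks, pure Python.
# Hypothetical paired observations: rotation frequency vs. attitude score.
def ranks(xs):
    """1-based ranks; tied values receive the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors (handles ties correctly)."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rotation_freq = [2, 4, 6, 6, 7, 3, 5, 7]   # workplaces per day (invented)
attitude      = [2, 3, 4, 5, 5, 2, 4, 4]   # 1 = negative ... 5 = positive
rho = spearman(rotation_freq, attitude)
```

With real survey data, a significance test (as in the reported two-sided p=.008) would accompany the coefficient.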

Keywords: job rotation, age-related work design, questionnaire, automotive industry

Procedia PDF Downloads 282
234 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies where Form-Based Codes have been adopted, and discusses the particular implementation methods in order to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case-study comparison of the planning tools used, the code process adopted, and the various control regulations implemented in thirty-two different cities is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code / smart code, and required form-based standards or design guidelines. The case studies describe the positive and negative results of form-based zoning where it has been implemented. From the different case studies on the method of the FBC, it is understood that the scale at which a Form-Based Code is formulated varies from parts of the city to the whole city. In most cases the regulating plan is prepared with the transect as the organizing principle. The implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts such as Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based approaches. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan, using the transect as the organizing principle, for the entire area of the city in general when formulating the Form-Based Codes, and for the selected special districts in the study area in particular, on a street basis.
Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized, and stakeholder-owned, while systematic can be thought of as more linear, generalizable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often poorly understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, and the interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as an organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code that is integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in “sensitive” areas of the community. With this approach and method, the creation of the new Context Specific Planning Model towards achieving sustainability is explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 314
233 Hygrothermal Interactions and Energy Consumption in Cold Climate Hospitals: Integrating Numerical Analysis and Case Studies to Investigate and Analyze the Impact of Air Leakage and Vapor Retarding

Authors: Amir E. Amirzadeh, Richard K. Strand

Abstract:

Moisture-induced problems are a significant concern for building owners, architects, construction managers, and building engineers, as they can have substantial impacts on building enclosures' durability and performance. Computational analyses, such as hygrothermal and thermal analysis, can provide valuable information and demonstrate the expected relative performance of building enclosure systems but are not grounded in absolute certainty. This paper evaluates the hygrothermal performance of common enclosure systems in hospitals in cold climates. The study aims to investigate the impact of exterior wall systems on hospitals, focusing on factors such as durability, construction deficiencies, and energy performance. The study primarily examines the impact of air leakage and vapor retarding layers relative to energy consumption. While these factors have been studied in residential and commercial buildings, there is a lack of information on their impact on hospitals in a holistic context. The study integrates various research studies and professional experience in hospital building design to achieve its objective. The methodology involves surveying and observing exterior wall assemblies, reviewing common exterior wall assemblies and details used in hospital construction, performing simulations and numerical analyses of various variables, validating the model and mechanism using available data from industry and academia, visualizing the outcomes of the analysis, and developing a mechanism to demonstrate the relative performance of exterior wall systems for hospitals under specific conditions. The data sources include case studies from real-world projects and peer-reviewed articles, industry standards, and practices. This research intends to integrate and analyze the in-situ and as-designed performance and durability of building enclosure assemblies with numerical analysis. 
The study's primary objective is to provide a clear and precise roadmap to better visualize and comprehend the correlation between the durability and performance of common exterior wall systems used in the construction of hospitals and the energy consumption of these buildings under certain static and dynamic conditions. As the construction of new hospitals and renovation of existing ones have grown over the last few years, it is crucial to understand the effect of poor detailing or construction deficiencies on the performance and durability of building enclosure systems in healthcare buildings. This study aims to assist stakeholders involved in hospital design, construction, and maintenance in selecting durable and high-performing wall systems. It highlights the importance of early design evaluation, regular quality control during the construction of hospitals, and understanding the potential impacts of improper and inconsistent maintenance and operation practices on occupants, owners, building enclosure systems, and Heating, Ventilation, and Air Conditioning (HVAC) systems, even if they are designed to meet the project requirements.

Keywords: hygrothermal analysis, building enclosure, hospitals, energy efficiency, optimization and visualization, uncertainty and decision making

Procedia PDF Downloads 43
232 Dragonflies (Odonata) Reflect Climate Warming Driven Changes in High Mountain Invertebrates Populations

Authors: Nikola Góral, Piotr Mikołajczuk, Paweł Buczyński

Abstract:

Much scientific research in the last 20 years has focused on the influence of global warming on the distribution and phenology of living organisms. Three potential responses to climate change are predicted: individual species may become extinct, adapt to new conditions within their existing range, or change their range by migrating to places where climatic conditions are more favourable. The latter means not only migration to areas at other latitudes but also to different altitudes. In the case of dragonflies (Odonata), monitoring in Western Europe has shown that in response to global warming, dragonflies tend to shift to a more northern range. The strongest response to global warming is observed in arctic and alpine species, as well as in species capable of migrating over long distances. The aim of the research was to assess whether the fauna of aquatic insects in high-mountain habitats has changed as a result of climate change and, if so, how large and of what type these changes are. Dragonflies were chosen as a model organism because of their fast reaction to changes in the environment: they have high migration abilities and a short life cycle. The state of the populations of boreal-mountain species and the extent to which lowland species have entered high altitudes were assessed. The research was carried out at 20 sites in the Western Sudetes, southern Poland, located at altitudes between 850 and 1250 m. The selected sites were representative of many types of valuable alpine habitats (subalpine raised bog, transitional spring bog, habitats associated with rivers and mountain streams). Several sites of anthropogenic origin were also selected. This selection allowed a broad characterization of the fauna of the Karkonosze and a comparison of whether the studied processes proceed differently depending on whether the habitat is primary or secondary.
Both imagines and larvae were examined (by taking hydrobiological samples with a kick-net), and exuviae were also collected. Individual dragonfly species were characterized in terms of their reproductive, territorial, and foraging behaviour. During each visit, the basic physicochemical parameters of the water were measured. The population of the high-mountain dragonfly Somatochlora alpestris turned out to be in good condition. This species was noted at several sites, some of which were situated relatively low (995 m AMSL), which suggests that thermal conditions at the lower altitudes may still be optimal for this species. Somatochlora arctica, Aeshna subarctica, and Leucorrhinia albifrons, which are protected under Polish law, were observed, as well as the strongly bog-associated species Leucorrhinia dubia and Aeshna juncea. However, these were more frequent and more numerous in habitats of anthropogenic origin, which may suggest minor changes in the habitat preferences of dragonflies. The subject requires further research and observations over a longer time scale.

Keywords: alpine species, bioindication, global warming, habitat preferences, population dynamics

Procedia PDF Downloads 118
231 From Biowaste to Biobased Products: Life Cycle Assessment of VALUEWASTE Solution

Authors: Andrés Lara Guillén, José M. Soriano Disla, Gemma Castejón Martínez, David Fernández-Gutiérrez

Abstract:

The worldwide population is increasing exponentially, causing a rising demand for food, energy, and non-renewable resources. These demands must be addressed from a circular economy point of view. Under this approach, obtaining strategic products from biowaste is crucial if society is to maintain its current lifestyle while reducing the environmental and social issues linked to the linear economy. This is the main objective of the VALUEWASTE project. VALUEWASTE is about valorizing urban biowaste into proteins for food and feed and into biofertilizers, closing the loop of this waste stream. In order to achieve this objective, the project validates three value chains, all of which begin with the anaerobic digestion of the biowaste. From the anaerobic digestion, three by-products are obtained: i) methane, which is used by microorganisms that are transformed into microbial proteins; ii) digestate, which is used by the black soldier fly, producing insect proteins; and iii) a nutrient-rich effluent, which is transformed into biofertilizers. VALUEWASTE is an innovative solution that combines different technologies to valorize the biowaste entirely. However, it must also be demonstrated that the solution is greener than other traditional technologies (baseline systems). On the one hand, the proteins from microorganisms and insects will be compared with reference protein production systems (gluten, whey, and soybean). On the other hand, the biofertilizers will be compared with the production of mineral fertilizers (ammonium sulphate and synthetic struvite). Therefore, the aim of this study is to demonstrate that biowaste valorization can reduce the environmental impacts linked to both traditional protein manufacturing processes and mineral fertilizers, not only at pilot scale but also at industrial scale. In the present study, both the baseline systems and the VALUEWASTE solution are evaluated through Environmental Life Cycle Assessment (E-LCA).
The E-LCA is based on the ISO 14040 and 14044 standards. The Environmental Footprint methodology was used in this study to evaluate the environmental impacts. The results for the baseline cases show that food proteins from whey have the highest environmental impact on ecosystems compared to the other protein sources: 7.5 and 15.9 times higher than soybean and gluten, respectively. Comparing the feed proteins, soybean has an environmental impact on human health 195.1 times higher than gluten. In the case of the fertilizers, synthetic struvite has higher impacts than ammonium sulfate: 15.3-fold (ecosystems) and 11.8-fold (human health), respectively. The results shown in the present study will be used as a reference to demonstrate the better environmental performance of the bio-based products obtained through the VALUEWASTE solution. The E-LCA performed in the VALUEWASTE project also has direct implications for investment and policy. On the one hand, better environmental performance, backed by the E-LCA, will help remove the barriers linked to these kinds of technologies and boost investment. On the other hand, it can seed the design of new policies fostering these types of solutions, helping to achieve two key targets of the European Community: being self-sustainable and carbon neutral.
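The fold comparisons above are simple ratios of characterized impact scores. The sketch below reconstructs them from hypothetical absolute scores (gluten normalized to 1.0), so only the ratios mirror those reported.

```python
# Fold-difference comparison of characterized ecosystem-impact scores.
# Absolute values are invented; gluten is normalized to 1.0 so that the
# whey/gluten and whey/soybean ratios reproduce the reported 15.9x and 7.5x.
scores_ecosystems = {"gluten": 1.0, "whey": 15.9, "soybean": 15.9 / 7.5}

def fold(numerator, denominator, scores):
    """Ratio of two characterized impact scores (dimensionless fold)."""
    return scores[numerator] / scores[denominator]

whey_vs_gluten = fold("whey", "gluten", scores_ecosystems)
whey_vs_soybean = fold("whey", "soybean", scores_ecosystems)
```

In a real E-LCA the scores would come from the characterization step of the Environmental Footprint method, per functional unit, rather than being normalized by hand.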

Keywords: anaerobic digestion, biofertilizers, circular economy, nutrients recovery

Procedia PDF Downloads 70
230 Fabrication of Electrospun Green Fluorescent Protein Nano-Fibers for Biomedical Applications

Authors: Yakup Ulusu, Faruk Ozel, Numan Eczacioglu, Abdurrahman Ozen, Sabriye Acikgoz

Abstract:

GFP, discovered in the mid-1970s, has been widely used by scientists as a marker since its gene was cloned. In biotechnology and in cell and molecular biology, the GFP gene is frequently used as a reporter of expression, and in modified forms it has been used to make biosensors. Many animals have been created that express GFP, as evidence that a gene can be expressed throughout a given organism. Proteins labeled with GFP can be localized within the cell; thus, cell connections can be monitored, gene expression can be reported, protein-protein interactions can be observed, and signaling events can be detected. Additionally, monitoring GFP is noninvasive: it can be detected under UV light simply because it fluoresces. Moreover, GFP is a relatively small and inert molecule that does not seem to perturb the biological processes of interest. The synthesis of GFP involves several steps: constructing the plasmid, transformation into E. coli, and production and purification of the protein. The GFP-carrying plasmid vector pBAD-GFPuv was digested using two different restriction endonucleases (NheI and EcoRI), and the GFP DNA fragment was gel-purified before cloning. The GFP-encoding fragment was ligated into the pET28a plasmid using the NheI and EcoRI restriction sites. The final plasmid was named pETGFP, and DNA sequencing of this plasmid indicated that the hexahistidine-tagged GFP was correctly inserted. Histidine-tagged GFP was expressed in an Escherichia coli BL21(DE3)pLysE strain. The strain was transformed with the pETGFP plasmid and grown on Luria-Bertani (LB) plates under kanamycin and chloramphenicol selection. E. coli cells were grown to an optical density (OD600) of 0.8, induced by the addition of isopropyl-β-D-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and then grown for an additional 4 h. The amino-terminal hexahistidine tag facilitated purification of the GFP using a His·Bind affinity chromatography resin (Novagen).
Purity of the GFP was analyzed by 12% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE), and the protein concentration was determined by UV absorption at 280 nm (Varian Cary 50 Scan UV/VIS spectrophotometer). GFP-polymer composite nanofibers were produced using the GFP solution (10 mg/mL) and the polymer precursor polyvinylpyrrolidone (PVP, Mw = 1,300,000) as starting material and template, respectively. To fabricate nanofibers of different diameters, sol-gel solutions comprising 0.40, 0.60 or 0.80 g PVP (depending on the desired fiber diameter) and 100 mg GFP in 10 mL of water:ethanol (3:2) were prepared; each solution was then deposited on a collecting plate via electrospinning at 10 kV with a feed rate of 0.25 mL h⁻¹ using a Spellman electrospinning system. The results show that GFP-based nanofibers can be used in many biomedical applications, such as bio-imaging, biomechanics, biomaterials and tissue engineering.
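Determining protein concentration from A280 follows the Beer-Lambert law, c = A / (ε·l). A minimal sketch, assuming a 1 cm path length and, for illustration only, a molar extinction coefficient of ~22,000 M⁻¹ cm⁻¹ and a molecular weight of ~28,000 Da for His-tagged GFPuv (neither value is stated in the abstract):

```python
def protein_concentration_mg_ml(a280: float,
                                epsilon_m_cm: float,
                                mw_da: float,
                                path_cm: float = 1.0) -> float:
    """Beer-Lambert: c [mol/L] = A / (epsilon * l), then converted to mg/mL."""
    molar = a280 / (epsilon_m_cm * path_cm)  # mol/L
    return molar * mw_da                     # g/L, numerically equal to mg/mL

# Assumed illustrative values: A280 = 0.79, epsilon = 22,000 M^-1 cm^-1,
# MW = 28,000 Da.
c = protein_concentration_mg_ml(a280=0.79, epsilon_m_cm=22000, mw_da=28000)
print(f"{c:.2f} mg/mL")
```

In practice, ε for a specific construct is usually computed from its amino-acid sequence rather than assumed.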

Keywords: biomaterial, GFP, nano-fibers, protein expression

Procedia PDF Downloads 287
229 Developing Granular Sludge and Maintaining High Nitrite Accumulation for Anammox to Treat Municipal Wastewater with High Efficiency in a Flexible Two-Stage Process

Authors: Zhihao Peng, Qiong Zhang, Xiyao Li, Yongzhen Peng

Abstract:

Nowadays, the conventional nitrogen removal process (nitrification and denitrification) is adopted in most wastewater treatment plants, but it brings several problems: high aeration energy consumption, the need to dose external carbon sources, and high sludge treatment costs. The emergence of anammox has brought about a great revolution in nitrogen removal technology: only ammonia and nitrite are required to remove nitrogen autotrophically, with no demand for aeration or sludge treatment. However, anammox applications face many challenges: difficulty of biomass retention, insufficiency of the nitrite substrate, damage from complex organic matter, etc. Much research effort has been put into overcoming these challenges, and it has been rewarded. It is nonetheless imperative to establish an innovative process that can settle the above problems simultaneously, since any one of these obstacles can cause the collapse of an anammox system. Therefore, in this study, a two-stage process was established in which a sequencing batch reactor (SBR) and an upflow anaerobic sludge blanket (UASB) were used as the pre-stage and post-stage, respectively. The domestic wastewater first entered the SBR, which was operated in an anaerobic/aerobic/anoxic (An/O/A) mode; the effluent drained at the end of the aerobic phase was mixed with raw domestic wastewater, and the mixture then entered the UASB. Organic and nitrogen removal performance was evaluated over long-term operation. Throughout the operation, most COD was removed in the pre-stage (COD removal efficiency > 64.1%), including some macromolecular organic matter, such as tryptophan, tyrosinase and fulvic acid, which weakened the damage of organic matter to anammox. The An/O/A operating mode of the SBR was also beneficial to the achievement and maintenance of partial nitrification (PN), so a sufficient and steady nitrite supply was another favorable condition for anammox enhancement.
Besides, the flexible mixing ratio helped to attain a substrate ratio appropriate for anammox (1.32-1.46), further enhancing the process. In the post-stage, a gas-recirculation strategy was adopted in the UASB to achieve granulation through selection pressure. As expected, granules formed rapidly within 38 days, with the mean size increasing from 153.3 to 354.3 μm. Based on activity and gene measurements, the anammox metabolic rate and abundance rose markedly, by 2.35 mg N/(g VSS·h) and 5.3 × 10⁹, respectively. The anammox bacteria were mainly distributed in the large granules (> 1000 μm), while the biomass in the flocs (< 200 μm) and microgranules (200-500 μm) barely displayed anammox activity. The enhanced anammox promoted advanced autotrophic nitrogen removal, which increased from 71.9% to 93.4%, even when the temperature was as low as 12.9 °C. Therefore, it is feasible to enhance anammox under the multiple favorable conditions created here; this strategy extends the application of anammox to full-scale mainstream treatment and improves the understanding of anammox culturing conditions.
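The nitrogen removal efficiencies quoted above are straightforward influent/effluent balances. A minimal sketch, with hypothetical total-nitrogen concentrations (mg N/L) chosen only to reproduce the reported 71.9% and 93.4% figures, since the actual influent and effluent values are not given in the abstract:

```python
def tn_removal_efficiency(tn_in: float, tn_out: float) -> float:
    """Total-nitrogen removal efficiency as a percentage."""
    if tn_in <= 0:
        raise ValueError("influent TN must be positive")
    return 100.0 * (tn_in - tn_out) / tn_in

# Hypothetical concentrations (mg N/L) reproducing the reported improvement.
print(round(tn_removal_efficiency(50.0, 14.05), 1))  # 71.9 (before enhancement)
print(round(tn_removal_efficiency(50.0, 3.3), 1))    # 93.4 (after enhancement)
```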

Keywords: anammox, granules, nitrite accumulation, nitrogen removal efficiency

Procedia PDF Downloads 18