Search results for: attraction standards
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2103


93 The Recommended Summary Plan for Emergency Care and Treatment (ReSPECT) Process: An Audit of Its Utilisation on a UK Tertiary Specialist Intensive Care Unit

Authors: Gokulan Vethanayakam, Daniel Aston

Abstract:

Introduction: The ReSPECT process supports healthcare professionals when making patient-centered decisions in the event of an emergency. It has been widely adopted by the NHS in England and allows patients to express thoughts and wishes about treatments and outcomes that they consider acceptable. It includes (but is not limited to) cardiopulmonary resuscitation decisions. ReSPECT conversations should ideally occur prior to ICU admission and should be documented in the eight sections of the nationally-standardised ReSPECT form. This audit evaluated the use of ReSPECT on a busy cardiothoracic ICU in an NHS Trust where established policies advocating its use exist. Methods: This audit was a retrospective review of ReSPECT forms for a sample of high-risk patients admitted to ICU at the Royal Papworth Hospital between January 2021 and March 2022. Patients all received one of the following interventions: Veno-Venous Extra-Corporeal Membrane Oxygenation (VV-ECMO) for severe respiratory failure (retrieved via the national ECMO service); cardiac or pulmonary transplantation-related surgical procedures (including organ transplants and Ventricular Assist Device (VAD) implantation); or elective non-transplant cardiac surgery. The quality of documentation on ReSPECT forms was evaluated using national standards and a graded ranking tool devised by the authors which was used to assess narrative aspects of the forms. Quality was ranked as A (excellent) to D (poor). Results: Of 230 patients (74 VV-ECMO, 104 transplant, 52 elective non-transplant surgery), 43 (18.7%) had a ReSPECT form and only one (0.43%) patient had a ReSPECT form completed prior to ICU admission. Of the 43 forms completed, 38 (88.4%) were completed due to the commencement of End of Life (EoL) care. No non-transplant surgical patients included in the audit had a ReSPECT form. There was documentation of balance of care (section 4a), CPR status (section 4c), capacity assessment (section 5), and patient involvement in completing the form (section 6a) on all 43 forms. Of the 34 patients assessed as lacking capacity to make decisions, only 22 (64.7%) had reasons documented. Other sections were variably completed; 29 (67.4%) forms had relevant background information included to a good standard (section 2a). Clinical guidance for the patient (section 4b) was given in 25 (58.1%), of which 11 stated the rationale that underpinned it. Seven forms (16.3%) contained information in an inappropriate section. In a comparison of ReSPECT forms completed ahead of an EoL trigger with those completed when EoL care began, there was a higher number of entries in section 3 (considering patient’s values/fears) that were assessed at grades A-B in the former group (p = 0.014), suggesting higher quality. Similarly, forms from the transplant group contained higher quality information in section 3 than those from the VV-ECMO group (p = 0.0005). Conclusions: Utilisation of the ReSPECT process in high-risk patients is yet to be well-adopted in this trust. Teams who meet patients before hospital admission for transplant or high-risk surgery should be encouraged to engage with the ReSPECT process at this point in the patient's journey. VV-ECMO retrieval teams should consider ReSPECT conversations with patients’ relatives at the time of retrieval.
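A note on the reported group comparisons (e.g., the proportion of section 3 entries graded A-B in forms completed before versus at the End of Life trigger): these are comparisons of proportions in small samples, for which a Fisher's exact test is a natural choice. The sketch below is purely illustrative; the counts are invented and the abstract does not state which test the authors used.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = form completed before the EoL trigger vs at EoL onset,
# columns = section 3 graded A-B vs graded C-D (counts invented for illustration;
# row totals of 5 and 38 mirror the 43 forms reported in the abstract).
table = [[4, 1],    # before EoL trigger: 4 of 5 forms graded A-B
         [9, 29]]   # at EoL onset:       9 of 38 forms graded A-B
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.4f}")
```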

Keywords: audit, critical care, end of life, ICU, ReSPECT, resuscitation

Procedia PDF Downloads 50
92 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic

Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink

Abstract:

Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: On the one hand, the selected vehicle models should allow the calculation of a bridge’s vibrations to be as realistic as possible. On the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure’s vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow for applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which is intended to account for the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations. However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured bridge accelerations, while in other cases, they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with a ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure considering the vehicle-bridge interaction when using the MLM are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two recently published alternative formulations derived from analytical approaches. For a bridge catalogue of 65 existing bridges in Austria in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow an assessment of the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results can reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than by using the normative specifications.
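The moving load model described above can be written down in a few lines: the bridge is reduced to its first bending modes and each axle contributes a point force travelling at constant speed, so the modal equations can be integrated directly. The sketch below is a minimal illustration with invented bridge and train parameters (span, stiffness, axle loads and speed are placeholders, not the Austrian bridge catalogue or the trains used in the study); the additional damping would simply enter as an increase of the damping ratio.

```python
import numpy as np

# Illustrative bridge and train parameters (placeholders, not values from the paper)
L = 30.0          # span [m]
EI = 9.0e10       # bending stiffness [N m^2]
mu = 15.0e3       # mass per unit length [kg/m]
zeta = 0.015      # damping ratio; an "additional damping" value would be added here
v = 70.0          # train speed [m/s]
axle_loads = np.full(16, 170e3)       # static axle loads [N]
axle_offsets = np.arange(16) * 3.0    # axle positions behind the first axle [m]
n_modes = 3

omega = np.array([(k * np.pi / L) ** 2 * np.sqrt(EI / mu) for k in range(1, n_modes + 1)])
m_mod = mu * L / 2.0                  # modal mass for sine mode shapes

dt = 1e-4
t_end = (axle_offsets[-1] + L) / v
t = np.arange(0.0, t_end, dt)
q = np.zeros((n_modes, t.size))       # modal displacements
qd = np.zeros(n_modes)                # modal velocities
acc_mid = np.zeros(t.size)            # midspan acceleration history

for i, ti in enumerate(t):
    x = v * ti - axle_offsets                      # position of each axle on the span
    on = (x >= 0.0) & (x <= L)                     # axles currently on the bridge
    qdd = np.zeros(n_modes)
    for k in range(n_modes):
        F = np.sum(axle_loads[on] * np.sin((k + 1) * np.pi * x[on] / L))   # modal force
        qdd[k] = (F - 2 * zeta * omega[k] * m_mod * qd[k]
                  - omega[k] ** 2 * m_mod * q[k, i - 1 if i else 0]) / m_mod
    qd += qdd * dt                                  # semi-implicit Euler time stepping
    q[:, i] = q[:, i - 1 if i else 0] + qd * dt
    acc_mid[i] = np.sum(qdd * np.sin(np.arange(1, n_modes + 1) * np.pi * 0.5))

print(f"peak midspan acceleration: {np.abs(acc_mid).max():.2f} m/s^2")
```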

Keywords: Additional Damping Method, Bridge Dynamics, High-Speed Railway Traffic, Vehicle-Bridge-Interaction

Procedia PDF Downloads 141
91 Lifting Body Concepts for Unmanned Fixed-Wing Transport Aircrafts

Authors: Anand R. Nair, Markus Trenker

Abstract:

Lifting body concepts were conceived as early as 1917 and patented by Roy Scroggs. It was an idea of using the fuselage as a lift producing body with no or small wings. Many of these designs were developed and even flight tested between 1920’s to 1970’s, but it was not pursued further for commercial flight as at lower airspeeds, such a configuration was incapable to produce sufficient lift for the entire aircraft. The concept presented in this contribution is combining the lifting body design along with a fixed wing to maximise the lift produced by the aircraft. Conventional aircraft fuselages are designed to be aerodynamically efficient, which is to minimise the drag; however, these fuselages produce very minimal or negligible lift. For the design of an unmanned fixed wing transport aircraft, many of the restrictions which are present for commercial aircraft in terms of fuselage design can be excluded, such as windows for the passengers/pilots, cabin-environment systems, emergency exits, and pressurization systems. This gives new flexibility to design fuselages which are unconventionally shaped to contribute to the lift of the aircraft. The two lifting body concepts presented in this contribution are targeting different applications: For a fast cargo delivery drone, the fuselage is based on a scaled airfoil shape with a cargo capacity of 500 kg for euro pallets. The aircraft has a span of 14 m and reaches 1500 km at a cruising speed of 90 m/s. The aircraft could also easily be adapted to accommodate pilot and passengers with modifications to the internal structures, but pressurization is not included as the service ceiling envisioned for this type of aircraft is limited to 10,000 ft. The next concept to be investigated is called a multi-purpose drone, which incorporates a different type of lifting body and is a much more versatile aircraft as it will have a VTOL capability. The aircraft will have a wingspan of approximately 6 m and flight speeds of 60 m/s within the same service ceiling as the fast cargo delivery drone. The multi-purpose drone can be easily adapted for various applications such as firefighting, agricultural purposes, surveillance, and even passenger transport. Lifting body designs are not a new concept, but their effectiveness in terms of cargo transportation has not been widely investigated. Due to their enhanced lift producing capability, lifting body designs enable the reduction of the wing area and the overall weight of the aircraft. This will, in turn, reduce the thrust requirement and ultimately the fuel consumption. The various designs proposed in this contribution will be based on the general aviation category of aircrafts and will be focussed on unmanned methods of operation. These unmanned fixed-wing transport drones will feature appropriate cargo loading/unloading concepts which can accommodate large size cargo for efficient time management and ease of operation. The various designs will be compared in performance to their conventional counterpart to understand their benefits/shortcomings in terms of design, performance, complexity, and ease of operation. The majority of the performance analysis will be carried out using industry relevant standards in computational fluid dynamics software packages.
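The claim that a lift-producing fuselage allows a smaller wing can be illustrated with the standard lift equation L = 0.5 rho V^2 S CL: for a fixed cruise condition, any share of the total lift carried by the fuselage reduces the wing area needed. The sketch below uses the 90 m/s cruise speed and the roughly 10,000 ft ceiling from the abstract; the aircraft mass, the wing lift coefficient and the fuselage lift shares are assumptions for illustration only.

```python
# Back-of-envelope wing sizing; only cruise speed and ceiling come from the abstract.
mass = 3500.0            # total aircraft mass [kg] (assumed)
g = 9.81
V = 90.0                 # cruise speed [m/s]
rho = 1.06               # approximate air density near 10,000 ft [kg/m^3]
CL_wing = 0.45           # assumed wing lift coefficient in cruise

q = 0.5 * rho * V ** 2   # dynamic pressure [Pa]
L_total = mass * g       # required lift [N]

for fuselage_share in (0.0, 0.15, 0.30):
    wing_lift = (1.0 - fuselage_share) * L_total
    S_wing = wing_lift / (q * CL_wing)
    print(f"fuselage carries {fuselage_share:.0%} of lift -> wing area approx. {S_wing:.1f} m^2")
```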

Keywords: lifting body concept, computational fluid dynamics, unmanned fixed-wing aircraft, cargo drone

Procedia PDF Downloads 192
90 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand

Authors: Wen Liu, Errol Haarhoff, Lee Beattie

Abstract:

In 2010, New Zealand’s central government re-organised the local government arrangements in Auckland, New Zealand, by amalgamating its previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years. This is expressed in the first ever spatial plan in the region – the Auckland Plan (2012). The Auckland Plan supports implementing a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention that Auckland becomes a globally competitive city and achieves ‘the most liveable city in the world’. Turning that vision into reality is operationalised through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out about intensification: the ‘gap’ between strategic planning and what has been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound influence, yet has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up the strategies to deliver intensification across diversified arenas. Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; delivering a planning system with a high capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery is beholden to the Unitary Plan. However, questions have been asked whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher density development. It explores the process of plan development and the plan making and implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and to consider whether these will facilitate decision making processes that realize the anticipated intensive urban development.

Keywords: urban intensification, sustainable development, plan making, governance and implementation

Procedia PDF Downloads 524
89 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction

Authors: B. Godefroy, D. Beladjine, K. Beddiar

Abstract:

In France, construction BIM actors commonly evoke the gains that BIM offers for the exploitation (operation) phase through the integration of the whole life cycle of a building. The standardization of level 7 of development would achieve this stage of the digital model. The householders (building owners and operators) include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing complex. In a context of financial constraint, BIM for exploitation aims to control costs, make long-term investment choices, renew the portfolio and enable environmental standards to be met. It assumes a knowledge of the existing buildings, marked by their size and complexity. The information sought must be synthetic and structured; it concerns, in general, a real estate complex. We conducted a study with professionals about their concerns and the ways they use BIM, to see how householders could benefit from this development. To obtain results, keeping in mind the recurring questions from project management about the needs of the operators, we tested the following stages: 1) Inculcate a minimal BIM culture in the operator’s multidisciplinary teams, and then within each business line; 2) Learn, through BIM tools, how each trade adapts in operations; 3) Understand the place and creation of a graphic and technical database management system and determine the components of its library according to their needs; 4) Identify the cross-functional interventions of its managers by business line (operations, technical, information system, purchasing and legal aspects); 5) Set an internal protocol and define the impact of BIM in their digital strategy. In addition, continuity of management through the integration of construction models into the operation phase raises the question of interoperability: the control of the production of IFC files in the operator’s proprietary format and of the export and import processes, a solution rivaled by the traditional method of vectorization of paper plans. Companies that digitize housing complexes, and those in facilities management, produce an IFC file directly, according to their needs, without recourse to the construction model; they produce business models for exploitation. They standardize the components and equipment that are useful for coding. We observed the consequences resulting from the use of BIM in the property industry and made the following observations: a) the value of data prevails over the graphics, and 3D is little used; b) the owner must, through his organization, promote the feedback of technical management information during the design phase; c) the operator’s reflection on outsourcing concerns the acquisition of its information system and of these services, weighing the risks and costs related to their internal or external development. This study allows us to highlight: i) the need for an internal organization of operators prior to responding to construction management; ii) the evolution towards automated methods for creating models dedicated to exploitation, for which a specialization would be required; iii) a review of the communication of project management: continuity of management does not articulate around the building model alone, it must take into account the environment of the operator and reflect on its scope of action.
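A minimal sketch of how an operator might extract structured, non-graphic equipment data from an IFC export for an exploitation database, using the open-source ifcopenshell library. The file name and the choice of element classes are assumptions for illustration, not part of the study.

```python
import ifcopenshell
import ifcopenshell.util.element as element_util

# Hypothetical file path; in practice this would be the IFC export of the housing complex.
model = ifcopenshell.open("housing_complex.ifc")

inventory = []
# Pull non-graphic data (identifier, class, name, property sets) for elements relevant to FM.
for entity in model.by_type("IfcFlowTerminal") + model.by_type("IfcDoor"):
    psets = element_util.get_psets(entity)      # dict of property sets -> properties
    inventory.append({
        "global_id": entity.GlobalId,
        "class": entity.is_a(),
        "name": entity.Name,
        "properties": psets,
    })

print(f"{len(inventory)} maintainable elements extracted for the exploitation database")
```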

Keywords: information system, interoperability, models for exploitation, property industry

Procedia PDF Downloads 122
88 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment

Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Boubigot

Abstract:

Stringent fire safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might imply. This is so true that the fire issue is considered right from the design of the aircraft structure. Due to the incorporation of an increasing amount of polymer matrix composites in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of materials used is consequently of prime importance, as is the evaluation of their resistance to fire. Fire testing is mostly done using so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in fire zones (the ability to withstand fire for 5 min). The test consists in exposing a sample of at least 300 x 300 mm² to an 1100°C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming, expensive and gives access to limited information in terms of the fire behavior of the materials (pass or fail test). Consequently, it can barely be used for material development purposes. In this context, the laboratory UMET, in collaboration with industrial partners, has developed a horizontal and a vertical small-scale instrumented fire bench for the characterization of the fire behavior of composites. The benches use smaller samples (no more than 150 x 150 mm²), which makes it possible to cut costs and hence to increase sampling throughput. However, the main added value of our benches is the instrumentation used to collect useful information to understand the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical set-up, a complete characterization of the degradation process can be achieved via mass loss measurements and quantification of the gases released during the tests. These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal set-up has been used in particular to study the performance and durability of protective intumescent coatings on 2 mm thick 2D laminates. The efficiency of this approach has been validated, and the optimized coating thickness has been determined, as well as the performance after aging. Reductions of the performance after aging were attributed to the migration of some of the coating additives. The vertical set-up has made it possible to investigate the degradation process of composites under fire. An isotropic and a unidirectional 4 mm thick laminate have been characterized using the bench and post-fire analyses. The mass loss measurements and the gas phase analyses of both composites do not present significant differences, unlike the temperature profiles in the thickness of the samples. The differences have been attributed to differences in thermal conductivity as well as to delamination, which is much more pronounced for the isotropic composite (observed on the IR images). This has been confirmed by X-ray microtomography. The developed benches have proven to be valuable tools to develop fire-safe composites.

Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography

Procedia PDF Downloads 246
87 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal

Abstract:

The automotive industry is one of the most important industries in the world, concerning not only the economy but also the world culture. In the present financial and economic context, this field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time and at a competitive price in order to achieve customer satisfaction. Two of the techniques most recommended by the specific standards of the automotive industry for quality management in product development are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure of products and processes, quantifying them by risk assessment, ranking the problems identified according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, built from the FMEA results, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation. In addition, Control Plans specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper, we propose a computer-aided solution based on Genetic Algorithms in order to reduce the effort of drafting the reports required at product launch (the FMEA analysis and the Control Plan) and to improve the knowledge available to development teams for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using Genetic Algorithms to find the optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they can be used as a means of finding solutions to multi-criteria optimization problems; in our case, the three specific FMEA risk factors are considered together with the reduction of production cost. The analysis tool generates final reports for all FMEA processes. The data obtained in the FMEA reports are automatically integrated with the other parameters entered in the Control Plan. The solution is implemented as an application running on an intranet on two servers: one containing the analysis and plan generation engine and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting and bending processes used to manufacture chassis for buses. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose is a cheap alternative to other solutions on the market, being implemented using Open Source tools.
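A minimal, self-contained sketch of the kind of genetic algorithm search described above: each candidate plan selects one corrective action per failure mode, is scored on the FMEA risk factors (severity, occurrence, detection, whose product is the RPN) and on an assumed action cost, and the GA searches for a good trade-off between the two criteria. All numbers, the fitness weighting and the encoding are illustrative assumptions, not the authors' implementation.

```python
import random

# Each gene selects one corrective-action option per failure mode.
# For every option we assume (severity, occurrence, detection, cost) values.
OPTIONS = [
    [(8, 6, 5, 0.0), (8, 4, 4, 120.0), (8, 2, 3, 400.0)],   # failure mode 1
    [(7, 5, 6, 0.0), (7, 3, 4, 200.0), (7, 2, 2, 650.0)],   # failure mode 2
    [(9, 4, 5, 0.0), (9, 3, 3, 150.0), (9, 2, 2, 500.0)],   # failure mode 3
]
W_RPN, W_COST = 1.0, 0.3   # assumed weights of the two criteria

def fitness(chromosome):
    # Lower is better: weighted sum of total RPN and total cost of the chosen actions.
    rpn = sum(OPTIONS[i][g][0] * OPTIONS[i][g][1] * OPTIONS[i][g][2]
              for i, g in enumerate(chromosome))
    cost = sum(OPTIONS[i][g][3] for i, g in enumerate(chromosome))
    return W_RPN * rpn + W_COST * cost

def evolve(pop_size=30, generations=50, p_mut=0.2):
    pop = [[random.randrange(len(opt)) for opt in OPTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(OPTIONS))         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                     # mutation: re-pick one gene
                i = random.randrange(len(OPTIONS))
                child[i] = random.randrange(len(OPTIONS[i]))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("chosen action per failure mode:", best, "score:", round(fitness(best), 1))
```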

Keywords: automotive industry, FMEA, control plan, automotive technology

Procedia PDF Downloads 379
86 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz-resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC-Industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the Multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a .25 m x .25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to grasshopper for further implementation in an interdisciplinary study.
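The resonators described above absorb most strongly near their natural frequency, which for a classical Helmholtz resonator depends on the neck cross-section, the effective neck length (neck length plus an end correction) and the cavity volume. The sketch below evaluates that textbook relation over a small grid of the bottleneck and cavity dimensions mentioned in the abstract; the dimensions and the end-correction factor are illustrative assumptions, not values from the study.

```python
import math

C = 343.0  # speed of sound in air [m/s]

def helmholtz_f0(neck_radius, neck_length, cavity_volume):
    """Natural frequency of a Helmholtz resonator, with an assumed flanged-end correction."""
    area = math.pi * neck_radius ** 2
    l_eff = neck_length + 1.7 * neck_radius      # end correction (assumed, both ends flanged)
    return C / (2.0 * math.pi) * math.sqrt(area / (cavity_volume * l_eff))

# Illustrative parameter sweep over bottleneck and cavity dimensions (values assumed).
for r in (0.010, 0.015, 0.020):                  # neck radius [m]
    for V in (0.5e-3, 1.0e-3, 2.0e-3):           # cavity volume [m^3]
        f0 = helmholtz_f0(neck_radius=r, neck_length=0.05, cavity_volume=V)
        print(f"r = {r*1000:.0f} mm, V = {V*1e6:.0f} cm^3 -> f0 approx. {f0:.0f} Hz")
```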

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 135
85 Performance Assessment of Ventilation Systems for Operating Theatres

Authors: Clemens Bulitta, Sasan Sadrizadeh, Sebastian Buhl

Abstract:

Introduction: Ventilation technology in operating theatres (OT) is internationally regulated by different standards, which define basic specifications for the technical equipment and often also the necessary operating and performance parameters. This confronts the operators of healthcare facilities with the question of finding the best ventilation and air conditioning system for the OT in order to achieve the goal of a large and robust surgical work zone with appropriate air quality and climate for patient safety and occupational health. Additionally, energy consumption and the potential need for clothing that limits the transmission of bacteria must be considered, as well as the total life cycle cost. However, the evaluation methodology of ventilation systems regarding these matters is still a topic of discussion. To date, there are neither any uniform standardized specifications nor any common validation criteria established. Thus, this study aimed to review data in the literature and add our own research results to compare and assess the performance of different ventilation systems regarding infection preventive effects, energy efficiency, and staff comfort. Methods: We have conducted a comprehensive literature review on OT ventilation-related topics to understand the strengths and limitations of different ventilation systems. Furthermore, data from experimental assessments of OT ventilation systems at the University of Amberg-Weiden in Germany were included to comparatively assess the performance of Laminar Airflow (LAF), Turbulent Mixing Airflow (TMA), and Temperature-controlled Airflow (TcAF) with regard to patient and occupational safety as well as staff comfort, including indoor climate. CFD simulations from the Royal Institute of Technology in Sweden (KTH) were also studied to visualize the differences between these three kinds of ventilation systems in terms of the size of the surgical work zone, resilience to obstacles in the airflow, and energy use. Results: A variety of ventilation concepts are in use in the OT today. Each has its advantages and disadvantages, and thus one may be better suited than another depending on the built environment and clinical workflow. Moreover, the proper functioning of OT ventilation is also affected by multiple external and internal interfering factors. Based on the available data, TcAF and LAF seem to provide the greatest effects regarding infection control and minimizing airborne risks for surgical site infections without the need for very tight surgical clothing systems. Resilience to obstacles, staff comfort, and energy efficiency seem to be most favourable with TcAF. Conclusion: Based on literature data in current publications and our studies at the Technical University of Applied Sciences Amberg-Weiden and the Royal Institute of Technology, LAF and TcAF are more suitable for minimizing the risk of surgical site infections, leading to improved clinical outcomes. Nevertheless, regarding the best management of thermal loads, atmosphere, energy efficiency, and occupational safety, the overall results and data suggest that TcAF systems could provide the economically most efficient and clinically most effective solution under routine clinical conditions.

Keywords: ventilation systems, infection control, energy efficiency, operating theatre, airborne infection risks

Procedia PDF Downloads 74
84 Investigation of Attitude of Production Workers towards Job Rotation in Automotive Industry against the Background of Demographic Change

Authors: Franciska Weise, Ralph Bruder

Abstract:

Due to the demographic change in Germany, along with the declining birth rate and the increasing age of the population, the share of older people in society is rising. This development is also reflected in the workforce of German companies. Therefore, companies should focus on improving ergonomics, especially in the area of age-related work design. The literature shows that studies on age-related work design have been carried out in the past, some of whose results have been put into practice. However, there is still a need for further research. One of the most important methods for taking into account the needs of an aging population is job rotation. This method aims at preventing or reducing health risks and inappropriate physical strain. It is conceived as a systematic change of workplaces within a group. The existing literature does not cover any methods for the investigation of the attitudes of employees towards job rotation. However, in order to evaluate job rotation, it is essential to have knowledge of people's views on rotation. In addition to an investigation of attitudes, the design of rotation plays a crucial role. The sequence of activities and the rotation frequency influence the worker as well as the work result. The evaluation of preliminary talks on the shop floor showed that team speakers and foremen share a common understanding of job rotation. In practice, different varieties of job rotation exist. One important aspect is the frequency of rotation: workers may never rotate, rotate more than once, rotate at every break, or rotate even more often than every break. It depends on the opportunity or possibility to rotate whenever workers want to. From the preliminary talks, some challenges can be derived; for example, rotation within the whole team is not possible if a team member still needs to be trained for a new task. In order to determine the relation between the design of job rotation and the attitude towards it, a questionnaire was carried out in vehicle manufacturing. The questionnaire is used to determine the different varieties of job rotation that exist in production, as well as the attitudes of workers towards those different frequencies of job rotation. In addition, younger and older employees, split into three age groups, are compared with regard to their rotation frequency and their attitudes towards rotation. Three questions are under examination. The first question is whether older employees rotate less frequently than younger employees. The second is whether the frequency of job rotation and the attitude towards that frequency are interconnected. The third is which differences can be found between the attitudes of the different age groups towards the frequency of rotation. Up to now, 144 employees, all working in production, have taken part in the survey: 36.8% were younger than thirty, 37.5% were between thirty and forty-four, and 25.7% were above forty-five years old. The data show no difference between the three age groups in relation to the frequency of job rotation (N = 139, median = 4, Chi² = .859, df = 2, p = .651). Most employees rotate between six and seven workplaces per day. In addition, there is a statistically significant correlation between the frequency of job rotation and the attitude towards that frequency (Spearman's rho = .223, two-sided p = .008). Fewer than four workplaces per day are not regarded as enough by the employees. The third question, which differences can be found between older and younger people who rotate in different ways and with different attitudes towards job rotation, cannot yet be answered. So far, the data show that younger people would like to rotate very often; for older people, no correlation can be found with acceptable significance. The results of the survey will be used to improve the current practice of job rotation. In addition, the discussions during the survey are expected to help sensitize the employees with respect to rotation issues and to contribute to optimizing rotation by means of qualification and an improved design of job rotation. Together with the employees, and based on the results of the survey, standards must be developed which show how to rotate in an ergonomic way while taking the attitude towards job rotation into account.
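The statistics reported above (a Chi² test across the three age groups and a Spearman rank correlation between rotation frequency and attitude) can be reproduced on raw survey data roughly as follows; the small data arrays here are invented placeholders, not the study's responses.

```python
import numpy as np
from scipy.stats import chi2_contingency, spearmanr

# Invented placeholder data: age group, rotation-frequency category (0-6 workplaces
# per day) and a 5-point attitude rating towards that frequency, per respondent.
rng = np.random.default_rng(0)
age_group = rng.integers(0, 3, size=139)           # 0: <30, 1: 30-44, 2: >=45
rotation_freq = rng.integers(0, 7, size=139)
attitude = np.clip(rotation_freq + rng.integers(-1, 2, size=139), 0, 4)

# Question 1: does rotation frequency differ between the age groups?
table = np.zeros((3, 7), dtype=int)
for g, f in zip(age_group, rotation_freq):
    table[g, f] += 1
chi2, p, dof, _ = chi2_contingency(table)
print(f"Chi-squared = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# Question 2: are rotation frequency and the attitude towards it correlated?
rho, p_rho = spearmanr(rotation_freq, attitude)
print(f"Spearman rho = {rho:.3f}, two-sided p = {p_rho:.3f}")
```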

Keywords: job rotation, age-related work design, questionnaire, automotive industry

Procedia PDF Downloads 281
83 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies in which Form-Based Codes are adopted; the different implementation methods in particular are discussed in order to develop a method for formulating a new planning model. The organizing principle of the Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the Context Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison in terms of the planning tools used, the code process adopted and the various control regulations implemented is carried out for thirty-two different cities. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, and required form-based standards or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale at which a Form-Based Code is formulated varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan, using the transect as the organizing principle for the entire area of the city in general, and formulating Form-Based Codes for the selected special districts in the study area in particular, on a street basis. Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized and stakeholder-owned, while systematic can be thought of more as linear, generalisable, and typically top-down or expert driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as an organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code, which is to be integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in “sensitive” areas of the community. How the new Context Specific Planning Model is created with this approach and method towards achieving sustainability is explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 312
82 Hygrothermal Interactions and Energy Consumption in Cold Climate Hospitals: Integrating Numerical Analysis and Case Studies to Investigate and Analyze the Impact of Air Leakage and Vapor Retarding

Authors: Amir E. Amirzadeh, Richard K. Strand

Abstract:

Moisture-induced problems are a significant concern for building owners, architects, construction managers, and building engineers, as they can have substantial impacts on building enclosures' durability and performance. Computational analyses, such as hygrothermal and thermal analysis, can provide valuable information and demonstrate the expected relative performance of building enclosure systems but are not grounded in absolute certainty. This paper evaluates the hygrothermal performance of common enclosure systems in hospitals in cold climates. The study aims to investigate the impact of exterior wall systems on hospitals, focusing on factors such as durability, construction deficiencies, and energy performance. The study primarily examines the impact of air leakage and vapor retarding layers relative to energy consumption. While these factors have been studied in residential and commercial buildings, there is a lack of information on their impact on hospitals in a holistic context. The study integrates various research studies and professional experience in hospital building design to achieve its objective. The methodology involves surveying and observing exterior wall assemblies, reviewing common exterior wall assemblies and details used in hospital construction, performing simulations and numerical analyses of various variables, validating the model and mechanism using available data from industry and academia, visualizing the outcomes of the analysis, and developing a mechanism to demonstrate the relative performance of exterior wall systems for hospitals under specific conditions. The data sources include case studies from real-world projects and peer-reviewed articles, industry standards, and practices. This research intends to integrate and analyze the in-situ and as-designed performance and durability of building enclosure assemblies with numerical analysis. The study's primary objective is to provide a clear and precise roadmap to better visualize and comprehend the correlation between the durability and performance of common exterior wall systems used in the construction of hospitals and the energy consumption of these buildings under certain static and dynamic conditions. As the construction of new hospitals and renovation of existing ones have grown over the last few years, it is crucial to understand the effect of poor detailing or construction deficiencies on building enclosure systems' performance and durability in healthcare buildings. This study aims to assist stakeholders involved in hospital design, construction, and maintenance in selecting durable and high-performing wall systems. It highlights the importance of early design evaluation, regular quality control during the construction of hospitals, and understanding the potential impacts of improper and inconsistent maintenance and operation practices on occupants, owner, building enclosure systems, and Heating, Ventilation, and Air Conditioning (HVAC) systems, even if they are designed to meet the project requirements.
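Steady-state hygrothermal screening of a wall assembly, in the spirit of the Glaser/dew-point method, compares the vapour pressure profile through the layers with the saturation pressure implied by the temperature profile; where the vapour pressure would exceed saturation, condensation is predicted, which is exactly the risk a vapour retarding layer is meant to control. The sketch below shows that calculation for an invented three-layer cold-climate wall; the layer properties, boundary conditions and the saturation-pressure correlation are assumptions for illustration, not values or assemblies from the study.

```python
import math

def p_sat(T_c):
    # Saturation vapour pressure over water [Pa]; approximation used in EN ISO 13788-style checks.
    return 610.5 * math.exp(17.269 * T_c / (237.3 + T_c))

# Invented interior-to-exterior layers: (name, thermal resistance R [m2K/W], vapour resistance sd [m])
layers = [
    ("gypsum board",       0.08, 0.1),
    ("vapour retarder",    0.00, 20.0),   # set sd to ~0.5 to see the effect of omitting it
    ("mineral wool",       3.50, 0.2),
    ("exterior sheathing", 0.50, 1.5),
]
Rsi, Rse = 0.13, 0.04                      # surface resistances [m2K/W]
Ti, Te = 21.0, -10.0                       # indoor / outdoor temperature [deg C] (assumed)
phi_i, phi_e = 0.40, 0.85                  # indoor / outdoor relative humidity (assumed)

R_tot = Rsi + Rse + sum(r for _, r, _ in layers)
sd_tot = sum(sd for _, _, sd in layers)
pi, pe = phi_i * p_sat(Ti), phi_e * p_sat(Te)

T, p = Ti - (Ti - Te) * Rsi / R_tot, pi    # state just inside the first layer
for name, R, sd in layers:
    T -= (Ti - Te) * R / R_tot             # temperature drop across the layer
    p -= (pi - pe) * sd / sd_tot           # vapour pressure drop across the layer
    status = "CONDENSATION RISK" if p > p_sat(T) else "ok"
    print(f"after {name:18s}: T = {T:6.1f} C, p = {p:6.0f} Pa, p_sat = {p_sat(T):6.0f} Pa  {status}")
```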

Keywords: hygrothermal analysis, building enclosure, hospitals, energy efficiency, optimization and visualization, uncertainty and decision making

Procedia PDF Downloads 40
81 From Biowaste to Biobased Products: Life Cycle Assessment of VALUEWASTE Solution

Authors: Andrés Lara Guillén, José M. Soriano Disla, Gemma Castejón Martínez, David Fernández-Gutiérrez

Abstract:

The worldwide population is increasing exponentially, which causes a rising demand for food, energy and non-renewable resources. These demands must be attended to from a circular economy point of view. Under this approach, obtaining strategic products from biowaste is crucial for society to keep its current lifestyle while reducing the environmental and social issues linked to the linear economy. This is the main objective of the VALUEWASTE project. VALUEWASTE is about valorizing urban biowaste into proteins for food and feed and into biofertilizers, closing the loop of this waste stream. In order to achieve this objective, the project validates three value chains, which begin with the anaerobic digestion of the biowaste. From the anaerobic digestion, three by-products are obtained: i) methane, which is used by microorganisms that are transformed into microbial proteins; ii) digestate, which is used by the black soldier fly, producing insect proteins; and iii) a nutrient-rich effluent, which is transformed into biofertilizers. VALUEWASTE is an innovative solution that combines different technologies to valorize the biowaste entirely. However, it is also required to demonstrate that the solution is greener than other traditional technologies (baseline systems). On one hand, the proteins from microorganisms and insects will be compared with other reference protein production systems (gluten, whey and soybean). On the other hand, the biofertilizers will be compared to the production of mineral fertilizers (ammonium sulphate and synthetic struvite). Therefore, the aim of this study is to demonstrate that biowaste valorization can reduce the environmental impacts linked to both traditional protein manufacturing processes and mineral fertilizers, not only at a pilot scale but also at an industrial one. In the present study, both the baseline systems and the VALUEWASTE solution are evaluated through Environmental Life Cycle Assessment (E-LCA). The E-LCA is based on the standards ISO 14040 and 14044. The Environmental Footprint methodology was used in this study to evaluate the environmental impacts. The results for the baseline cases show that the food proteins coming from whey have the highest environmental impact on ecosystems compared to the other protein sources: 7.5 and 15.9 times higher than soybean and gluten, respectively. Comparing feed soybean and gluten, soybean has an environmental impact on human health 195.1 times higher. In the case of biofertilizers, synthetic struvite has higher impacts than ammonium sulphate: 15.3 times (ecosystems) and 11.8 times (human health), respectively. The results shown in the present study will be used as a reference to demonstrate the better environmental performance of the bio-based products obtained through the VALUEWASTE solution. The E-LCA performed in the VALUEWASTE project also has direct implications for investment and policy. On one hand, better environmental performance will serve to remove the barriers linked to these kinds of technologies, boosting investment backed by the E-LCA. On the other hand, it will be a seed for designing new policies fostering these types of solutions to achieve two of the key targets of the European Community: becoming self-sustainable and carbon neutral.

Keywords: anaerobic digestion, biofertilizers, circular economy, nutrients recovery

Procedia PDF Downloads 69
80 Feasibility of Washing/Extraction Treatment for the Remediation of Deep-Sea Mining Trailings

Authors: Kyoungrean Kim

Abstract:

The importance of deep-sea mineral resources is dramatically increasing due to the depletion of land mineral resources corresponding to increasing human economic activities. Korea has acquired exclusive exploration licenses in four areas: the Clarion-Clipperton Fracture Zone in the Pacific Ocean (2002), Tonga (2008), Fiji (2011) and the Indian Ocean (2014). Preparation for commercial mining by Nautilus Minerals (Canada) and Lockheed Martin (USA) is expected by 2020. The London Protocol 1996 (LP) under the International Maritime Organization (IMO) and the International Seabed Authority (ISA) will set environmental guidelines for deep-sea mining by 2020, to protect the marine environment. In this research, the applicability of washing/extraction treatment for the remediation of deep-sea mining tailings was mainly evaluated in order to present preliminary data for developing practical remediation technology in the near future. Polymetallic nodule samples were collected at the Clarion-Clipperton Fracture Zone in the Pacific Ocean and then stored at room temperature. Samples were pulverized using a jaw crusher and a ball mill and then classified into 3 particle sizes (> 63 µm, 63-20 µm, < 20 µm) using vibratory sieve shakers (Analysette 3 Pro, Fritsch, Germany) with 63 µm and 20 µm sieves. Only the 63-20 µm particle size fraction was used as the sample for investigation, considering the lower limit of the ore dressing process, which is tens of µm to 100 µm. Rhamnolipid and sodium alginate, as biosurfactants, and aluminum sulfate, which is mainly used as a flocculant, were used as environmentally friendly additives. Samples were adjusted to 2% liquid with deionized water and then mixed with various concentrations of additives. The mixture was stirred with a magnetic bar for specific reaction times, and then the liquid phase was separated by a centrifugal separator (Thermo Fisher Scientific, USA) at 4,000 rpm for 1 h. The separated liquid was filtered with a syringe and an acrylic-based filter (0.45 µm). The extracted heavy metals in the filtered liquid were then determined using a UV-Vis spectrometer (DR-5000, Hach, USA) and a heat block (DBR 200, Hach, USA) following US EPA methods (8506, 8009, 10217 and 10220). The polymetallic nodule was mainly composed of manganese (27%), iron (8%), nickel (1.4%), copper (1.3%), cobalt (1.3%) and molybdenum (0.04%). Based on the remediation standards of various countries, nickel (Ni), copper (Cu), cadmium (Cd) and zinc (Zn) were selected as primary target materials. Throughout this research, the use of rhamnolipid was shown to be an effective approach for removing heavy metals from samples originating from manganese nodules. Sodium alginate might also be one of the effective additives for the remediation of deep-sea mining tailings such as polymetallic nodules. Compared to rhamnolipid and sodium alginate, aluminum sulfate was a more effective additive at short reaction times within 4 h. Based on these results, a sequence of particle separation, selective extraction/washing, advanced filtration of the liquid phase, water treatment without dewatering, and solidification/stabilization may be considered as candidate technologies for the remediation of deep-sea mining tailings.
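For bench tests of this kind, the washing efficiency is usually back-calculated from the metal concentration measured in the separated liquid, the solid-to-liquid ratio of the suspension, and the metal content of the untreated solids. The sketch below shows that bookkeeping for an invented Ni measurement; all numbers except the 2% suspension ratio and the 1.4% Ni nodule content mentioned in the abstract are placeholders, not results from the study.

```python
# Placeholder values for illustration (not measured data from the study).
solids_fraction = 0.02           # interpretation of the "2% liquid" suspension used in the tests
sample_mass_g = 2.0              # dry tailings in the batch [g] (assumed)
liquid_volume_l = sample_mass_g / 1000.0 / solids_fraction * (1 - solids_fraction)  # approx. 0.098 L

ni_in_solid_mg_per_kg = 14000.0  # Ni content of the nodule fraction (1.4% = 14,000 mg/kg)
ni_in_liquid_mg_per_l = 25.0     # Ni measured in the filtered extract (invented)

extracted_mg = ni_in_liquid_mg_per_l * liquid_volume_l
total_mg = ni_in_solid_mg_per_kg * sample_mass_g / 1000.0
efficiency = 100.0 * extracted_mg / total_mg
print(f"extracted {extracted_mg:.2f} mg of {total_mg:.1f} mg Ni -> {efficiency:.1f} % removal")
```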

Keywords: deep-sea mining tailings, heavy metals, remediation, extraction, additives

Procedia PDF Downloads 132
79 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors, which are climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” that come from other available spatial products. The model is then generalized over grids where soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increase of available spatial covariates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing and new models and approaches are developed at an incredible speed. Most of the digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication to end-users and education, especially on the use of uncertainty. Overall, the progress is very important and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and also promising to provide tools to improve and monitor soil quality in countries, in the EU and at the global level.
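A minimal sketch of the calibrate-then-generalize workflow described above, using a random forest (one of the machine learning models commonly used in DSM) to predict a soil property from environmental covariates and to report a simple hold-out validation. The synthetic data stand in for real point observations and covariate grids; covariate names, sample sizes and the target relationship are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-ins for point observations: covariates drawn from climate,
# relief and a remote-sensing index; the target mimics e.g. topsoil organic carbon.
rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(10, 3, n),      # mean annual temperature
    rng.normal(800, 200, n),   # annual precipitation
    rng.uniform(0, 1500, n),   # elevation
    rng.uniform(0, 25, n),     # slope
    rng.uniform(0.1, 0.9, n),  # NDVI-like vegetation index
])
y = 30 - 1.2 * X[:, 0] + 0.01 * X[:, 1] + 8 * X[:, 4] + rng.normal(0, 3, n)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_cal, y_cal)                     # calibration on observed points

pred = model.predict(X_val)                 # generalization to locations held out of calibration
print(f"R2 = {r2_score(y_val, pred):.2f}, RMSE = {mean_squared_error(y_val, pred) ** 0.5:.2f}")

# Per-tree spread as a crude, map-ready indication of prediction uncertainty.
tree_preds = np.stack([t.predict(X_val) for t in model.estimators_])
print(f"mean per-location prediction std: {tree_preds.std(axis=0).mean():.2f}")
```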

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 157
78 Geochemistry and Tectonic Framework of Malani Igneous Suite and Their Effect on Groundwater Quality of Tosham, India

Authors: Naresh Kumar, Savita Kumari, Naresh Kochhar

Abstract:

The objective of the study was to assess the role of mineralogy and subsurface structure on water quality of Tosham, Malani Igneous Suite (MIS), Western Rajasthan, India. MIS is the largest (55,000 km2) A-type, anorogenic and high heat producing acid magmatism in the peninsular India and owes its origin to hot spot tectonics. Apart from agricultural and industrial wastes, geogenic activities cause fluctuations in quality parameters of water resources. Twenty water samples (20) selected from Tosham and surrounding areas were analyzed for As, Pb, B, Al, Zn, Fe, Ni using Inductive coupled plasma emission and F by Ion Chromatography. The concentration of As, Pb, B, Ni and F was above the stipulated level specified by BIS (Bureau of Indian Standards IS-10500, 2012). The concentration of As and Pb in surrounding areas of Tosham ranged from 1.2 to 4.1 mg/l and from 0.59 to 0.9 mg/l respectively which is higher than limits of 0.05mg/l (As) and 0.01 mg/l (Pb). Excess trace metal accumulation in water is toxic to humans and adversely affects the central nervous system, kidneys, gastrointestinal tract, skin and cause mental confusion. Groundwater quality is defined by nature of rock formation, mineral water reaction, physiography, soils, environment, recharge and discharge conditions of the area. Fluoride content in groundwater is due to the solubility of fluoride-bearing minerals like fluorite, cryolite, topaz, and mica, etc. Tosham is comprised of quartz mica schist, quartzite, schorl, tuff, quartz porphyry and associated granites, thus, fluoride is leached out and dissolved in groundwater. In the study area, Ni concentration ranged from 0.07 to 0.5 mg/l (permissible limit 0.02 mg/l). The primary source of nickel in drinking water is leached out nickel from ore-bearing rocks. Higher concentration of As is found in some igneous rocks specifically containing minerals as arsenopyrite (AsFeS), realgar (AsS) and orpiment (As2S3). MIS consists of granite (hypersolvus and subsolvus), rhyolite, dacite, trachyte, andesite, pyroclasts, basalt, gabbro and dolerite which increased the trace elements concentration in groundwater. Nakora, a part of MIS rocks has high concentration of trace and rare earth elements (Ni, Rb, Pb, Sr, Y, Zr, Th, U, La, Ce, Nd, Eu and Yb) which percolates the Ni and Pb to groundwater by weathering, contacts and joints/fractures in rocks. Additionally, geological setting of MIS also causes dissolution of trace elements in water resources beneath the surface. NE–SW tectonic lineament, radial pattern of dykes and volcanic vent at Nakora created a way for leaching of these elements to groundwater. Rain water quality might be altered by major minerals constituents of host Tosham rocks during its percolation through the rock fracture, joints before becoming the integral part of groundwater aquifer. The weathering process like hydration, hydrolysis and solution might be the cause of change in water chemistry of particular area. These studies suggest that geological relation of soil-water horizon with MIS rocks via mineralogical variations, structures and tectonic setting affects the water quality of the studied area.
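A small sketch of the screening step implied above: measured concentrations are compared against the BIS IS-10500 (2012) limits to flag exceedances and quantify how far above the limit each metal lies. The sample values are invented placeholders; the limits for As, Pb and Ni are the ones quoted in the abstract (0.05, 0.01 and 0.02 mg/l).

```python
# Limits as quoted in the abstract (mg/l); sample concentrations are invented placeholders.
limits = {"As": 0.05, "Pb": 0.01, "Ni": 0.02}
sample = {"As": 2.3, "Pb": 0.74, "Ni": 0.31}   # e.g. one hypothetical groundwater sample near Tosham

for metal, conc in sample.items():
    limit = limits[metal]
    ratio = conc / limit
    flag = "EXCEEDS limit" if conc > limit else "within limit"
    print(f"{metal}: {conc:.2f} mg/l vs {limit:.2f} mg/l -> {ratio:.0f}x the limit, {flag}")
```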

Keywords: geochemistry, groundwater, malani igneous suite, tosham

Procedia PDF Downloads 186
77 Global News Coverage of the Pandemic: Towards an Ethical Framework for Media Professionalism

Authors: Anantha S. Babbili

Abstract:

This paper analyzes the media practices currently dominant in global journalism within the framework of the Libertarian, Authoritarian, Communist, and Social Responsibility world press theories to evaluate their efficacy in addressing the coverage of the coronavirus, also known as COVID-19. Global media flows, the determinants of news coverage, international awareness and the Western view of the world will be critically analyzed within the context of the prevalent news values that underpin free press and media coverage of the world. While evaluating the global discourse paramount to a sustained and dispassionate understanding of world events, this paper proposes an ethical framework that brings clarity, devoid of sensationalism, partisanship, and right-wing or left-wing interpretations, to a breaking and dangerous pandemic. As the world struggles to contain the coronavirus pandemic, with deaths climbing close to 6,000 from late January to mid-March 2020, the populations of developed as well as developing nations are beset with news media renditions of the crisis that are contradictory, confusing and evoke anxiety, fear and hysteria. How are we to understand differing news standards and news values? What lessons do we as journalism and mass media educators, researchers, and academics learn in order to construct a better news model and structure of media practice that addresses science, health, and media literacy among media practitioners, journalists, and news consumers? As traditional media struggles to cover the pandemic for its audience and consumers, social media, from which an increasing number of consumers get their news, has exerted its influence in both positive and negative ways. Even as the world struggles to grasp the full significance of the pandemic, the World Health Organization (WHO) has been feverishly battling an additional challenge related to the pandemic in what it termed an 'infodemic': 'an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it.' There is, indeed, a need for journalism and news coverage in times of pandemics that reflects social responsibility and the ethos of public service journalism. Social media and high-tech information corporations, collectively termed GAMAF (Google, Apple, Microsoft, Amazon, and Facebook), can team up with reliable traditional media (newspapers, magazines, book publishers, and radio and television corporations) to ease public emotions and be helpful in times of a pandemic outbreak. GAMAF can, conceivably, weed out sensational and non-credible sources of coronavirus information, exotic cures offered for sale as a quick fix, and demonetize videos that exploit people's vulnerabilities at their lowest ebb. Credible news of utility, delivered in a sustained, calm, and reliable manner, serves people in a meaningful and helpful way. The world's consumers of news and information deserve a healthy and trustworthy news media, at least in the time of the COVID-19 pandemic. Towards this end, the paper proposes a practical model for news media and journalistic coverage during times of a pandemic.

Keywords: COVID-19, international news flow, social media, social responsibility

Procedia PDF Downloads 82
76 A Community Solution to Address Extensive Nitrate Contamination in the Lower Yakima Valley Aquifer

Authors: Melanie Redding

Abstract:

Historic widespread nitrate contamination of the Lower Yakima Valley aquifer in Washington State initiated a community-based effort to reduce nitrate concentrations to below drinking water standards. This group commissioned studies on characterizing local nitrogen sources, deep soil assessments, drinking water, and nitrate concentrations at the water table. Nitrate is the most prevalent groundwater contaminant, with common sources including animal and human waste, fertilizers, plants and precipitation. It is challenging to address groundwater contamination when common sources, such as agriculture, on-site sewage systems, and animal production, are widespread. Remediation is not possible, so mitigation is essential. The Lower Yakima Valley covers over 175,000 acres, with a population of 56,000 residents. Approximately 25% of the population does not have access to safe, clean drinking water, and 20% of the population is at or below the poverty level. Agriculture is the primary economic land-use activity. Irrigated agriculture and livestock production make up the largest percentage of acreage and nitrogen load. Commodities include apples, grapes, hops, dairy, silage corn, triticale, alfalfa and cherries. These commodities are important to the economic viability of the residents of the Lower Yakima Valley, as well as Washington State. Mitigation of nitrate in groundwater is challenging, and the goal is to ensure everyone has safe drinking water. There are no easy remedies due to the extent and pervasiveness of the contamination. Monitoring at the water table indicates that 45% of the 30 spatially distributed monitoring wells exceeded the drinking water standard, which indicates that multiple sources are impacting water quality. Washington State has several areas with extensive groundwater nitrate contamination, and the groundwater in these areas continues to degrade over time. However, the Lower Yakima Valley has been successful in addressing this health issue for the following reasons: the community is engaged and committed; there is one common goal; there has been extensive public education and outreach to citizens; and credible data are being generated using sound scientific methods. Work in this area is continuing as an ambient groundwater monitoring network is established to assess the condition of the aquifer over time. Nitrate samples are being collected from 170 wells, spatially distributed across the aquifer. This research entails quarterly sampling for two years to characterize seasonal variability, continuing annually afterward. This assessment will provide the data to statistically determine trends in nitrate concentrations across the aquifer over time. Thirty-three of these wells are monitoring wells that are screened across the aquifer, so the water quality from these wells is indicative of activities at the land surface. Additional work is being conducted to identify land use management practices that are effective in limiting nitrate migration through the soil column. Tracking nitrate in the soil column every season is an important component of bridging land-use practices with the fate and transport of nitrate through the subsurface. Patience, tenacity, and the ability to think outside the box are essential for dealing with widespread nitrate contamination of groundwater.
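The trend assessment planned for the ambient monitoring network can be illustrated with a small sketch: for each well, quarterly nitrate concentrations are tested for a monotonic trend over time. A Theil-Sen slope and Kendall's tau are used here as one reasonable choice of robust trend statistics; the well data below are hypothetical.

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

def nitrate_trend(days, concentrations_mg_per_l):
    """Robust slope (mg/L per day) and a Kendall-tau p-value for a monotonic trend."""
    slope, intercept, lo, hi = theilslopes(concentrations_mg_per_l, days)
    tau, p_value = kendalltau(days, concentrations_mg_per_l)
    return slope, p_value

# Example: eight quarterly samples from one monitoring well (hypothetical values)
days = np.arange(8) * 91
nitrate = np.array([9.8, 10.4, 10.1, 11.2, 11.0, 11.9, 12.3, 12.1])
slope, p = nitrate_trend(days, nitrate)
print(f"trend: {slope * 365:.2f} mg/L per year, p = {p:.3f}")
```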

Keywords: community, groundwater, monitoring, nitrate

Procedia PDF Downloads 148
75 Exposing The Invisible

Authors: Kimberley Adamek

Abstract:

According to the Council on Tall Buildings, there has been a rapid increase in the construction of tall or “megatall” buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s; the eastern expansion of the USA's infamous Tornado Alley is just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, will have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified within wind tunnels, and a handful of cities such as San Francisco have included wind testing within building code standards, there are still many examples where wind is only considered for basic loading. This typically results in increased structural expense and unwanted mitigation strategies proposed late within a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (Walkie-Talkie Tower, Leeds, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and caused wind tunnels in streets, as well as many other issues. Alternatively, there exist towers which have used their form to naturally draw in air and ventilate entire facilities in order to eliminate the need for costly HVAC systems (The Met, Thailand), or to increase wind speeds to generate electricity (Bahrain Tower, Dubai). Wind and weather exist and affect all parts of the world in ways such as science, health, war, infrastructure, catastrophes, tourism, shopping, media and materials. Working in partnership with a leading wind engineering company, RWDI, a series of tests, images and animations documenting discovered interactions of different building forms with wind will be collected to emphasize the possibilities of wind use to architects. A site within San Francisco (chosen due to its increasing tower development, consistent wind conditions and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested within wind tunnel and computational fluid dynamics systems, which will expose, utilize and manipulate wind flows to create new forms, technologies and experiences. Ultimately, this thesis aims to question the extent to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers, as well as the general public the ability to broaden their scope in order to productively utilize this living phenomenon that everyone constantly feels but cannot see.

Keywords: wind engineering, climate, visualization, architectural aerodynamics

Procedia PDF Downloads 337
74 Providing Leadership in Nigerian University Education Research Enterprise: The Imperative of Research Ethics

Authors: O. O. Oku, K. S. Jerry-Alagbaoso

Abstract:

It is universally acknowledged that the primary function of universities is the generation and dissemination of knowledge. This mission is pursued through the research component of the university programme, especially at the post-graduate level. The senior academic staff teach, supervise and provide general academic leadership to post-graduate students, who are expected to carry out research leading to the presentation of a dissertation as a requirement for the award of a doctoral degree in their various disciplines. Carrying out the research enterprise involves a great deal of collaboration among individuals and communities. The need to safeguard the interest of everyone involved in the enterprise makes the development of ethical standards in research imperative. Ensuring the development and effective application of such ethical standards falls within the leadership role of the vice-chancellors, deans of post-graduate schools/faculties, heads of departments and supervisors. It is the relevance and application of such ethical standards in Nigerian university research efforts that this study discussed. The study adopted the descriptive research design. A researcher-made 4-point rating scale was used to elicit information from post-graduate dissertation supervisors sampled from one university in each of the six geo-political zones in Nigeria using the purposive sampling technique. The data collected were analysed using the mean score and standard deviation. The findings of the study include, among others, that there are several cases of unethical practices by Ph.D. dissertation students in Nigerian universities. Prominent among these are duplicating research topics, making unauthorized copies of data, papers or computer programmes, failing to acknowledge the contributions of relevant people and authors, and rigging an experiment to pre-empt the result. Some of the causes of the unethical practices, according to the respondents, include inadequate funding of universities resulting in inadequate remuneration for university teachers, inadequacy of equipment and infrastructure, poor supervision of Ph.D. students, poverty on the part of the student researchers and the non-application of sanctions on violators. Improved funding of the Nigerian university system with emphasis on both staff and student research efforts, admitting academically oriented students into the Ph.D. programme and ensuring the application of appropriate sanctions in cases of unethical conduct in research featured prominently among the needed leadership imperatives. Based on the findings of the study, the researchers recommend the development of university research policies that are closely tied to each university's strategic plan. Such a plan should explain the research focus that will attract more funding and direct students' interest towards it without violating the principle of academic freedom. The plan should also incorporate the establishment of a research administration office to provide the necessary link between students and funding agencies and to organise training for supervisors on the leadership activities expected of them, while educating students on the processes involved in carrying out a qualitative and acceptable research study. Such an exercise should include the ethical principles and guidelines that cover all parts of research, from the research topic through the literature review to the design and the truthful reporting of results.

Keywords: academic leadership, ethical standards, research stakeholders, research enterprise

Procedia PDF Downloads 211
73 DSF Elements in High-Rise Timber Buildings

Authors: Miroslav Premrov, Andrej Štrukelj, Erika Kozem Šilih

Abstract:

The utilization of prefabricated timber-wall elements with double glazing, called double-skin façade (DSF) elements, represents an innovative structural approach in the context of new high-rise timber construction, combining sustainable solutions with improved energy efficiency and living quality. In addition to the minimum energy needs of buildings, the design of modern buildings is also increasingly focused on optimal indoor comfort, in particular on sufficient natural light indoors. An optimally energy-designed building with an optimal layout of glazed areas around the building envelope represents a great potential in modern timber construction. Usually, all these transparent façade elements are, because of energy benefits, primarily asymmetrically oriented, and if they are considered as non-resisting against horizontal load impact, strong torsion effects can appear in the building. The problem of structural stability against a strong horizontal load impact of such modern timber buildings especially increases in the case of high-rise structures, where additional bracing elements have to be used. In such a case, special diagonal bracing systems or other bracing solutions with common timber wall elements have to be incorporated into the structure of the building to satisfy all resistance requirements prescribed by the standards. However, such structural solutions are usually not environmentally friendly and do not contribute to improved living comfort, or they are not accepted by the architects at all. Consequently, there is a particular need to develop innovative load-bearing timber-glass wall elements which are at the same time environmentally friendly, can increase internal comfort in the building, and are also load-bearing. The newly developed load-bearing DSF elements can be a good answer to all these requirements. Timber-glass DSF wall elements consist of two transparent layers: a thermally insulated three-layered glass pane on the internal side and an additional single-layered glass pane on the external side of the wall. The two panes are separated by an air channel, which can be of any dimensions and can have a significant influence on the thermal insulation or acoustic response of such a wall element. Most previously published studies on DSF elements deal primarily with energy and LCA solutions and do not address structural problems. Previous studies based on experimental analysis and mathematical modeling have already presented a possible benefit of such load-bearing DSF elements, especially in comparison with previously developed load-bearing single-skin timber wall elements, but they have not yet been applied in any high-rise timber structure. Therefore, in the presented study, a specially selected 10-storey prefabricated timber building constructed in a cross-laminated timber (CLT) structural wall system is analyzed using the developed DSF elements in order to increase the lateral structural stability of the whole building. The results clearly highlight the importance of the load-bearing DSF elements, as their incorporation can have a significant impact on the overall behavior of the structure through their influence on the stiffness properties. Taking these considerations into account is crucial to ensure compliance with seismic design codes and to improve the structural resilience of high-rise timber buildings.

Keywords: glass, high-rise buildings, numerical analysis, timber

Procedia PDF Downloads 16
72 Hydraulic Headloss in Plastic Drainage Pipes at Full and Partially Full Flow

Authors: Velitchko G. Tzatchkov, Petronilo E. Cortes-Mejia, J. Manuel Rodriguez-Varela, Jesus Figueroa-Vazquez

Abstract:

Hydraulic headloss, expressed by the values of the friction factor f and Manning's coefficient n, is an important parameter in designing drainage pipes. Their values are normally taken from manufacturer recommendations, many times without sufficient experimental support. To our knowledge, there is currently no standard procedure for hydraulically testing such pipes. As a result of research carried out at the Mexican Institute of Water Technology, a laboratory testing procedure was proposed and applied to 6- and 12-inch diameter polyvinyl chloride (PVC) and high-density dual-wall polyethylene (HDPE) drainage pipes. While the PVC pipe is characterized by naturally smooth interior and exterior walls, the dual-wall HDPE pipe has a corrugated exterior wall and, although considered smooth, a slightly wavy interior wall. The pipes were tested at full and partially full pipe flow conditions. The tests for full pipe flow were carried out on a 31.47 m long pipe at flow velocities between 0.11 and 4.61 m/s. Water was supplied by gravity from a 10 m-high tank in some of the tests, and from a 3.20 m-high tank in the rest of the tests. Pressure was measured independently with piezometer readings and pressure transducers. The flow rate was measured by an ultrasonic meter. For partially full pipe flow, the pipe was placed inside an existing 49.63 m long zero-slope (horizontal) channel. The flow depth was measured by piezometers located along the pipe, for flow rates between 2.84 and 35.65 L/s, measured by a rectangular weir. The observed flow profiles were then compared to computer-generated theoretical gradually varied flow profiles for different Manning's n values. It was found that Manning's n, which is normally assumed constant for a given pipe material, is in fact dependent on flow velocity and pipe diameter for full pipe flow, and on flow depth for partially full pipe flow. Contrary to the expected higher values of n and f for the HDPE pipe, virtually the same values were obtained for the smooth interior wall PVC pipe and the slightly wavy interior wall HDPE pipe. The explanation of this fact was found in Henry Morris' theory for smooth turbulent conduit flow over isolated roughness elements. Following Morris, three categories of flow regime are possible in a rough conduit: isolated roughness (or semi-smooth turbulent) flow, wake interference (or hyper-turbulent) flow, and skimming (or quasi-smooth) flow. Isolated roughness flow is characterized by friction drag turbulence over the wall between the roughness elements, and independent vortex generation and dissipation around each roughness element. In this regime, the wake and vortex generation zones at each element develop and dissipate before reaching the next element. The longitudinal spacing of the roughness elements and their height are the important influencing factors. Given the slightly wavy form of the HDPE pipe interior wall, the flow for this type of pipe belongs to this category. Based on that theory, an equation for the hydraulic friction factor was obtained. The obtained coefficient values will be used in the Mexican design standards.
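The back-calculation of the two headloss coefficients from a full-pipe test run follows standard hydraulics (Darcy-Weisbach for f, and Manning with hydraulic radius R = D/4 for a full circular pipe), as in the sketch below; the measured values shown are hypothetical, only the relations are standard.

```python
import math

def friction_factor(headloss_m, length_m, diameter_m, velocity_m_s, g=9.81):
    """Darcy-Weisbach: h_f = f (L/D) V^2 / (2g), solved for f."""
    return headloss_m * diameter_m * 2 * g / (length_m * velocity_m_s ** 2)

def mannings_n(headloss_m, length_m, diameter_m, velocity_m_s):
    """Manning: V = (1/n) R^(2/3) S^(1/2), solved for n with R = D/4 and S = h_f/L."""
    hydraulic_radius = diameter_m / 4.0
    slope = headloss_m / length_m
    return hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope) / velocity_m_s

L, D = 31.47, 0.152      # test pipe length (m) and 6-inch nominal diameter (m)
V, hf = 2.0, 0.85        # measured velocity (m/s) and headloss (m), hypothetical
print(f"f = {friction_factor(hf, L, D, V):.4f}, n = {mannings_n(hf, L, D, V):.4f}")
```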

Keywords: drainage plastic pipes, hydraulic headloss, hydraulic friction factor, Manning’s n

Procedia PDF Downloads 252
71 Tensile Behaviours of Sansevieria Ehrenbergii Fiber Reinforced Polyester Composites with Water Absorption Time

Authors: T. P. Sathishkumar, P. Navaneethakrishnan

Abstract:

The research work investigates the variation of tensile properties for sansevieria ehrenbergii fiber (SEF) and SEF reinforced polyester composites with respect to various water absorption times. The experiments were conducted according to the ASTM D3379-75 and ASTM D570 standards. The percentage of water absorption for composite specimens was measured according to the ASTM D570 standard. The SE fiber was cut into 30 mm lengths for preparation of the composites. The simple hand lay-up method followed by a compression moulding process was adopted to prepare the randomly oriented SEF reinforced polyester composites at a constant fiber weight fraction of 40%. Surface treatment was performed on the SEFs with various chemicals such as NaOH, KMnO4, benzoyl peroxide, benzoyl chloride and stearic acid before preparing the composites. NaOH was used as pre-treatment for all other chemical treatments. The morphology of the tensile-fractured specimens was studied using scanning electron microscopy. Tensile tests of the SEF and SEF reinforced polymer composites were carried out at various water absorption times of 4, 8, 12, 16, 20 and 24 hours respectively. The results show that the tensile strength dropped off with increasing water absorption time for all composites. The highest tensile strength was found for the raw fiber due to its lowest moisture content; likewise, the chemical bond between the cellulose and cementing materials such as lignin and wax was strongest at the lowest moisture content. Tensile load was lowest and elongation was highest for the water-absorbed fibers over the various water absorption time ranges. During this process, the fiber cellulose takes in water and the primary and secondary fiber walls expand, increasing the moisture content in the fibers and ultimately increasing the hydrogen cations and hydroxide anions released from the water. In tensile testing, the water-absorbed fibers show the highest elongation through stretching of the expanded cellulose walls, and the bonding strength within the fiber cellulose is low. The load carrying capability stabilized at 20 hours of water absorption time. This could directly affect the interfacial bonding between fiber and matrix and the composite strength. The chemically treated fibers carry higher load and show lower elongation, which is due to the removal of lignin, hemicellulose and wax content. Water absorption time decreases the tensile strength of the composites. The chemically treated SEF reinforced composites show higher tensile strength compared to untreated SEF reinforced composites, owing to the larger bonding area between fiber and matrix, as shown in the morphology of the fracture zone of the composites. Intra-fiber debonding occurred through water encapsulation in the fiber cellulose. Among all composites, the tensile strength was found to be highest for the KMnO4 treated SEF reinforced composite, due to better interfacial bonding between fiber and matrix compared to the other treated fiber composites. The percentage of water absorption of the composites increased with water absorption time. The percentage weight gain of chemically treated SEF composites at 4 hours relative to zero water absorption is 9, 9, 10, 10.8 and 9.5 for NaOH, BP, BC, KMnO4 and SA respectively. The percentage weight gain of chemically treated SEF composites at 24 hours relative to zero water absorption is 5.2, 7.3, 12.5, 16.7 and 13.5 for NaOH, BP, BC, KMnO4 and SA respectively. Hence, among the treatments, the NaOH treated SEF composites showed the lowest weight gain and water uptake, while the KMnO4 treated composites showed the highest. The chemically treated SEF reinforced composites are therefore possible materials for automotive applications like body panels, bumpers and interior parts, and household applications like tables and racks.
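The water absorption percentages reported above follow the usual ASTM D570-style calculation, i.e. mass gain relative to the conditioned (dry) specimen mass, as in this small sketch with hypothetical specimen masses:

```python
# Percentage water absorption: gain in specimen mass relative to the dry mass.
def water_absorption_percent(dry_mass_g, wet_mass_g):
    return (wet_mass_g - dry_mass_g) / dry_mass_g * 100.0

# Hypothetical dry/soaked masses; treatment labels match those used in the study
specimens = {
    "NaOH":  (12.40, 13.05),
    "KMnO4": (12.10, 14.12),
}
for treatment, (dry, wet) in specimens.items():
    print(f"{treatment}: {water_absorption_percent(dry, wet):.1f}% gain after soaking")
```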

Keywords: fibres, polymer-matrix composites (PMCs), mechanical properties, scanning electron microscopy (SEM)

Procedia PDF Downloads 389
70 Regulatory Governance as a De-Parliamentarization Process: A Contextual Approach to Global Constitutionalism and Its Effects on New Arab Legislatures

Authors: Abderrahim El Maslouhi

Abstract:

The paper aims to analyze an often-overlooked dimension of global constitutionalism, namely the rise of the regulatory state and its impact on parliamentary dynamics in transition regimes. In contrast to Majone's technocratic vision of convergence towards a single regulatory system based on competence and efficiency, national transpositions of regulatory governance and, in general, the relationship to global standards primarily depend upon a number of distinctive parameters. These include the policy formation process, the speed of change, the depth of parliamentary tradition, and greater or lesser vulnerability to the normative conditionality of donors, interstate groupings and transnational regulatory bodies. Based on a comparison between three post-Arab Spring countries (Morocco, Tunisia, and Egypt, whose constitutions underwent substantive review in the period 2011-2014) and some European Union member states, the paper intends, first, to assess the degree of permeability to global constitutionalism in different contexts. A noteworthy divide emerges from this comparison. Whereas European constitutions still seem impervious to the lexicon of global constitutionalism, the influence of the latter is obvious in the recently drafted constitutions of Morocco, Tunisia, and Egypt. This is evidenced by their reference to notions such as ‘governance’, ‘regulators’, ‘accountability’, ‘transparency’, ‘civil society’, and ‘participatory democracy’. Second, the study will provide a contextual account of the internal and external rationales underlying the constitutionalization of regulatory governance in the cases examined. Unlike European constitutionalism, where parliamentarism and the tradition of representative government function as a structural mechanism that moderates the de-parliamentarization effect induced by global constitutionalism, Arab constitutional transitions have led to a paradoxical situation: contrary to public demands for further parliamentarization, the 2011 constitution-makers have opted for a de-parliamentarization pattern. This is particularly reflected in the procedures established by constitutions and ordinary legislation to handle the interaction between lawmakers and regulatory bodies. Once the ‘constitutional’ and ‘independent’ nature of these agencies is formally endorsed, the birth of these ‘fourth power’ entities, which are neither elected nor directly responsible to elected officials, raises the question of their accountability. Third, the paper shows that, even in the three selected countries, the de-parliamentarization intensity varies significantly. In contrast to the radical stance of the Moroccan and Egyptian constituents, who have shown greater concern to shield regulatory bodies from legislatures' scrutiny, the Tunisian case indicates a certain tendency to provide lawmakers with some essential control instruments (e.g., exclusive appointment power, adversarial discussion of regulators' annual reports, and a dismissal power later held unconstitutional). In sum, the comparison reveals that the transposition of the regulatory state model and, more generally, sensitivity to the legal implications of global conditionality essentially rely on the evolution of real-world power relations at both national and international levels.

Keywords: Arab legislatures, de-parliamentarization, global constitutionalism, normative conditionality, regulatory state

Procedia PDF Downloads 109
69 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis

Authors: Kimberly Samaha

Abstract:

In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a ‘whole-tree’ approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of the biomass power plant facilities. The study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital and operational expenditures and investment risk reductions, using a proprietary techno-economic model that evaluates investment risk scenarios with the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and is built up in stages to include a greenhouse and a land-based shrimp farm as co-hosts. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location de-risks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture or aquaculture. The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a federal program, which allows capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical tool to expedite new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
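A minimal sketch of the Monte Carlo comparison described above: uncertain revenue and operating costs are sampled and the annual profit distributions are compared for a stand-alone plant versus a co-located configuration in which shared waste streams change the cost structure. All distributions and figures are illustrative assumptions and do not reproduce the proprietary model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def annual_profit(revenue_mean, opex_mean, capex, lifetime_years=20):
    """Sample annual profit ($M) with normally distributed revenue and opex."""
    revenue = rng.normal(revenue_mean, 0.10 * revenue_mean, N)
    opex = rng.normal(opex_mean, 0.15 * opex_mean, N)
    return revenue - opex - capex / lifetime_years   # straight-line capital recovery

# Hypothetical figures: the co-located case adds revenue from co-hosts while shared
# CO2 and heat streams keep the marginal operating cost increase small.
standalone = annual_profit(revenue_mean=12.0, opex_mean=9.0, capex=40.0)
co_located = annual_profit(revenue_mean=15.0, opex_mean=9.5, capex=46.0)

for name, profit in (("stand-alone", standalone), ("bio-hub", co_located)):
    print(f"{name}: mean {profit.mean():.2f} $M/yr, P(loss) = {(profit < 0).mean():.1%}")
```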

Keywords: bio-economy, investment risk, circular design, economic modelling

Procedia PDF Downloads 81
68 Quantified Metabolomics for the Determination of Phenotypes and Biomarkers across Species in Health and Disease

Authors: Miroslava Cuperlovic-Culf, Lipu Wang, Ketty Boyle, Nadine Makley, Ian Burton, Anissa Belkaid, Mohamed Touaibia, Marc E. Surrette

Abstract:

Metabolic changes are one of the major factors in the development of a variety of diseases in various species. The metabolism of agricultural plants is altered following infection with pathogens, sometimes contributing to resistance; at the same time, pathogens use metabolites for infection and progression. In humans, altered metabolism is a hallmark of cancer development, for example. Quantified metabolomics data combined with other omics or clinical data and analyzed using various unsupervised and supervised methods can lead to better diagnosis and prognosis. It can also provide information about resistance, as well as contribute knowledge of compounds significant for disease progression or prevention. In this work, different methods for metabolomics quantification and analysis from Nuclear Magnetic Resonance (NMR) measurements, used for the investigation of disease development in wheat and human cells, will be presented. One-dimensional 1H NMR spectra are used extensively for metabolic profiling due to their high reliability, wide range of applicability, speed, trivial sample preparation and low cost. This presentation describes a new method for metabolite quantification from NMR data that combines alignment of the spectra of standards to the sample spectra, followed by multivariate linear regression optimization of the spectra of assigned metabolites against the samples' spectra. Several different alignment methods were tested, and the multivariate linear regression results have been compared with other quantification methods. Quantified metabolomics data can be analyzed in a variety of ways, and we will present different clustering methods used for phenotype determination, network analysis providing knowledge about the relationships between metabolites through the metabolic network, as well as biomarker selection providing novel markers. These analysis methods have been utilized for the investigation of fusarium head blight resistance in wheat cultivars, as well as analysis of the effect of estrogen receptor and carbonic anhydrase activation and inhibition on breast cancer cell metabolism. Metabolic changes in spikelets of the wheat cultivars FL62R1, Stettler, MuchMore and Sumai3 following Fusarium graminearum infection were explored. Extensive 1D 1H and 2D NMR measurements provided information for detailed metabolite assignment and quantification, leading to possible metabolic markers discriminating the resistance level in wheat subtypes. The quantification data are compared to results obtained using other published methods. Fusarium-infection-induced metabolic changes in different wheat varieties are discussed in the context of the metabolic network and resistance. Quantitative metabolomics has also been used for the investigation of the effect of targeted enzyme inhibition in cancer. In this work, the effect of 17β-estradiol and ferulic acid on the metabolism of ER+ breast cancer cells has been compared to their effect on ER- control cells. The effect of inhibitors of carbonic anhydrase on the observed metabolic changes resulting from ER activation has also been determined. Metabolic profiles were studied using 1D and 2D metabolomic NMR experiments, combined with the identification and quantification of metabolites, and the annotation of the results is provided in the context of biochemical pathways.
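The quantification idea, fitting the sample spectrum as a linear combination of aligned reference spectra of the assigned metabolites, can be sketched as follows; a non-negative least-squares fit stands in for the multivariate linear regression step, and the spectra are synthetic placeholders rather than real NMR data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_points = 2000
ppm = np.linspace(0.5, 9.5, n_points)

def peak(center, width=0.02, height=1.0):
    """Gaussian stand-in for an NMR resonance."""
    return height * np.exp(-0.5 * ((ppm - center) / width) ** 2)

# Reference (standard) spectra of three assigned metabolites, one column per metabolite
references = np.column_stack([
    peak(1.33) + peak(4.11),     # lactate-like pattern (illustrative)
    peak(3.03),                  # creatine-like pattern (illustrative)
    peak(2.13) + peak(2.45),     # glutamine-like pattern (illustrative)
])

true_conc = np.array([2.0, 0.7, 1.4])                       # arbitrary concentration units
sample = references @ true_conc + rng.normal(0, 0.02, n_points)

# Multivariate linear fit of the reference spectra to the sample spectrum,
# constrained to non-negative concentrations
estimated, _ = nnls(references, sample)
print("estimated concentrations:", np.round(estimated, 2))
```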

Keywords: metabolic biomarkers, metabolic network, metabolomics, multivariate linear regression, NMR quantification, quantified metabolomics, spectral alignment

Procedia PDF Downloads 317
67 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator

Authors: Kaushikk Iyer, Richard M Hall, David Keeling

Abstract:

Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, the existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better and more advanced control methods that can be developed and tested rigorously but safely before being deployed into the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect in this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar linkage powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, sinusoidal motion and load profiles for wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to PID, a fuzzy logic controller (FLC) was also developed that acts as a supervisory controller. The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This fuzzy-PID combination is novel for the wear testing application in spinal simulators and demonstrated superior performance against PID when tested over a spectrum of frequencies. Results obtained are successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms that can potentially test even profiles of patients performing various daily living activities.
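A minimal sketch of the control idea, a discrete PID position loop tracking an ISO 18192-1 style sinusoidal profile on an idealised velocity-driven axis, with a crude rule-based stand-in for the fuzzy supervisory layer that rescales the gains with profile amplitude and frequency, is given below. The plant model, gains and rules are illustrative assumptions, not the simulator's tuned values.

```python
import math

def supervisor_gains(amplitude_deg, freq_hz, base=(120.0, 40.0, 0.5)):
    """Tiny rule-based stand-in for the fuzzy supervisory layer: scale base PID gains."""
    kp, ki, kd = base
    scale = max((amplitude_deg / 7.5) * (freq_hz / 1.0), 0.5)
    return kp * scale, ki, kd * scale

def run_cycle(amplitude_deg=7.5, freq_hz=1.0, dt=0.001, seconds=2.0):
    """Track a sinusoidal angle profile with a PID velocity command on an ideal axis."""
    kp, ki, kd = supervisor_gains(amplitude_deg, freq_hz)
    angle, integral, prev_err, worst_err = 0.0, 0.0, 0.0, 0.0
    for k in range(int(seconds / dt)):
        t = k * dt
        target = amplitude_deg * math.sin(2 * math.pi * freq_hz * t)
        err = target - angle
        integral += err * dt
        derivative = (err - prev_err) / dt
        command = kp * err + ki * integral + kd * derivative   # velocity command (deg/s)
        angle += command * dt                                  # idealised velocity-driven axis
        prev_err = err
        if t > 0.25:                                           # ignore start-up transient
            worst_err = max(worst_err, abs(err))
    return worst_err

print(f"worst tracking error after settling: {run_cycle():.3f} deg")
```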

Keywords: Fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator

Procedia PDF Downloads 147
66 Automatic Content Curation of Visual Heritage

Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz

Abstract:

Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, of which 52,000 posters have been digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. The challenge has similarities with the domain of song playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate their generation. Promising results were found thanks to Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm, but applied to a challenging medium: posters. First, a convolutional autoencoder was trained to extract features of the posters. The 52,000 digital posters were used as a training set. Poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the best proximity to the previous poster is selected. The mean square distance between poster features was used to compute the proximity. To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. Manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25% and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45% and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to designer sequences, and as reported by participants, model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing compared to random sampling. Occasionally, our algorithm is even able to outperform a professional designer. As a next step, the proposed algorithm should include the possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content.
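A simplified sketch of the sequencing pipeline follows: precomputed feature vectors stand in for the convolutional autoencoder output, features are clustered, the next cluster is predicted from the cluster history, and within the predicted cluster the poster closest to the previous one (mean squared distance) is chosen. A first-order transition table replaces the RNN purely to keep the sketch short; all data are synthetic.

```python
import numpy as np
from collections import Counter, defaultdict
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))                 # placeholder for autoencoder features
clusters = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(features)

# "Training" sequences: cluster labels of curated poster sequences (synthetic stand-in
# for the sequences extracted from the museum's reference books)
curated_sequences = [rng.integers(0, 12, size=15) for _ in range(40)]
transitions = defaultdict(Counter)
for seq in curated_sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        transitions[a][b] += 1

def next_poster(prev_idx):
    """Predict the next cluster, then pick the closest poster within it."""
    prev_cluster = clusters[prev_idx]
    counts = transitions[prev_cluster]
    predicted_cluster = counts.most_common(1)[0][0] if counts else prev_cluster
    candidates = np.flatnonzero(clusters == predicted_cluster)
    candidates = candidates[candidates != prev_idx]
    dists = ((features[candidates] - features[prev_idx]) ** 2).mean(axis=1)
    return int(candidates[np.argmin(dists)])

sequence = [7]
for _ in range(14):
    sequence.append(next_poster(sequence[-1]))
print("generated poster sequence (indices):", sequence)
```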

Keywords: artificial intelligence, digital humanities, serendipity, design research

Procedia PDF Downloads 151
65 The Analysis of Noise Harmfulness in Public Utility Facilities

Authors: Monika Sobolewska, Aleksandra Majchrzak, Bartlomiej Chojnacki, Katarzyna Baruch, Adam Pilch

Abstract:

The main purpose of the study is to perform measurement and analysis of noise harmfulness in public utility facilities. The World Health Organization reports that the number of people suffering from hearing impairment is constantly increasing, and the number of young people appearing in these statistics is the most alarming. The majority of scientific research in the field of hearing protection and noise prevention concerns industrial and road traffic noise as the source of health problems, and as a result, corresponding standards and regulations defining noise level limits are enforced. However, there is another field not covered by thorough research: leisure time. Public utility facilities such as clubs, shopping malls, sport facilities or concert halls all generate high-level noise while remaining outside proper legal control. Among European Union Member States, the highest legislative act concerning noise prevention is the Environmental Noise Directive 2002/49/EC. However, it omits the problem discussed above, and even for traffic, railway and aircraft noise it does not set limits or target values, leaving these issues to the discretion of the Member State authorities. Without explicit and uniform regulations, noise level control at places designed for relaxation and entertainment is often the responsibility of people having little knowledge of hearing protection, unaware of the risk that noise pollution poses. Exposure to high sound levels in clubs, cinemas, at concerts and sports events may result in progressive hearing loss, especially among young people, who are the main target group of such facilities and events. The first step to change this situation and to raise general awareness is to perform reliable measurements, the results of which will emphasize the significance of the problem. This project presents the results of more than a hundred measurements performed in most types of public utility facilities in Poland. As the most suitable measuring instruments for such research, personal noise dosimeters were used to collect the data. Each measurement is presented in the form of numerical results, including equivalent and peak sound pressure levels, and a detailed description considering the type of the sound source, the size and furnishing of the room, and a subjective sound level evaluation. In the absence of a direct reference point for the interpretation of the data, the limits specified in EU Directive 2003/10/EC were used for comparison; they set maximum sound level values for workers in relation to the length of their working time. The analysis of the examined problem leads to the conclusion that during leisure time, people are exposed to noise levels significantly exceeding safe values. As hearing problems progress gradually, most people underplay the problem, ignoring the first symptoms. Therefore, an effort has to be made to specify noise regulations for public utility facilities. Without any action, in the foreseeable future the majority of Europeans will be dealing with serious hearing damage, which will have a negative impact on whole societies.
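The basic dosimetry arithmetic behind that comparison, the equivalent continuous level Leq over the measurement and the 8-hour-normalised exposure level LEX,8h that the Directive 2003/10/EC values refer to (action values 80 and 85 dB(A), exposure limit 87 dB(A)), can be sketched as follows with hypothetical half-hour samples from a club night:

```python
import math

def leq(samples_db):
    """Equivalent continuous sound level from equal-duration Leq samples."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in samples_db) / len(samples_db))

def lex_8h(leq_db, exposure_hours):
    """Daily noise exposure level normalised to an 8-hour reference duration."""
    return leq_db + 10 * math.log10(exposure_hours / 8.0)

club_night = [92, 95, 97, 99, 101, 100, 98, 96]   # dB(A), one value per 30 min, hypothetical
l_eq = leq(club_night)
print(f"Leq = {l_eq:.1f} dB(A), LEX,8h for a 4 h visit = {lex_8h(l_eq, 4):.1f} dB(A)")
```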

Keywords: hearing protection, noise level limits, noise prevention, noise regulations, public utility facilities

Procedia PDF Downloads 191
64 Squaring the Triangle: A Stumpian Solution to the Major Frictions that Exist between Pragmatism, Religion, and Moral Progress; Richard Bernstein, Cornel West, and Hans-Georg Gadamer Re-Examined

Authors: Martin Bloomfield

Abstract:

This paper examines frictions that lie at the heart of any pragmatist conception of religion and moral progress. I take moral progress to require the ability to correctly analyse social problems, provide workable solutions to these problems, and then rationally justify the analyses and solutions used. I take religion here to involve, as a minimal requirement, belief in the existence of God, a god, or gods, such that they are recognisable to most informed observers within the Western tradition. I take pragmatism to belong to, and borrow from, the philosophical traditions of non-absolutism, anti-realism, historicism, and voluntarism. For clarity, the relevant brands of each of these traditions will be examined during the paper. The friction identified in the title may be summed up as follows: those who, like Cornel West (and, when he was alive, Hilary Putnam), are theistic pragmatists with an interest in realising moral progress have all been aware of a problem inherent in their positions. Assuming it can be argued that religion and moral progress are compatible, a non-absolutist, anti-realist, historicist position nevertheless raises problems: as Leon Wieseltier pointed out, the pragmatist still believes in a God who isn't real; the truth of any religious statement (including “God exists”) is relative not to any objective reality but to communities of engaged interlocutors; and, where there are no absolute standards of right and wrong, any analysis of (and solution to) social problems can only be rationally justified relative to one or another community or moral and epistemic framework. Attempts made to universalise these frameworks, notably by Dewey, Gadamer, and Bernstein, through democracy and hermeneutics, fall into either a vicious and infinite regress or (taking inspiration from Habermas) the problem of moral truths being decided through structures of power. The paper removes this friction by highlighting the work of the Christian pragmatist Cornel West through the lens of the philosopher of religion Eleonore Stump. West recognises that, for the pragmatist, the correctness of any propositions about God or moral progress is impossible to rationally justify to anyone outside the religious, moral or epistemic framework of the speakers themselves without, as he calls it, a ‘locus of truth’ (which is itself free from the difficulties Dewey, Gadamer and Bernstein fall victim to). Stump identifies routes to knowledge which provide such a locus while avoiding the problems of relativism, power dynamics, and regress. She describes “Dominican” and “Franciscan” knowledge (roughly characterised as “propositional” and “non-propositional”), and uses this distinction to identify something Bernstein saw as missing from Gadamer: culture-independent norms, upon which universal agreement can be built. The “Franciscan knowledge” Stump identifies as key is second-personal knowledge of Christ. For West, this allows the knower to access vital culture-independent norms. If correct, instead of the classical view that religion is incompatible with pragmatism, Christianity becomes key to pragmatist knowledge and moral-knowledge claims. Rather than being undermined by pragmatism, Christianity enables pragmatists to make moral and epistemic claims, free from troubling power dynamics and cultural relativism.

Keywords: Cornel West, cultural relativism, Gadamer, philosophy of religion, pragmatism

Procedia PDF Downloads 176