Search results for: operational risks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2901

111 Participatory Monitoring Strategy to Address Stakeholder Engagement Impact in Co-creation of NBS Related Project: The OPERANDUM Case

Authors: Teresa Carlone, Matteo Mannocchi

Abstract:

In the last decade, a growing number of international organizations have pushed toward green solutions for adaptation to climate change. This is particularly true in the fields of Disaster Risk Reduction (DRR) and land planning, where Nature-Based Solutions (NBS) have been sponsored through funding programs and planning tools. Stakeholder engagement and co-creation of NBS are growing as a practice and research field in environmental projects, fostering the consolidation of a multidisciplinary socio-ecological approach to addressing hydro-meteorological risk. Even though research and financial interest are spreading steadily, the NBS mainstreaming process is still at an early stage, as innovative concepts and practices are difficult for a multitude of different actors to fully accept and adopt in order to produce wide-scale societal change. The monitoring and impact evaluation of stakeholders’ participation in these processes represent a crucial aspect and should be seen as a continuous and integral element of the co-creation approach. However, setting up a fit-for-purpose monitoring strategy for different contexts is not an easy task, and multiple challenges emerge. In this scenario, the Horizon 2020 OPERANDUM project, designed to address the major hydro-meteorological risks that negatively affect European rural and natural territories through the co-design, co-deployment, and assessment of Nature-Based Solutions, represents a valid case study for testing a monitoring strategy from which to derive a broader, general, and scalable monitoring framework. Applying a participative monitoring methodology based on a selected list of indicators that combines quantitative and qualitative data developed within the activities of the project, the paper proposes an experimental in-depth analysis of the impact of stakeholder engagement on the co-creation process of NBS. The main focus will be to identify and analyze which factors increase knowledge, social acceptance, and mainstreaming of NBS, also promoting an experience-based guideline that could be integrated into the stakeholder engagement strategy of current and future environmental projects based on strongly collaborative approaches, such as OPERANDUM. Measurements will be carried out through surveys submitted at different points in time to the same sample of stakeholders (policy makers, businesses, researchers, interest groups). Changes will be recorded and analyzed through focus groups in order to highlight causal explanations and to assess the proposed list of indicators to steer the conduct of similar activities in other projects and/or contexts. The idea of the paper is to contribute to the construction of a more structured and shared corpus of indicators that can support the evaluation of the activities of involvement and participation of various levels of stakeholders in the co-production, planning, and implementation of NBS to address climate change challenges.

Keywords: co-creation and collaborative planning, monitoring, nature-based solution, participation & inclusion, stakeholder engagement

Procedia PDF Downloads 89
110 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS

Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert

Abstract:

The integration of renewable energy introduces new challenges to the transmission grid, as the power generation is located far from load centers. The associated long-range power transmission increases the demand for high voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgears (GIS) are considered a key technology due to the combination of DC technology with the long operating experience of AC-GIS. To ensure the long-term reliability of such systems, insulation defects must be detected at an early stage. Operational experience with AC systems has provided evidence that most failures which can be attributed to breakdowns of the insulation system can be detected and identified beforehand via partial discharge (PD) measurements. In AC systems, the identification of defects relies on the phase-resolved partial discharge pattern (PRPD). Since there is no phase information in DC systems, this method cannot be transferred to DC PD diagnostics. Furthermore, the behaviour of, e.g., free-moving particles differs significantly at DC: under the influence of a constant direct electric field, charge carriers can accumulate on a particle’s surface. As a result, a particle can lift off, oscillate between the inner conductor and the enclosure, or rapidly bounce at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle with respect to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emissions at the particle’s tip, which enables optical detection. This contribution investigates the PD characteristics of free-moving metallic particles in a commercially available 300 kV SF6-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristics are evaluated experimentally. Several particle geometries, such as cylinder, lamella, spiral and sphere, with different lengths, diameters and weights, are examined. The applied DC voltage is increased stepwise from the inception voltage up to UDC = ± 400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by a UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with wavelengths in the range of λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle’s motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low-frequency ion currents: the shunt measuring system’s sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers generated at the particle’s tip migrate through the gas gap to the electrode and can be recorded by the current shunt. All recorded PD signals are analyzed in order to identify characteristic properties of different particles. This includes, e.g., repetition rates and amplitudes of successive pulses, characteristic frequency ranges, and the detected signal energy of single PD pulses. In conclusion, an advanced understanding of the underlying physical phenomena of particle motion in a direct electric field can be derived.
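
As an illustration of the kind of pulse-sequence statistics mentioned above (repetition rate, amplitudes of successive pulses, and signal energy of single PD pulses), the minimal sketch below extracts these quantities from a digitized record. It is not the authors' processing chain; the detection threshold, window length, sampling rate and synthetic test trace are illustrative assumptions only.

```python
import numpy as np

def pd_pulse_statistics(signal, fs, threshold, window_s=1e-6):
    """Basic pulse-sequence statistics from a digitized PD record.

    signal    : 1-D array with the recorded waveform (V)
    fs        : sampling rate (Hz)
    threshold : detection threshold (V); value is an assumption
    window_s  : analysis window per pulse (s); value is an assumption
    """
    mask = np.abs(signal) > threshold
    # rising edges of the detection mask mark the onset of individual pulses
    onsets = np.flatnonzero(np.diff(mask.astype(int)) == 1)
    record_length = signal.size / fs
    repetition_rate = onsets.size / record_length          # pulses per second
    win = max(1, int(window_s * fs))
    amplitudes = np.array([np.max(np.abs(signal[i:i + win])) for i in onsets])
    energies = np.array([np.sum(signal[i:i + win] ** 2) / fs for i in onsets])
    return repetition_rate, amplitudes, energies

# synthetic example: 1 ms of noise sampled at 2 GS/s with three injected pulses
fs = 2e9
t = np.arange(int(1e-3 * fs)) / fs
rng = np.random.default_rng(0)
trace = 0.005 * rng.standard_normal(t.size)
for start in (0.1e-3, 0.4e-3, 0.7e-3):
    idx = int(start * fs)
    trace[idx:idx + 200] += 0.2 * np.exp(-np.arange(200) / 50.0)

rate, amps, en = pd_pulse_statistics(trace, fs, threshold=0.05)
print(f"{rate:.0f} pulses/s, mean amplitude {amps.mean():.3f} V")
```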

Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF

Procedia PDF Downloads 135
109 Seek First to Regulate, Then to Understand: The Case for Preemptive Regulation of Robots

Authors: Catherine McWhorter

Abstract:

Robotics is a fast-evolving field lacking comprehensive and harm-mitigating regulation; it also lacks critical data on how human-robot interaction (HRI) may affect human psychology. As most anthropomorphic robots are intended as substitutes for humans, this paper asserts that the commercial robotics industry should be preemptively regulated at the federal level such that robots capable of embodying a victim role in criminal scenarios (“vicbots”) are prohibited until clinical studies determine their effects on the user and society. The results of these studies should then inform more permanent legislation that strives to mitigate risks of harm without infringing upon fundamental rights or stifling innovation. This paper explores these concepts through the lens of the sex robot industry. The sexbot industry offers some of the most realistic, interactive, and customizable robots for sale today. From approximately 2010 until 2017, some sex robot producers, such as True Companion, actively promoted ‘vicbot’ culture with personalities like “Frigid Farrah” and “Young Yoko” but received significant public backlash for fetishizing rape and pedophilia. Today, “Frigid Farrah” and “Young Yoko” appear to have vanished. Sexbot producers have replaced preprogrammed vicbot personalities in favor of one generic, customizable personality. According to the manufacturer ainidoll.com, when asked, there is only one thing the user won’t be able to program the sexbot to do – “…give you drama”. The ability to customize vicbot personas is possible with today’s generic personality sexbots and may undermine the intent of some current legislative efforts. Current debate on the effects of vicbots indicates a lack of consensus. Some scholars suggest vicbots may reduce the rate of actual sex crimes, and some suggest that vicbots will, in fact, create sex criminals, while others cite their potential for rehabilitation. Vicbots may have value in some instances when prescribed by medical professionals, but the overall uncertainty and lack of data further underscore the need for preemptive regulation and clinical research. Existing literature on exposure to media violence and its effects on prosocial behavior, human aggression, and addiction may serve as launch points for specific studies into the hyperrealism of vicbots. Of course, the customization, anthropomorphism and artificial intelligence of sexbots, and therefore more mainstream robots, will continue to evolve. The existing sexbot industry offers an opportunity to preemptively regulate and to research answers to these and many more questions before this type of technology becomes even more advanced and mainstream. Robots pose complicated moral, ethical, and legal challenges, most of which are beyond the scope of this paper. By examining the possibility for custom vicbots via the sexbots industry, reviewing existing literature on regulation, media violence, and vicbot user effects, this paper strives to underscore the need for preemptive federal regulation prohibiting vicbot capabilities in robots while advocating for further research into the potential for the user and societal harm by the same.

Keywords: human-robot interaction effects, regulation, research, robots

Procedia PDF Downloads 170
108 An Odyssey to Sustainability: The Urban Archipelago of India

Authors: B. Sudhakara Reddy

Abstract:

This study provides a snapshot of the sustainability of selected Indian cities by employing 70 indicators in four dimensions to develop an overall city sustainability index. In recent years, the concept of ‘urban sustainability’ has become prominent due to its complexity. Urban areas propel growth and at the same time pose many ecological, social and infrastructural problems and risks. In developing countries, high population density and continuous in-migration carry the highest risk of natural and man-made disasters. These issues, combined with the inability of policy makers to provide basic services, make cities unsustainable. To assess whether any given policy is moving towards or against urban sustainability, it is necessary to consider the relationships among its various dimensions. Hence, in recent years, an integrated approach involving indicators of different dimensions, such as ‘economic’, ‘environmental’ and 'social', has been used when preparing sustainability indices. It is also important for urban planners, social analysts and other related institutions to identify and understand the relationships in this complex system. The objective of the paper is to develop a city performance index (CPI) to measure and evaluate urban regions in terms of sustainable performance. The objectives include: i) objective assessment of a city’s performance, ii) setting achievable goals, iii) prioritising relevant indicators for improvement, iv) learning from leaders, v) assessment of the effectiveness of programmes that result in achieving high indicator values, and vi) strengthening of stakeholder participation. Using the benchmark approach, a conceptual framework is developed for evaluating 25 Indian cities. We develop a City Sustainability Index (CSI) in order to rank cities according to their level of sustainability. The CSI is composed of four dimensions: economic, environmental, social, and institutional. Each dimension is further composed of multiple indicators: (1) economic, which considers growth, access to electricity, and telephone availability; (2) environmental, which includes waste water treatment and carbon emissions; (3) social, which includes equity and infant mortality; and (4) institutional, which includes the voting share of the population and urban regeneration policies. The CSI, consisting of four dimensions, is disaggregated into 12 categories and ultimately into 70 indicators. The data are obtained from public and non-governmental organizations, as well as from city officials and experts. By ranking a sample of diverse cities on a set of specific dimensions, the study can serve as a baseline of current conditions and a marker for referencing future results. The benchmarks and indices presented in the study provide a unique resource for the government and city authorities to learn about the positive and negative attributes of a city and prepare plans for sustainable urban development. As a result of our conceptual framework, the set of criteria we suggest differs somewhat from those already in the literature. The scope of our analysis is intended to be broad. Although illustrated with specific examples, it should be apparent that the principles identified are relevant to any monitoring that is used to inform decisions involving decision variables. These indicators are policy-relevant and hence are a useful tool for decision-makers and researchers.
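
As a simple illustration of how a composite index of this kind can be assembled, the sketch below scales a handful of indicators to a 0-1 range, inverts those where lower raw values are better, and aggregates them into dimension scores and an overall CSI. The indicator names, groupings, weights and city values are invented for illustration and do not reproduce the study's 70-indicator data set.

```python
import numpy as np

# hypothetical indicator values for three cities (not the study's data)
indicators = {
    "access_to_electricity_pct": {"CityA": 98.0, "CityB": 85.0, "CityC": 92.0},
    "wastewater_treated_pct":    {"CityA": 60.0, "CityB": 30.0, "CityC": 45.0},
    "infant_mortality_per_1000": {"CityA": 22.0, "CityB": 40.0, "CityC": 31.0},
    "voter_turnout_pct":         {"CityA": 55.0, "CityB": 48.0, "CityC": 62.0},
}
# indicators where a lower raw value is better are inverted after scaling
lower_is_better = {"infant_mortality_per_1000"}
# mapping of indicators to the four CSI dimensions (illustrative)
dimensions = {
    "economic":      ["access_to_electricity_pct"],
    "environmental": ["wastewater_treated_pct"],
    "social":        ["infant_mortality_per_1000"],
    "institutional": ["voter_turnout_pct"],
}

def min_max(values):
    """Scale a {city: value} mapping to the 0-1 range."""
    lo, hi = min(values.values()), max(values.values())
    return {c: (v - lo) / (hi - lo) if hi > lo else 0.5 for c, v in values.items()}

cities = ["CityA", "CityB", "CityC"]
scaled = {}
for name, values in indicators.items():
    s = min_max(values)
    if name in lower_is_better:
        s = {c: 1.0 - v for c, v in s.items()}
    scaled[name] = s

# equal weights within and across dimensions (an assumption)
csi = {}
for city in cities:
    dim_scores = [np.mean([scaled[i][city] for i in inds])
                  for inds in dimensions.values()]
    csi[city] = float(np.mean(dim_scores))

for city, score in sorted(csi.items(), key=lambda kv: -kv[1]):
    print(f"{city}: CSI = {score:.2f}")
```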

Keywords: benchmark, city, indicator, performance, sustainability

Procedia PDF Downloads 247
107 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction

Authors: B. Godefroy, D. Beladjine, K. Beddiar

Abstract:

In France, construction BIM actors commonly evoke the gains of BIM for the operation phase through the integration of the building's life cycle. Standardization at level 7 of development would achieve this stage of the digital model. The building owners include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing complex. In a context of financial constraint, BIM for operation aims to control costs, make long-term investment choices, renew the portfolio and enable environmental standards to be met. It assumes knowledge of the existing buildings, marked by their size and complexity. The information sought must be synthetic and structured; in general, it concerns a real estate complex. We conducted a study with professionals about their concerns and their ways of using BIM in order to see how building owners could benefit from this development. To obtain results, and keeping in mind the recurring question from project management about the needs of operators, we tested the following stages: 1) instil a minimal BIM culture in the operator's multidisciplinary teams, then in each business unit; 2) learn, through BIM tools, how to adapt each trade to operations; 3) understand the place and creation of a graphic and technical database management system, and determine the components of its library according to their needs; 4) identify the cross-functional interventions of its managers by business unit (operations, technical, information systems, purchasing and legal aspects); 5) set an internal protocol and define the impact of BIM on their digital strategy. In addition, continuity of management through the integration of construction models in the operation phase raises the question of interoperability in controlling the production of IFC files in the operator’s proprietary format and the export and import processes, a solution rivalled by the traditional method of vectorization of paper plans. Companies that digitize housing complexes, and those in facilities management, produce IFC files directly according to their needs without recourse to the construction model; they produce business models for operation. They standardize the components and equipment that are useful for coding. We observed the consequences resulting from the use of BIM in the property industry and made the following observations: a) the value of data prevails over the graphics, and 3D is little used; b) the owner must, through his organization, promote the feedback of technical management information during the design phase; c) the operator's reflection on outsourcing concerns the acquisition of its information system and related services, weighing the risks and costs of developing them internally or externally. This study allows us to highlight: i) the need for an internal organization of operators prior to responding to construction management; ii) the evolution towards automated methods for creating models dedicated to operation, for which specialization would be required; iii) a review of communication with project management, since management continuity does not revolve around the building model alone; it must take into account the operator's environment and reflect on its scope of action.
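
For owners who want to populate a technical database for operation directly from delivered digital models, the relevant data can in principle be extracted from IFC files with an open-source parser. The fragment below is only a sketch, assuming the ifcopenshell library and a placeholder file name; the IFC classes queried are common examples and would have to match the operator's own coding conventions, not any specific list from the paper.

```python
import ifcopenshell  # open-source IFC parser (assumed available)

# "asset.ifc" is a placeholder path to a delivered construction model
model = ifcopenshell.open("asset.ifc")

# inventory spaces and doors of the complex for the operator's database;
# other IFC classes (e.g. IfcFlowTerminal for equipment) can be listed the same way
inventory = []
for entity_type in ("IfcSpace", "IfcDoor"):
    for element in model.by_type(entity_type):
        inventory.append({
            "ifc_class": element.is_a(),
            "global_id": element.GlobalId,
            "name": element.Name,
        })

print(f"{len(inventory)} elements extracted for the operation database")
```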

Keywords: information system, interoperability, models for exploitation, property industry

Procedia PDF Downloads 124
106 The Stability of Vegetable-Based Synbiotic Drink during Storage

Authors: Camelia Vizireanu, Daniela Istrati, Alina Georgiana Profir, Rodica Mihaela Dinica

Abstract:

Globally, there is great interest in promoting the consumption of fruit and vegetables to improve health. Due to their content of essential compounds such as antioxidants, important amounts of fruits and vegetables should be included in the daily diet. Juices are good sources of vitamins and can also help increase overall fruit and vegetable consumption. Starting from this trend (the introduction of vegetables and fruits into the daily diet) as well as the desire to diversify the range of functional products for both adults and children, a fermented juice based on root vegetables was made using probiotic microorganisms, with potential beneficial effects in the diet of children, vegetarians and people with lactose intolerance. The three vegetables selected for this study, red beet, carrot, and celery, bring a significant contribution of functional compounds such as carotenoids, flavonoids, betalain, vitamins B and C, minerals and fiber. Through fermentation, the functional value of the vegetable juice increases due to the improved stability of these compounds. The combination of probiotic microorganisms and vegetable fibers resulted in a nutrient-rich synbiotic product. The stability of the nutritional and sensory qualities of the obtained synbiotic product was tested throughout its shelf life. The evaluation of the physico-chemical changes of the synbiotic drink during storage confirmed that: (i) vegetable juice enriched with honey and vegetable pulp is an important source of nutritional compounds, especially carbohydrates and fiber; (ii) the microwave treatment used to inhibit pathogenic microflora did not significantly affect the nutritional compounds in the vegetable juice; the vitamin C concentration remained at baseline and the beta-carotene concentration increased due to increased bioavailability; (iii) fermentation improved the nutritional quality of the vegetable juice by increasing the content of B vitamins, polyphenols and flavonoids, and the product showed good antioxidant capacity throughout its shelf life; (iv) the FTIR and Raman spectra supported the results obtained using physico-chemical methods. Based on the analysis of IR absorption frequencies, the most striking bands belong to the frequencies 3330 cm⁻¹, 1636 cm⁻¹ and 1050 cm⁻¹, specific to groups of compounds such as polyphenols, carbohydrates, fatty acids, and proteins. Statistical data processing revealed a good correlation between the content of flavonoids, betalain, β-carotene, ascorbic acid and polyphenols, the fermented juice having a stable antioxidant activity. Also, principal component analysis showed that there was a negative correlation between the evolution of the concentration of B vitamins and antioxidant activity. Acknowledgment: This study has been funded by the Francophone University Agency, Project Réseau régional dans le domaine de la santé, la nutrition et la sécurité alimentaire (SaIN), No. 21899/06.09.2017 at Dunarea de Jos University of Galati, and by the Sectorial Operational Programme Human Resources Development of the Romanian Ministry of Education, Research, Youth and Sports through the Financial Agreement POSDRU/159/1.5/S/132397 ExcelDOC.
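
A minimal sketch of the kind of correlation analysis reported here (bioactive compound contents versus antioxidant activity over the shelf life) is shown below, assuming the measurements are held in a pandas DataFrame. The numbers are invented placeholders, not the study's data, and the column names are illustrative.

```python
import pandas as pd

# placeholder measurements over the shelf life (not the study's data)
data = pd.DataFrame({
    "day":                  [0, 7, 14, 21, 28],
    "flavonoids_mg_100ml":  [12.1, 12.8, 13.0, 12.6, 12.4],
    "betalain_mg_100ml":    [9.5, 9.3, 9.1, 8.8, 8.6],
    "ascorbic_acid_mg":     [4.0, 3.9, 3.9, 3.8, 3.7],
    "polyphenols_mg_GAE":   [55.0, 58.0, 60.0, 59.0, 57.0],
    "antioxidant_pct_DPPH": [61.0, 63.0, 64.5, 63.8, 62.5],
})

# Pearson correlation of each compound with antioxidant activity
compounds = ["flavonoids_mg_100ml", "betalain_mg_100ml",
             "ascorbic_acid_mg", "polyphenols_mg_GAE"]
correlations = data[compounds].corrwith(data["antioxidant_pct_DPPH"])
print(correlations.round(2))
```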

Keywords: bioactive compounds, fermentation, synbiotic drink from vegetables, stability during storage

Procedia PDF Downloads 127
105 Mechanical Transmission of Parasites by Cockroaches Collected from the Urban Environment of Lahore, Pakistan

Authors: Hafsa Memona, Farkhanda Manzoor

Abstract:

Cockroaches are termed medically important pests because of their wide distribution in human habitations, including houses, hospitals, food industries and kitchens. They may harbor multiple drug-resistant pathogenic bacteria and protozoan parasites on their external surfaces, disseminate them on human food, and cause serious diseases and allergies in humans. Hence, they are regarded as mechanical vectors in human habitation due to their nocturnal activity and nutritional behavior. Viable eggs and dormant cysts of parasites can hitch a ride on cockroaches. Ova and cysts of parasitic organisms may settle into the crevices and cracks between the thorax and head. There are many fissures, clefts and crannies on a cockroach which provide sites for these organisms. This study aimed at identifying the role of cockroaches in mechanically transmitting and disseminating gastrointestinal parasites in two environmental settings, hospitals and houses, in the urban area of Lahore. In total, 250 adult cockroaches were collected from houses and hospitals by sticky traps and food-baited traps and screened for parasitic load. All cockroaches were captured during their feeding time in their natural habitat. Direct wet smears, 1% Lugol's iodine and modified acid-fast staining were used to identify the parasites on the body surfaces of the cockroaches. Two common species of cockroaches were collected from human habitations, i.e., P. americana and B. germanica. The results showed that 112 (46.8%) cockroaches harbored at least one human intestinal parasite on their body surfaces. The cockroaches from the hospital environment harboured more parasites than those from houses: 47 (33.57%) cockroaches from houses and 65 (59.09%) from hospitals were infected with parasitic organisms. Of these, 76 (67.85%) carried parasitic protozoans and 36 (32.15%) carried pathogenic and non-pathogenic intestinal parasites. P. americana harboured more parasites than B. germanica in both environments. The most common human intestinal parasites found on cockroaches included ova of Ascaris lumbricoides (giant roundworm), Trichuris trichiura (whipworm), Ancylostoma duodenale (hookworm), Enterobius vermicularis (pinworm), Taenia spp. and Strongyloides stercoralis (threadworm). The cysts of protozoan parasites including Balantidium coli, Entamoeba histolytica, C. parvum, Isospora belli, Giardia duodenalis and C. cayetanensis were isolated and identified from the cockroaches. The two sampling settings differed significantly in the parasitic load carried by cockroaches. Differences in the hygienic conditions of the environments, including human excrement disposal, the variety of habitats encountered, and indoor versus outdoor species, may account for the observed variation in the parasitic carriage rate of cockroaches among the different sites. A finding of this study is thus that cockroaches are uniformly distributed in human habitations and act as mechanical vectors of pathogenic parasites that cause common illnesses such as diarrhea and bowel disorders. This fact contributes to the epidemiological chain; therefore, control of cockroaches will significantly lessen the prevalence of illness in humans. Effective control strategies will reduce the public health burden of gastrointestinal parasites in developing countries.
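
The reported difference in carriage between the two settings (47 of 140 infected cockroaches from houses versus 65 of 110 from hospitals, derived from the percentages above) can be checked with a standard chi-square test of independence, as sketched below. The counts are taken from the abstract; the test itself is an illustration and not necessarily the exact analysis used by the authors.

```python
from scipy.stats import chi2_contingency

# infected vs. not infected cockroaches in each setting (counts from the abstract)
#               infected  not infected
table = [[47, 140 - 47],   # houses    (47 of 140, ~33.6%)
         [65, 110 - 65]]   # hospitals (65 of 110, ~59.1%)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```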

Keywords: cockroaches, health risks, hospitals, houses, parasites, protozoans, transmission

Procedia PDF Downloads 260
104 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment

Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Boubigot

Abstract:

Stringent fire safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might imply. This is so true that the fire issue is considered right from the design of the aircraft structure. Due to the incorporation of an increasing amount of polymer matrix composites in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of materials used is consequently of prime importance, as is the evaluation of their resistance to fire. Fire testing is mostly done using so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in fire zones (the ability to withstand fire for 5 min). The test consists in exposing an at least 300x300 mm² sample to an 1100°C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming, expensive and gives access to limited information in terms of the fire behavior of the materials (pass or fail test). Consequently, it can barely be used for material development purposes. In this context, the laboratory UMET, in collaboration with industrial partners, has developed horizontal and vertical small-scale instrumented fire benches for the characterization of the fire behavior of composites. The benches use smaller samples (no more than 150x150 mm²), which cuts down costs and hence increases sampling throughput. However, the main added value of our benches is the instrumentation used to collect useful information for understanding the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical setup, a complete characterization of the degradation process can be achieved via mass loss measurements and quantification of the gases released during the tests. These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal setup has been used in particular to study the performance and durability of a protective intumescent coating on 2 mm thick 2D laminates. The efficiency of this approach has been validated, and the optimized coating thickness has been determined, as well as the performance after aging. The reduction in performance after aging was attributed to the migration of some of the coating additives. The vertical setup has made it possible to investigate the degradation process of composites under fire. An isotropic and a unidirectional 4 mm thick laminate have been characterized using the bench and post-fire analyses. The mass loss measurements and the gas phase analyses of the two composites do not present significant differences, unlike the temperature profiles through the thickness of the samples. The differences have been attributed to differences in thermal conductivity as well as to delamination, which is much more pronounced for the isotropic composite (observed on the IR images). This has been confirmed by X-ray microtomography. The developed benches have proven to be valuable tools for developing fire-safe composites.

Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography

Procedia PDF Downloads 248
103 Increasing Prevalence of Multi-Allergen Sensitivities in Patients with Allergic Rhinitis and Asthma in Eastern India

Authors: Sujoy Khan

Abstract:

There is a rising concern with increasing allergies affecting both adults and children in rural and urban India. Recent report on adults in a densely populated North Indian city showed sensitization rates for house dust mite, parthenium, and cockroach at 60%, 40% and 18.75% that is now comparable to allergy prevalence in cities in the United States. Data from patients residing in the eastern part of India is scarce. A retrospective study (over 2 years) was done on patients with allergic rhinitis and asthma where allergen-specific IgE levels were measured to see the aero-allergen sensitization pattern in a large metropolitan city of East India. Total IgE and allergen-specific IgE levels were measured using ImmunoCAP (Phadia 100, Thermo Fisher Scientific, Sweden) using region-specific aeroallergens: Dermatophagoides pteronyssinus (d1); Dermatophagoides farinae (d2); cockroach (i206); grass pollen mix (gx2) consisted of Cynodon dactylon, Lolium perenne, Phleum pratense, Poa pratensis, Sorghum halepense, Paspalum notatum; tree pollen mix (tx3) consisted of Juniperus sabinoides, Quercus alba, Ulmus americana, Populus deltoides, Prosopis juliflora; food mix 1 (fx1) consisted of Peanut, Hazel nut, Brazil nut, Almond, Coconut; mould mix (mx1) consisted of Penicillium chrysogenum, Cladosporium herbarum, Aspergillus fumigatus, Alternaria alternate; animal dander mix (ex1) consisted of cat, dog, cow and horse dander; and weed mix (wx1) consists of Ambrosia elatior, Artemisia vulgaris, Plantago lanceolata, Chenopodium album, Salsola kali, following manufacturer’s instructions. As the IgE levels were not uniformly distributed, median values were used to represent the data. 92 patients with allergic rhinitis and asthma (united airways disease) were studied over 2 years including 21 children (age < 12 years) who had total IgE and allergen-specific IgE levels measured. The median IgE level was higher in 2016 than in 2015 with 60% of patients (adults and children) being sensitized to house dust mite (dual positivity for Dermatophagoides pteronyssinus and farinae). Of 11 children in 2015, whose total IgE ranged from 16.5 to >5000 kU/L, 36% of children were polysensitized (≥4 allergens), and 55% were sensitized to dust mites. Of 10 children in 2016, total IgE levels ranged from 37.5 to 2628 kU/L, and 20% were polysensitized with 60% sensitized to dust mites. Mould sensitivity was 10% in both of the years in the children studied. A consistent finding was that ragweed sensitization (molecular homology to Parthenium hysterophorus) appeared to be increasing across all age groups, and throughout the year, as reported previously by us where 25% of patients were sensitized. In the study sample overall, sensitizations to dust mite, cockroach, and parthenium were important risks in our patients with moderate to severe asthma that reinforces the importance of controlling indoor exposure to these allergens. Sensitizations to dust mite, cockroach and parthenium allergens are important predictors of asthma morbidity not only among children but also among adults in Eastern India.

Keywords: aeroallergens, asthma, dust mite, parthenium, rhinitis

Procedia PDF Downloads 171
102 Addressing the Gap in Health and Wellbeing Evidence for Urban Real Estate Brownfield Asset Management Social Needs and Impact Analysis Using Systems Mapping Approach

Authors: Kathy Pain, Nalumino Akakandelwa

Abstract:

The study explores the potential to fill a gap in health and wellbeing evidence for purposeful urban real estate asset management to make investment a powerful force for societal good. Part of a five-year programme investigating the root causes of unhealthy urban development funded by the United Kingdom Prevention Research Partnership (UKPRP), the study pilots the use of a systems mapping approach to identify drivers and barriers to the incorporation of health and wellbeing evidence in urban brownfield asset management decision-making. Urban real estate not only provides space for economic production but also contributes to the quality of life in the local community. Yet market approaches to urban land use have, until recently, insisted that neo-classical technology-driven efficient allocation of economic resources should inform acquisition, operational, and disposal decisions. Buildings in locations with declining economic performance have thus been abandoned, leading to urban decay. Property investors are recognising the inextricable connection between sustainable urban production and quality of life in local communities. The redevelopment and operation of brownfield assets recycle existing buildings, minimising embodied carbon emissions. It also retains established urban spaces with which local communities identify and regenerate places to create a sense of security, economic opportunity, social interaction, and quality of life. Social implications of urban real estate on health and wellbeing and increased adoption of benign sustainability guidance in urban production are driving the need to consider how they affect brownfield real estate asset management decisions. Interviews with real estate upstream decision-makers in the study, find that local social needs and impact analysis is becoming a commercial priority for large-scale urban real estate development projects. Evidence of the social value-added of proposed developments is increasingly considered essential to secure local community support and planning permissions, and to attract sustained inward long-term investment capital flows for urban projects. However, little is known about the contribution of population health and wellbeing to socially sustainable urban projects and the monetary value of the opportunity this presents to improve the urban environment for local communities. We report early findings from collaborations with two leading property companies managing major investments in brownfield urban assets in the UK to consider how the inclusion of health and wellbeing evidence in social valuation can inform perceptions of brownfield development social benefit for asset managers, local communities, public authorities and investors for the benefit of all parties. Using holistic case studies and systems mapping approaches, we explore complex relationships between public health considerations and asset management decisions in urban production. Findings indicate a strong real estate investment industry appetite and potential to include health as a vital component of sustainable real estate social value creation in asset management strategies.

Keywords: brownfield urban assets, health and wellbeing, social needs and impact, social valuation, sustainable real estate, systems mapping

Procedia PDF Downloads 34
101 A Resilience-Based Approach for Assessing Social Vulnerability in New Zealand's Coastal Areas

Authors: Javad Jozaei, Rob G. Bell, Paula Blackett, Scott A. Stephens

Abstract:

In the last few decades, Social Vulnerability Assessment (SVA) has been a favoured means of evaluating the susceptibility of social systems to drivers of change, including climate change and natural disasters. However, the application of SVA to inform responsive and practical strategies for dealing with uncertain climate change impacts has always been challenging, and agencies typically resort back to conventional risk/vulnerability assessment. These challenges include the complex nature of social vulnerability concepts, which influences their applicability; complications in identifying and measuring social vulnerability determinants; the transitory social dynamics in a changing environment; and the unpredictability of the scenarios of change that impact the regime of vulnerability (including contention over when these impacts might emerge). Research suggests that the conventional quantitative approaches in SVA cannot appropriately address these problems; hence, the outcomes could potentially be misleading and not fit for addressing the ongoing uncertain rise in risk. The second phase of New Zealand’s Resilience to Nature’s Challenges (RNC2) is developing a forward-looking vulnerability assessment framework and methodology that informs decision-making and policy development in dealing with changing coastal systems and accounts for the complex dynamics of New Zealand’s coastal systems (including socio-economic, environmental and cultural dynamics). RNC2 also requires the new methodology to consider plausible drivers of incremental and unknowable changes, to create mechanisms to enhance social and community resilience, and to fit New Zealand’s multi-layer governance system. This paper aims to analyse the conventional approaches and methodologies in SVA and to offer recommendations for more responsive approaches that inform adaptive decision-making and policy development in practice. The research adopts a qualitative research design to examine different aspects of conventional SVA processes, and the methods used to achieve the research objectives include a systematic review of the literature and case studies. We found that the conventional quantitative, reductionist and deterministic mindset in SVA processes, with a focus on the impacts of rapid stressors (i.e. tsunamis, floods), shows some deficiencies in accounting for the complex dynamics of social-ecological systems (SES) and the uncertain, long-term impacts of incremental drivers. The paper will focus on addressing the links between resilience and vulnerability, and suggests how resilience theory and its underpinning notions, such as the adaptive cycle, panarchy, and system transformability, could address these issues and thereby influence the perception of the vulnerability regime and its assessment processes. In this regard, it will be argued how a shift of paradigm from ‘specific resilience’, which focuses on adaptive capacity associated with the notion of ‘bouncing back’, to ‘general resilience’, which accounts for system transformability, regime shift, and ‘bouncing forward’, can deliver more effective strategies in an era characterised by ongoing change and deep uncertainty.

Keywords: complexity, social vulnerability, resilience, transformation, uncertain risks

Procedia PDF Downloads 70
100 Rapid Sexual and Reproductive Health Pathways for Women Accessing Drug and Alcohol Treatment

Authors: Molly Parker

Abstract:

Unintended pregnancy rates in Australia are amongst the highest in the developed world. Women with Substance Use Disorder often have riskier sexual behavior with nil contraceptive use and face disproportionately higher unintended pregnancies and Sexually Transmitted Infections, alongside Substance Use in Pregnancy (SUP) climbing at an alarming rate. In an inner-city Drug and Alcohol (D&A) service, significant barriers to sexual and reproductive health services have been identified, aligning with research. Rapid pathways were created for women seeking D&A treatment to be referred to Sexual and Reproductive Health services for the administration of Long-acting reversible contraception (LARC) and sexual health screening. For clients attending a D&A service, this is an opportunistic time to offer sexual and reproductive health services. Collaboration and multidisciplinary team input between D&A and sexual health and reproductive services are paramount, with rapid referral pathways being identified as the main strategy to improve access to sexual and reproductive health support for this population. With this evidence, a rapid referral pathway was created for women using the D&A service to access LARC, particularly in view of fertility often returning once stable on D&A treatment. A closed-ended survey was used for D&A staff to identify gaps in reproductive health knowledge and views of referral accessibility. Results demonstrated a lack of knowledge of contraception and appropriate referral processes. A closed-ended survey for clients was created to establish the need and access to services and to quantify data. A follow-up data collection will be reviewed to access uptake and satisfaction of the intervention from clients. Sexual health screening access was also identified as a deficit, particularly concerning due to the higher rates of STIs in this cohort. A rapid referral pathway will be undergoing implementation, reducing risks of untreated STIS both pre and post-conception. Similarly, pre and post-intervention structured surveys will be used to identify client satisfaction from the pathway. Although currently in progress, the research and pathway aim to be completed by December 2023. This research and implementation of sexual and reproductive health pathways from the D&A service have significant health and well-being benefits to clients and the wider community, including possible fetal/infancy outcomes. Women now have rapid access to sexual and reproductive health services, with the aim of reducing unplanned pregnancies, poor outcomes associated with SUP, client/staff trauma from termination of pregnancy, and client/staff trauma following the assumption of care of the child due to substance use, the financial cost for out of home care as required, the poor outcomes of untreated STIs to the fetus in pregnancy and the spread of STIs in the wider community. As evidence suggests, the implementation of a streamlined referral process is required between D&A and sexual and reproductive health services and has positive feedback from both clinicians and clients in improving care.

Keywords: substance use in pregnancy, drug and alcohol, substance use disorder, sexual health, reproductive health, contraception, long-acting reversible contraception, neonatal abstinence syndrome, FASD, sexually transmitted infections, sexually transmitted infections pregnancy

Procedia PDF Downloads 32
99 Screening for Women with Chorioamnionitis: An Integrative Literature Review

Authors: Allison Herlene Du Plessis, Dalena (R.M.) Van Rooyen, Wilma Ten Ham-Baloyi, Sihaam Jardien-Baboo

Abstract:

Introduction: Women die in pregnancy and childbirth for five main reasons: severe bleeding, infections, unsafe abortions, hypertensive disorders (pre-eclampsia and eclampsia), and medical complications including cardiac disease, diabetes, or HIV/AIDS complicated by pregnancy. In 2015, the WHO classified sepsis as the third highest cause of maternal mortality in the world. Chorioamnionitis is a clinical syndrome of intrauterine infection during any stage of pregnancy, and it refers to bacteria ascending from the vaginal canal up into the uterus, causing infection. While the incidence rates of chorioamnionitis are not well documented, complications related to chorioamnionitis are well documented, and midwives still struggle to identify this condition in time due to its complex nature. Few diagnostic methods are available in public health services due to escalated laboratory costs. Often the affordable biomarkers, such as C-reactive protein (CRP), full blood count (FBC) and white blood cell count (WBC), have low significance in diagnosing chorioamnionitis. A lack of screening impacts on the effective and timeous management of chorioamnionitis, and early identification and management of risks could help to prevent neonatal complications and reduce the subsequent series of morbidities and healthcare costs for infants who are the health foci of perinatal infections. Objective: This integrative literature review provides an overview of current best research evidence on the screening of women at risk for chorioamnionitis. Design: An integrative literature review was conducted using a systematic electronic literature search through EBSCOhost, Cochrane Online, Wiley Online, PubMed, Scopus and Google. Guidelines, research studies, and reports in English related to chorioamnionitis from 2008 up until 2020 were included in the study. Findings: After critical appraisal, 31 articles were included. About two thirds (67%) of the literature included ranked at the three highest levels of evidence (Levels I, II and III). Data extracted regarding screening for chorioamnionitis were synthesized into four themes, namely: screening by clinical signs and symptoms, screening by causative factors of chorioamnionitis, screening of obstetric history, and essential biomarkers to diagnose chorioamnionitis. Key conclusions: There are factors that can be used by midwives to identify women at risk for chorioamnionitis. However, there is a paucity of established sociological, epidemiological and behavioral factors for screening this population. Several biomarkers are available to diagnose chorioamnionitis. Increased interleukin-6 in amniotic fluid is the better indicator and strongest predictor of histological chorioamnionitis, whereas the available rapid matrix metalloproteinase-8 test requires further testing. Maternal white blood cell count (WBC) has shown poor selectivity and sensitivity, and C-reactive protein (CRP) thresholds varied among studies and are not ideal for a conclusive diagnosis of subclinical chorioamnionitis. Implications for practice: Screening of women at risk for chorioamnionitis by health care providers caring for pregnant women, including midwives, is important for diagnosis and management before complications arise, particularly in resource-constrained settings.

Keywords: chorioamnionitis, guidelines, best evidence, screening, diagnosis, pregnant women

Procedia PDF Downloads 100
98 Investigation of Attitude of Production Workers towards Job Rotation in Automotive Industry against the Background of Demographic Change

Authors: Franciska Weise, Ralph Bruder

Abstract:

Due to demographic change in Germany, with a declining birth rate and the increasing age of the population, the share of older people in society is rising. This development is also reflected in the workforce of German companies. Companies should therefore focus on improving ergonomics, especially in the area of age-related work design. The literature shows that studies on age-related work design have been carried out in the past, some of whose results have been put into practice. However, there is still a need for further research. One of the most important methods for taking the needs of an aging population into account is job rotation. This method aims at preventing or reducing health risks and inappropriate physical strain. It is conceived as a systematic change of workplaces within a group. The existing literature does not cover methods for investigating the attitudes of employees towards job rotation. However, in order to evaluate job rotation, it is essential to know the views of workers towards rotation. In addition to an investigation of attitudes, the design of rotation plays a crucial role. The sequence of activities and the rotation frequency influence the worker as well as the work result. The evaluation of preliminary talks on the shop floor showed that team speakers and foremen share a common understanding of job rotation. In practice, different varieties of job rotation exist. One important aspect is the frequency of rotation. It is possible to never rotate, to rotate more than once a day, to rotate at every break, or even more often than at every break; it depends on the opportunity or possibility for workers to rotate whenever they want to. Some challenges can be derived from the preliminary talks. For example, rotation across the whole team is not possible if a team member requires training for a new task. In order to determine the relation between the design of job rotation and the attitude towards it, a questionnaire was carried out in vehicle manufacturing. The questionnaire is employed to determine the different varieties of job rotation that exist in production, as well as the attitudes of workers towards those different frequencies of job rotation. In addition, younger and older employees, grouped into three age groups, are compared with regard to their rotation frequency and their attitudes towards rotation. Three questions are under examination. The first question is whether older employees rotate less frequently than younger employees. It is also investigated whether the frequency of job rotation and the attitude towards that frequency are interconnected. Moreover, the attitudes of the different age groups towards the frequency of rotation are examined. Up to now, 144 employees, all working in production, have taken part in the survey; 36.8% were younger than thirty, 37.5% were between thirty and forty-four, and 25.7% were above forty-five years old. The data show no difference between the three age groups in relation to the frequency of job rotation (N=139, median=4, Chi²=.859, df=2, p=.651). Most employees rotate between six and seven workplaces per day. In addition, there is a statistically significant correlation between the frequency of job rotation and the attitude towards that frequency (Spearman's rho, two-sided p=.008, correlation coefficient=.223). Fewer than four workplaces per day are not enough for the employees. The third question, concerning the differences between older and younger people who rotate in different ways and with different attitudes towards job rotation, cannot yet be answered. So far, the data show that younger people would like to rotate very often; for older people, no correlation can be found with acceptable significance. The results of the survey will be used to improve the current practice of job rotation. In addition, the discussions during the survey are expected to help sensitize the employees to rotation issues and to contribute to optimizing rotation by means of qualification and an improved design of job rotation. Together with the employees, and based on the results of the survey, standards must be found that show how to rotate in an ergonomic way while considering the attitude towards job rotation.
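
The two tests reported above (a chi-square comparison of rotation frequency across the three age groups, and a Spearman rank correlation between rotation frequency and the attitude towards it) can be reproduced in outline as follows. The data frame below is a placeholder with invented responses, not the survey data; the column names and scales are assumptions about how the questionnaire might be coded.

```python
import pandas as pd
from scipy.stats import chi2_contingency, spearmanr

# placeholder survey responses (invented), structured like the questionnaire:
# age_group in {"<30", "30-44", ">=45"}, rotation frequency as an ordinal scale,
# attitude towards that frequency on a Likert-type scale
survey = pd.DataFrame({
    "age_group": ["<30", "30-44", ">=45", "<30", "30-44", ">=45", "<30", "30-44"],
    "rotation_frequency": [4, 4, 3, 5, 4, 4, 3, 5],
    "attitude": [4, 3, 3, 5, 4, 3, 2, 5],
})

# chi-square test: does rotation frequency differ between the age groups?
contingency = pd.crosstab(survey["age_group"], survey["rotation_frequency"])
chi2, p, dof, _ = chi2_contingency(contingency)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# Spearman rank correlation: rotation frequency vs. attitude towards it
rho, p_rho = spearmanr(survey["rotation_frequency"], survey["attitude"])
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.3f}")
```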

Keywords: job rotation, age-related work design, questionnaire, automotive industry

Procedia PDF Downloads 282
97 Crustal Scale Seismic Surveys in Search for Gawler Craton Iron Oxide Cu-Au (IOCG) under Very Deep Cover

Authors: E. O. Okan, A. Kepic, P. Williams

Abstract:

Iron oxide copper gold (IOCG) deposits constitute important sources of copper and gold in Australia, especially since the discovery of the supergiant Olympic Dam deposit in 1975. They are considered to be metasomatic expressions of large crustal-scale alteration events occasioned by intrusive activity and are in most cases associated with felsic igneous rocks, commonly potassic igneous magmatism, with the deposits ranging from ~2.2-1.5 Ga in age. For the past two decades, geological, geochemical and potential-field methods have been used to identify the structures hosting these deposits, followed up by drilling. Though these methods have largely been successful for shallow targets, at greater depths their low resolution limits them to mapping only very large to gigantic deposits with sufficient contrast. As the search for ore bodies under regolith cover continues due to the depletion of near-surface deposits, there is a compelling need to develop new exploration technology to explore these deep-seated ore bodies within 1-4 km, which is the current mining depth range. The seismic reflection method represents this new technology, as it offers a distinct advantage over all other geophysical techniques because of its great depth of penetration and superior spatial resolution maintained with depth. Further, in many different geological scenarios, it offers greater ‘3D mapability’ of units within the stratigraphic boundary. Despite these superior attributes, no arguments for crustal-scale seismic surveys have been proposed because there has not been a compelling argument of economic benefit to proceed with such work. For the seismic reflection method to be used at these scales (100’s to 1000’s of square km covered), the technical risks or the survey costs have to be reduced. In addition, as most IOCG deposits have a large footprint due to their association with intrusions and large fault zones, we hypothesized that these deposits can be found mainly by looking for the seismic signatures of intrusions along prospective structures. In this study, we present two such cases: the Olympic Dam and Vulcan iron oxide copper-gold (IOCG) deposits, both located in the Gawler Craton, South Australia. Results from our 2D modelling experiments revealed that seismic reflection surveys using 20 m geophone and 40 m shot spacing are a feasible exploration tool for locating IOCG deposits, even when these are hosted in very complex structures. The migrated sections were not only able to identify and trace various layers and the complex structures but also show reflections around the edges of intrusive packages. The presence of such intrusions was clearly detected over the 100 m to 1000 m depth range without loss of resolution. The modelled seismic images match the available real seismic data and have the hypothesized characteristics; thus, the seismic method seems to be a valid exploration tool for finding IOCG deposits. We therefore propose that 2D seismic surveys are viable for IOCG exploration, as they can detect mineralised intrusive structures along known favourable corridors. This would help in reducing the exploration risk associated with locating undiscovered resources, as well as in conducting life-of-mine studies, which will enable better development decisions at the very beginning.
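
As a very reduced illustration of the forward-modelling idea (an intrusive body produces an impedance contrast whose reflection response can be predicted and compared with field records), the sketch below builds a 1D convolutional synthetic trace from an assumed velocity-density log using a Ricker wavelet. It is a toy stand-in for the 2D modelling used in the study; the layer properties, time positions and wavelet frequency are invented.

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# assumed 1D model: host rock with a denser, faster intrusive interval
dt, t_max = 0.001, 1.0                       # 1 ms sampling, 1 s two-way time
n = int(t_max / dt)
velocity = np.full(n, 5500.0)                # m/s, host rock (assumption)
density = np.full(n, 2700.0)                 # kg/m3 (assumption)
velocity[220:330], density[220:330] = 6200.0, 2950.0   # intrusion (illustrative)

# acoustic impedance and normal-incidence reflectivity series
impedance = velocity * density
reflectivity = np.zeros(n)
reflectivity[1:] = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

# convolutional synthetic trace: reflectivity * wavelet
synthetic = np.convolve(reflectivity, ricker(30.0, dt), mode="same")
print("max reflection coefficient:", float(reflectivity.max()))
```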

Keywords: crustal scale, exploration, IOCG deposit, modelling, seismic surveys

Procedia PDF Downloads 305
96 Female Entrepreneurship in the Creative Industry: The Antecedents of Their Ventures' Performance

Authors: Naoum Mylonas, Eugenia Petridou

Abstract:

Objectives: The objectives of this research are firstly, to develop an integrated model of predicting factors to new ventures performance, taking into account certain issues and specificities related to creative industry and female entrepreneurship based on the prior research; secondly, to determine the appropriate measures of venture performance in a creative industry context, drawing upon previous surveys; thirdly, to illustrate the importance of entrepreneurial orientation, networking ties, environment dynamism and access to financial capital on new ventures performance. Prior Work: An extant review of the creative industry literature highlights the special nature of entrepreneurship in this field. Entrepreneurs in creative industry share certain specific characteristics and intensions, such as to produce something aesthetic, to enrich their talents and their creativity, and to combine their entrepreneurial with their artistic orientation. Thus, assessing venture performance and success in creative industry entails an examination of how creative people or artists conceptualize success. Moreover, female entrepreneurs manifest more positive attitudes towards sectors primarily based on creativity, rather than innovation in which males outbalance. As creative industry entrepreneurship based mainly on the creative personality of the creator / artist, a high interest is accrued to examine female entrepreneurship in the creative industry. Hypotheses development: H1a: Female entrepreneurs who are more entrepreneurially-oriented show a higher financial performance. H1b: Female entrepreneurs who are more artistically-oriented show a higher creative performance. H2: Female entrepreneurs who have personality that is more creative perform better. H3: Female entrepreneurs who participate in or belong to networks perform better. H4: Female entrepreneurs who have been consulted by a mentor perform better. Η5a: Female entrepreneurs who are motivated more by pull-factors perform better. H5b: Female entrepreneurs who are motivated more by push-factors perform worse. Approach: A mixed method triangulation design has been adopted for the collection and analysis of data. The data are collected through a structured questionnaire for the quantitative part and through semi-structured interviews for the qualitative part as well. The sample is 293 Greek female entrepreneurs in the creative industry. Main findings: All research hypotheses are accepted. The majority of creative industry entrepreneurs evaluate themselves in creative performance terms rather than financial ones. The individuals who are closely related to traditional arts sectors have no EO but also evaluate themselves highly in terms of venture performance. Creative personality of creators is appeared as the most important predictor of venture performance. Pull factors in accordance with our hypothesis lead to higher levels of performance compared to push factors. Networking and mentoring are viewed as very important, particularly now during the turbulent economic environment in Greece. Implications-Value: Our research provides an integrated model with several moderating variables to predict ventures performance in the creative industry, taking also into account the complicated nature of arts and the way artists and creators define success. At the end, the findings may be used for the appropriate design of educational programs in creative industry entrepreneurship. 
This research has been co-financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Heracleitus II. Investing in knowledge society through the European Social Fund.
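
As an illustration only, and not part of the original analysis, the integrated model described above could be specified as a regression of venture performance on the hypothesized antecedents. The variable names and data below are simulated placeholders, not the study's questionnaire items.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 293  # sample size reported in the abstract
df = pd.DataFrame({
    "eo": rng.normal(size=n),                    # entrepreneurial orientation (H1a)
    "creative_personality": rng.normal(size=n),  # H2
    "networks": rng.integers(0, 2, n),           # H3
    "mentor": rng.integers(0, 2, n),             # H4
    "pull_motive": rng.integers(0, 2, n),        # H5a/H5b
})
# Simulated outcome; real data would come from the questionnaire scales.
df["performance"] = (0.3 * df.eo + 0.5 * df.creative_personality
                     + 0.2 * df.networks + 0.2 * df.mentor
                     + 0.3 * df.pull_motive + rng.normal(0, 1, n))

model = smf.ols("performance ~ eo + creative_personality + networks + mentor + pull_motive",
                data=df).fit()
print(model.summary())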

Keywords: venture performance, female entrepreneurship, creative industry, networks

Procedia PDF Downloads 241
95 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition

Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can

Abstract:

To effectively combat climate change, many countries around the world have committed to decarbonising their electricity supply, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for their application to realistic-sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% in estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
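
The integration described above can be sketched with a toy example. The code below is not the authors' implementation: the relaxed subproblem, its size, and the labelling scheme are hypothetical stand-ins that only illustrate how a classifier trained on linearly relaxed solutions could propose values for the binary variables inside a column-generation loop.

import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_lines = 8  # hypothetical number of candidate transmission reinforcements

def relaxed_subproblem(duals):
    # LP relaxation of a toy investment subproblem: 0 <= x <= 1 instead of x in {0, 1}
    c = rng.normal(size=n_lines) - duals          # toy reduced-cost vector
    res = linprog(c, bounds=[(0, 1)] * n_lines, method="highs")
    return res.x

# Offline training: pair each relaxed value with the binary value it took in the
# integer solution. Here the labels are simulated by rounding; in practice they
# would come from mixed-integer subproblems solved on historical instances.
X, y = [], []
for _ in range(300):
    x_rel = relaxed_subproblem(rng.normal(size=n_lines))
    X.extend(x_rel.reshape(-1, 1))
    y.extend((x_rel > 0.5).astype(int))
clf = LogisticRegression().fit(np.array(X), np.array(y))

# Inside column generation: solve the cheap LP, let the classifier fix the binaries,
# and fall back to the exact MIP whenever optimality cannot be certified.
x_rel = relaxed_subproblem(rng.normal(size=n_lines))
x_bin = clf.predict(x_rel.reshape(-1, 1))
print("relaxed:", np.round(x_rel, 2), "classified:", x_bin)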

Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning

Procedia PDF Downloads 53
94 Wind Turbine Scaling for the Investigation of Vortex Shedding and Wake Interactions

Authors: Sarah Fitzpatrick, Hossein Zare-Behtash, Konstantinos Kontis

Abstract:

Traditionally, the focus of horizontal axis wind turbine (HAWT) blade aerodynamic optimisation studies has been the outer working region of the blade. However, recent works seek to better understand, and thus improve upon, the performance of the inboard blade region to enhance power production, maximise load reduction and better control the wake behaviour. This paper presents the design considerations and characterisation of a wind turbine wind tunnel model devised to further the understanding and fundamental definition of horizontal axis wind turbine root vortex shedding and interactions. Additionally, the application of passive and active flow control mechanisms – vortex generators and plasma actuators – to allow for the manipulation and mitigation of unsteady aerodynamic behaviour at the blade inboard section is investigated. A static, modular-blade wind turbine model has been developed for use in the University of Glasgow’s de Havilland closed-return, low-speed wind tunnel. The model components – which comprise a half-span blade, hub, nacelle and tower – are scaled using the equivalent full-span radius, R, for appropriate Mach and Strouhal numbers, and to achieve a Reynolds number in the range of 1.7×10⁵ to 5.1×10⁵ for operational speeds up to 55 m/s. The half blade is constructed to be modular and fully dielectric, allowing for the integration of flow control mechanisms with a focus on plasma actuators. Investigations of root vortex shedding and the subsequent wake characteristics using qualitative methods – smoke visualisation, tufts and china clay flow – and quantitative methods – including particle image velocimetry (PIV), hot wire anemometry (HWA), and laser Doppler anemometry (LDA) – were conducted over a range of blade pitch angles (0 to 15 degrees) and Reynolds numbers. This allowed for the identification of shed vortical structures from the maximum chord position, the transitional region where the blade aerofoil blends into a cylindrical joint, and the blade-nacelle connection. Analysis of the trailing vorticity interactions between the wake core and the freestream shows that vortex meander and diffusion are notably affected by the Reynolds number. It is hypothesized that the shed vorticity from the blade root region directly influences and exacerbates the nacelle wake expansion in the downstream direction. As the design of the inboard blade region is, by necessity, driven by function rather than aerodynamic optimisation, a study is undertaken of the application of flow control mechanisms to manipulate the observed vortex phenomena. The designed model allows for the effective investigation of shed vorticity and wake interactions with a focus on the accurate geometry of a root region representative of small to medium power commercial HAWTs. The studies undertaken allow for an enhanced understanding of the interplay of shed vortices and their subsequent effect in the near and far wake. This highlights areas of interest within the inboard blade area for the potential use of passive and active flow control devices intended to produce a more desirable wake quality in this region.
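
For illustration only, the similarity parameters used in the scaling above can be computed from their textbook definitions. The chord length and shedding frequency below are assumed values, not the model's actual dimensions; they are chosen so that the resulting Reynolds numbers fall in the 10⁵ range quoted in the abstract.

# Minimal sketch with assumed values: Reynolds and Strouhal numbers of a scaled blade section.
rho, mu = 1.225, 1.81e-5       # air density [kg/m^3] and dynamic viscosity [Pa s] near 15 C
chord = 0.15                   # assumed chord of the inboard model section [m]
f_shed = 40.0                  # assumed vortex-shedding frequency [Hz]

for V in (17.0, 35.0, 55.0):   # tunnel speeds up to 55 m/s
    Re = rho * V * chord / mu  # Reynolds number based on chord
    St = f_shed * chord / V    # Strouhal number based on chord
    print(f"V = {V:4.1f} m/s   Re = {Re:.2e}   St = {St:.3f}")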

Keywords: vortex shedding, wake interactions, wind tunnel model, wind turbine

Procedia PDF Downloads 207
93 Novel Aspects of Merger Control Pertaining to Nascent Acquisition: An Analytical Legal Research

Authors: Bhargavi G. Iyer, Ojaswi Bhagat

Abstract:

It is often noted that the value of a novel idea lies in its successful implementation. However, successful implementation requires the nurturing and encouragement of innovation. Nascent competitors are a true representation of innovation in any given industry. A nascent competitor is an entity whose prospective innovation poses a future threat to an incumbent dominant competitor. While a nascent competitor benefits in several ways, it is also significantly exposed and is at greater risk of facing the brunt of exclusionary practices and abusive conduct by dominant incumbent competitors in the industry. This research paper aims to explore the risks and threats faced by nascent competitors and to analyse the benefits they accrue as well as the advantages they proffer to the economy, through an analytical, critical study. In such competitive market environments, a rise in acquisitions of nascent competitors by dominant incumbents is observed. Therefore, this paper examines the dynamics of nascent acquisition. Further, this paper hopes to specifically delve into the role of antitrust bodies in regulating nascent acquisition. This paper also aspires to deal with the question of how to distinguish harmful from harmless acquisitions in order to facilitate ideal enforcement practice. This paper proposes mechanisms of scrutiny in order to ensure healthy market practices and efficient merger control in the context of nascent acquisitions. Taking into account the scope and nature of the topic, as well as the resources available and accessible, a combination of the methods of doctrinal research and analytical research was employed, utilising secondary sources in order to assess and analyse the subject of research. While legally evaluating the Killer Acquisition theory and the Nascent Potential Acquisition theory, this paper seeks to critically survey the precedents and instances of nascent acquisitions. In addition to affording a compendious account of the legislative framework and regulatory mechanisms in the United States, the United Kingdom, and the European Union, it hopes to suggest an internationally practicable legal foundation for domestic legislation and enforcement to adopt. This paper hopes to appreciate the complexities and uncertainties with respect to nascent acquisitions and attempts to suggest viable and plausible policy measures in antitrust law. It additionally attempts to examine the effects of such nascent acquisitions upon the consumer and the market economy. This paper weighs the argument of shifting the evidentiary burden onto the merging parties in order to improve merger control and regulation, and expounds on the strengths and weaknesses of this approach. It is posited that an effective combination of factual, legal, and economic analysis of both the acquired and acquiring companies has the potential to improve ex post and ex ante merger review outcomes involving nascent companies, thus preventing anti-competitive practices. This paper concludes with an analysis of the possibility and feasibility of industry-specific identification of anti-competitive nascent acquisitions and the implementation of measures accordingly.

Keywords: acquisition, antitrust law, exclusionary practices, merger control, nascent competitor

Procedia PDF Downloads 133
92 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights

Authors: Olga Kokoulina

Abstract:

Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise to provide increased economic efficiency and fuel solutions for pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited avenues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, the overall right is derived from the provisions of the General Data Protection Regulation (‘GDPR’) ensuring data subjects' right of access and mandating data controllers' obligation to provide relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the specific provision on automated decision-making in the GDPR, the debates mainly focus on the efficacy and exact scope of the 'right to explanation'. In essence, the underlying logic of the argued remedy lies in a transparency imperative. Allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims at pushing the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed on the transparency requirement and the right to access by intellectual property law, namely by copyright and trade secrets. It is asserted that trade secrets, in particular, present an often insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions to such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets law and the introduction of a strict liability regime for non-transparent decision-making.

Keywords: algorithms, public interest, trade secrets, transparency

Procedia PDF Downloads 103
91 Breast Cancer Therapy-Related Cardiac Dysfunction Identifying in Kazakhstan: Preliminary Findings of the Cohort Study

Authors: Saule Balmagambetova, Zhenisgul Tlegenova, Saule Madinova

Abstract:

Cardiotoxicity associated with anticancer treatment, now defined as cancer therapy-related cardiac dysfunction (CTRCD), accompanies cancer patients and negatively impacts their survivorship. Currently, a cardio-oncological service is being created in Kazakhstan based on the provisions of the European Society of Cardiology (ESC) Cardio-Oncology Guidelines. Within the framework of a pilot project, a cohort study on CTRCD conditions was initiated at the Aktobe Cancer Center. One hundred twenty-eight newly diagnosed breast cancer patients started on doxorubicin and/or trastuzumab were recruited. Echocardiography with global longitudinal strain (GLS) assessment, a biomarker panel (cardiac troponin I (cTnI), brain natriuretic peptide (BNP), myeloperoxidase (MPO), galectin-3 (Gal-3), D-dimer, C-reactive protein (CRP)), and other tests were performed at baseline and every three months. Patients were stratified by cardiovascular risk according to the ESC recommendations and allocated into risk groups during the pre-treatment visit. Of them, 10 (7.8%) patients were assigned to the high-risk group, 48 (37.5%) to the medium-risk group, and 70 (54.7%) to the low-risk group. High-risk patients have been receiving cardioprotective treatment from the outset. Patients were also divided by treatment: 83 (64.8%) in the anthracycline-based group, 13 (10.2%) in the trastuzumab-only group, and 32 (25%) in the mixed anthracycline/trastuzumab group. Mild symptomatic CTRCD was revealed and treated in 2 (1.6%) participants, and a mild asymptomatic variant in 26 (20.5%). Mild asymptomatic CTRCD is defined as a left ventricular ejection fraction (LVEF) ≥50% together with a further relative reduction in GLS by >15% from baseline and/or a further rise in cardiac biomarkers. The listed biomarkers were assessed longitudinally in repeated-measures linear regression models over 12 months of observation. The associations between changes in biomarkers and CTRCD, and between changes in biomarkers and LVEF, were evaluated. Analysis by risk group revealed statistically significant differences in baseline LVEF (p = 0.001), BNP (p = 0.0075), and Gal-3 (p = 0.0073). No statistically significant baseline differences were found between treatment groups. After 12 months of follow-up, only LVEF values showed a statistically significant difference by risk group (p = 0.0011). When assessing the temporal changes in the studied parameters across all treatment groups, there were statistically significant changes from visit to visit for LVEF (p = 0.003), GLS (p = 0.0001), BNP (p < 0.00001), MPO (p < 0.0001), and Gal-3 (p < 0.0001). No moderate or strong correlations were found between the biomarker values and LVEF, or between biomarkers and GLS. Among the biomarkers themselves, a moderate, close to strong correlation was established between cTnI and D-dimer (r = 0.65, p < 0.05). The dose-dependent effect of anthracyclines was confirmed: the cumulative dose has a moderate negative impact on GLS values (r = −0.31 across all treatment groups, p < 0.05). The present study identified myeloperoxidase as a promising biomarker of cardiac dysfunction in the mixed anthracycline/trastuzumab treatment group: the hazard of CTRCD increased with baseline MPO (HR per doubling 1.21; 95% CI 1.01–1.73; p = 0.041). Increases in BNP were also associated with CTRCD (HR per doubling 1.22; 95% CI 1.12–1.69). No cases of chemotherapy discontinuation due to cardiotoxic complications have been recorded.
Further observations are needed to gain insight into the ability of biomarkers to predict CTRCD onset.
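
As a purely illustrative sketch (simulated data and hypothetical column names, not the study dataset), a hazard ratio per doubling of a baseline biomarker such as MPO can be obtained by entering the biomarker on a log2 scale in a Cox proportional hazards model:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 128                                                    # cohort size from the abstract
mpo = rng.lognormal(mean=3.0, sigma=0.6, size=n)           # baseline MPO, arbitrary units
log2_mpo = np.log2(mpo)                                    # per-doubling scale
hazard = 0.02 * np.exp(0.19 * log2_mpo)                    # simulated true effect
time = rng.exponential(1.0 / hazard)                       # simulated time to CTRCD [months]
event = (time < 12).astype(int)                            # CTRCD within 12 months of follow-up

df = pd.DataFrame({"time": np.minimum(time, 12.0),
                   "event": event,
                   "log2_mpo": log2_mpo})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()   # exp(coef) for log2_mpo is the HR per doubling of MPO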

Keywords: breast cancer, chemotherapy, cardiotoxicity, Kazakhstan

Procedia PDF Downloads 62
90 Qualitative Research on German Household Practices to Ease the Risk of Poverty

Authors: Marie Boost

Abstract:

Despite activation policies, the forced personal initiative to step out of unemployment, and a generally prosperous economic situation, poverty and financial hardship play a crucial role in the daily lives of many families in Germany. In 2015, ~16 million persons (20.2% of the German population) were at risk of poverty or social exclusion. This is illustrated by an unemployment rate of 13.3% in the research area, located in East Germany. Despite this high number of persons living in vulnerable households, we know little about how they manage to stabilize their lives or even overcome poverty – apart from solely relying on welfare state benefits or entering a stable, well-paid job. Most of them are struggling in precarious living circumstances, switching from one or several short-term, low-paid jobs into self-employment or unemployment, sometimes accompanied by welfare state benefits. Hence, insecurity and uncertain future expectations form a crucial part of their lives. Within the EU-funded project “RESCuE”, resilient practices of vulnerable households were investigated in nine European countries. Approximately 15 expert interviews with policy makers, representatives from welfare state agencies, NGOs and charity organizations, and 25 household interviews were conducted in each country. The project aims to find out more about the chances and conditions of social resilience. The research is based on the triangulation of biographical narrative interviews, followed by participatory photo interviews asking the household members to portray their typical everyday life. The presentation focuses on the explanatory strength of this mixed-methods approach in order to show the potential of household practices to overcome financial hardship. The methodological combination allows an in-depth analysis of the families' and households' everyday living circumstances, including their poverty and employment situation, whether formal or informal. Active household budgeting practices, such as saving and consumption practices, are based on subsistence or do-it-yourself work. Especially through the photo interviews, the importance of inherent cultural and tacit knowledge becomes obvious, as the pictures show typical practices such as cultivating and gathering fruits and vegetables or going fishing. One of the central findings is the multiple purposes of these practices: they contribute to easing the financial burden through consumption reduction, and they strengthen social ties, as they are mostly conducted with close friends or family members. In general, non-commodified practices are found to be re-commodified and to contribute to easing financial hardship, e.g. through the use of commons, barter trade or simple mutual exchange (gift exchange). These practices can substitute external purchases and reduce expenses or even generate a small income. Mixing different income sources is found to be the most likely way out of poverty within the context of a precarious labor market. But these resilient household practices take their toll, as they are highly preconditioned, and many persons put themselves at risk of overstressing themselves. Thus, both the potentials and the risks of resilient household practices are reflected in the presentation.

Keywords: consumption practices, labor market, qualitative research, resilience

Procedia PDF Downloads 202
89 The Impact of Improved Grain Storage Technology on Marketing Behaviour and Livelihoods of Maize Farmers: A Randomized Controlled Trial in Ethiopia

Authors: Betelhem M. Negede, Maarten Voors, Hugo De Groote, Bart Minten

Abstract:

Farmers in Ethiopia produce most of their own food during one agricultural season per year. Therefore, they need to use on-farm storage technologies to bridge the lean season and benefit from price arbitrage. Maize stored in traditional storage bags has no protection from insects and molds, leading to high storage losses. In Ethiopia, access to and use of modern storage technologies are still limited, preventing farmers from benefiting from local maize price fluctuations. We used a randomized controlled trial among 871 maize farmers to evaluate the impacts of Purdue Improved Crop Storage (PICS) bags, also known as hermetic bags, on storage losses and especially on behavioral changes with respect to consumption, marketing, and income among maize farmers in Ethiopia. This study builds upon the limited previous experimental research that has tried to understand farmers' grain storage and post-harvest losses and to identify the mechanisms behind the persistence of these challenges. Our main hypothesis is that access to PICS bags allows farmers to increase production, storage and maize income; to extend the length of maize storage; to reduce maize post-harvest losses; and to improve their food security. Our results show that even though farmers received only three PICS bags, representing 10 percent of their total maize stored, they delayed their maize sales by two weeks. However, we find no treatment effect on maize income, suggesting that an arbitrage of two weeks is too small. We also do not find any reduction in storage losses, because farmers reacted by selling early and by using cheap and readily available but potentially harmful storage chemicals. Looking at heterogeneous treatment effects between highland and lowland villages, we find a decrease in the percentage of maize stored by 4 percent in the highland villages. This confirms that location-specific factors, such as agro-ecology and proximity to markets, are important factors that influence whether and how much of the harvest a farmer stores. These findings highlight the benefits of hermetic storage bags in allowing farmers to engage in inter-temporal arbitrage and in reducing potential health risks from storage chemicals. The main policy recommendation that emanates from our study is that reducing post-harvest losses throughout the whole value chain is an important pathway to food and income security in Sub-Saharan Africa (SSA). However, future storage-loss interventions with hermetic storage technologies should take into account the agro-ecology of the study area and quantify storage losses beyond farmers' self-reported losses, for example with the count-and-weigh method. Finally, studies on hermetic storage technologies indicate positive impacts on post-harvest losses and on improving food security, but the adoption and use of these technologies is currently still low in SSA. Therefore, future work on scaling up hermetic bags should consider why farmers use PICS bags only to store grain for consumption, which is usually related to a safety-first approach or to a lack of incentives (no higher price for maize not treated with chemicals) and the absence of grain quality checks.
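
For illustration only (simulated data and hypothetical variable names, not the trial dataset), the intent-to-treat effect and its heterogeneity across agro-ecological zones could be estimated with a regression of the following form:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 871                                           # sample size from the abstract
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),             # household assigned PICS bags
    "highland": rng.integers(0, 2, n),            # agro-ecological zone indicator
})
# Simulated outcome: share of harvest stored, with a small negative effect in highlands.
df["share_stored"] = (0.5 - 0.04 * df.treated * df.highland
                      + rng.normal(0, 0.1, n))

model = smf.ols("share_stored ~ treated * highland", data=df).fit(cov_type="HC1")
print(model.summary().tables[1])   # the interaction term captures the heterogeneous effect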

Keywords: arbitrage, PICS hermetic bags, post-harvest storage loss, RCT

Procedia PDF Downloads 108
88 Self-Selected Intensity and Discounting Rates of Exercise in Comparison with Food and Money in Healthy Adults

Authors: Tamam Albelwi, Robert Rogers, Hans-Peter Kubis

Abstract:

Background: Exercise is widely acknowledged as a highly important health behavior, which reduces risks related to lifestyle diseases such as type 2 diabetes and cardiovascular disease. However, exercise adherence is low in high-risk groups, and a sedentary lifestyle is more the norm than the exception. Expressed reasons for exercise participation are often based on delayed outcomes related to health threats and benefits, but also on enjoyment. That exercise can be perceived as rewarding is well established in the animal literature, but the evidence in humans is sparse. Additionally, how stably a reward is valued across time delays is an important question influencing decision-making in favor of or against a behavior; for the modality of exercise, this has not been examined before. We therefore investigated the discounting of pre-established, self-selected exercise compared with the established rewards of food and money, using a computer-based discounting paradigm. We hypothesized that exercise would be discounted like an established reward (food and money), and we expected its discounting rate to be similar to that of a consumable reward like food. Additionally, we expected that individuals' characteristics such as preferred intensity, physical activity and body characteristics would be associated with discount rates. Methods: 71 participants took part in four sessions. The sessions were designed to let participants select their preferred exercise intensity on a treadmill. Participants were asked to adjust their speed to optimize pleasantness over an exercise period of up to 30 minutes, while heart rate and pleasantness ratings were measured. In further sessions, the established exercise intensity was modified and tested for perceptual validity. In the last exercise session, ratings of perceived exertion were measured at the preferred intensity level. Furthermore, participants filled in questionnaires related to physical activity, mood, craving, and impulsivity, and answered choice questions on a bespoke computer task to establish discounting rates for their preferred exercise (kex), their favorite food (kfood) and a value-matched amount of money (kmoney). Results: Participants' self-selected preferred speed was 5.5 ± 2.24 km/h, at a heart rate of 120.7 ± 23.5 bpm and a rating of perceived exertion of 10.13 ± 2.06. This shows that participants preferred a light exercise intensity with low to moderate cardiovascular strain based on perceived pleasantness. Computer assessment of discounting rates revealed that exercise was discounted quickly, like a consumable reward, with no significant difference between kfood and kex (kfood = 0.322 ± 0.263; kex = 0.223 ± 0.203). However, kmoney (0.080 ± 0.02) was significantly lower than the rates for exercise and food. Moreover, significant associations were found between preferred speed and kex (r = −0.302) and between physical activity levels and preferred speed (r = 0.324). The outcomes show that participants perceived and discounted self-selected exercise like an established reward (food and money), but that it was discounted more like a consumable reward. Moreover, exercise discounting was quicker in individuals who preferred lower speeds and were less physically active. This may indicate that, in a choice conflict between exercise and food, the delay inherent in exercise (because of distance) might disadvantage exercise as the chosen behavior, particularly in sedentary people. Conclusion: Exercise can be perceived as a reward and is discounted quickly over time, like food. A pleasant exercise experience is associated with low to moderate cardiovascular and perceptual strain.
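
As a purely illustrative sketch (synthetic indifference points, assuming the common hyperbolic form V = A / (1 + kD)), a discounting rate such as kex or kfood could be estimated as follows:

import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(D, k, A=100.0):
    # Subjective value of a reward of magnitude A delayed by D days, discount rate k
    return A / (1.0 + k * D)

delays = np.array([1, 7, 30, 90, 180], dtype=float)          # delays in days
true_k = 0.22                                                # in the range reported for kex
values = hyperbolic(delays, true_k) + np.random.default_rng(3).normal(0, 2, delays.size)

k_hat, _ = curve_fit(lambda D, k: hyperbolic(D, k), delays, values, p0=[0.1])
print(f"estimated k = {k_hat[0]:.3f}")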

Keywords: delay discounting, exercise, temporal discounting, time perspective

Procedia PDF Downloads 246
87 Micro-Oculi Facades as a Sustainable Urban Facade

Authors: Ok-Kyun Im, Kyoung Hee Kim

Abstract:

We live in an era that faces the global challenges of climate change and resource depletion. With rapid urbanization and growing energy consumption in the built environment, building facades become ever more important in architectural practice and environmental stewardship. Furthermore, the building facade undergoes complex dynamics of social, cultural, environmental and technological change. Kinetic facades have drawn the attention of architects, designers, and engineers in the field of adaptable, responsive and interactive architecture since the 1980s. Materials and building technologies have gradually evolved to address the technical implications of kinetic facades. The kinetic facade is becoming an independent building system, transforming the design methodology toward sustainable building solutions. Accordingly, there is a need for a new design methodology to guide the design of a kinetic facade and evaluate its sustainable performance. The research objectives are two-fold: first, to establish a new design methodology for kinetic facades and, second, to develop a micro-oculi facade system and assess its performance using the established design method. The design approach to the micro-oculi facade comprises 1) facade geometry optimization and 2) dynamic building energy simulation. The facade geometry optimization utilizes a multi-objective optimization process, aiming to balance quantitative and qualitative performance to address the sustainability of the built environment. The dynamic building energy simulation was carried out using the EnergyPlus and Radiance simulation engines with scripted interfaces. The micro-oculi office was compared with an office tower with a glass facade in accordance with ASHRAE 90.1-2013 to understand its energy efficiency. The micro-oculi facade is constructed with an array of circular frames, each attached to a pair of micro-shades called a micro-oculus. The micro-oculi are encapsulated between two glass panes to protect the kinetic mechanisms and ensure longevity. The micro-oculus incorporates rotating gears that transmit power to adjacent micro-oculi to minimize the number of mechanical parts. The micro-oculus rotates around its center axis with a step size of 15° depending on the sun's position, while maximizing daylighting potential and view-outs. A 2 ft by 2 ft prototype was built to identify operational challenges and material implications of the micro-oculi facade. In this research, a systematic design methodology was proposed that integrates the multiple objectives of kinetic facade design criteria and whole-building energy performance simulation within a holistic design process. This design methodology is expected to encourage multidisciplinary collaboration between designers and engineers on issues of energy efficiency, daylighting performance and user experience during the design phases. The preliminary energy simulation indicated that, compared to a glass facade, the micro-oculi facade showed energy savings due to its improved thermal properties, daylighting attributes, and dynamic solar performance across the day and the seasons. It is expected that the micro-oculi facade provides a cost-effective, environmentally friendly, sustainable, and aesthetically pleasing alternative to glass facades. Recommendations for future studies include lab testing to validate the simulated data on the energy and optical properties of the micro-oculi facade. A 1:1 performance mock-up of the micro-oculi facade could provide an in-depth understanding of long-term operability and reveal new development opportunities for urban facade applications.
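
As a simple illustration (an assumed control rule, not the authors' actual mechanism), snapping a sun-tracking angle to the 15° step size of the micro-oculus drive could look like this:

# Minimal sketch: quantise a desired shading angle, derived from solar azimuth,
# to the 15-degree step size of the micro-oculus rotation.
def oculus_angle(solar_azimuth_deg, step_deg=15.0):
    # Return the oculus rotation (0-360 deg) closest to the solar azimuth.
    return (round(solar_azimuth_deg / step_deg) * step_deg) % 360.0

# Example: a morning-to-afternoon sweep of solar azimuths
for az in (95.0, 132.5, 181.0, 228.4, 262.9):
    print(f"solar azimuth {az:6.1f} deg -> oculus step {oculus_angle(az):5.1f} deg")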

Keywords: energy efficiency, kinetic facades, sustainable architecture, urban facades

Procedia PDF Downloads 234
86 Long-Term Exposure Assessments for Cooking Workers Exposed to Polycyclic Aromatic Hydrocarbons and Aldehydes Containing in Cooking Fumes

Authors: Chun-Yu Chen, Kua-Rong Wu, Yu-Cheng Chen, Perng-Jy Tsai

Abstract:

Cooking fumes are known to contain polycyclic aromatic hydrocarbons (PAHs) and aldehydes, and some of these compounds have been proven carcinogenic or possibly carcinogenic to humans. Considering their chronic health effects, long-term exposure data are required for assessing cooking workers' lifetime health risks. Previous exposure assessment studies, due to both time and cost constraints, were mostly based on cross-sectional data. Therefore, establishing long-term exposure data has become an important issue for conducting health risk assessments for cooking workers. An approach is proposed in this study. Here, the generation rates of both PAHs and aldehydes from a cooking process were determined by placing a sampling train directly under the exhaust fan, under both the total-enclosure condition and the normal operating condition. Subtracting the concentration collected under the latter (representing the hood-collected concentration) from that of the former (representing the total emitted concentration) yields the fugitive emitted concentration. These data were further converted into generation rates based on the flow rates specified for the exhaust fan. The determinations of the above generation rates were conducted in a testing chamber with a selected cooking process (deep-frying chicken nuggets in 3 L of peanut oil at 200°C). The sampling train installed under the exhaust fan consisted of an IOM inhalable sampler with a glass fiber filter for collecting particle-phase PAHs, followed by an XAD-2 tube for gas-phase PAHs. The same train was also used to sample aldehydes, but fitted with a DNPH pre-coated filter followed by a 2,4-DNPH cartridge for collecting particle-phase and gas-phase aldehydes, respectively. PAH and aldehyde samples were analyzed by GC/MS-MS (Agilent 7890B) and HPLC-UV (HITACHI L-7100), respectively. The obtained generation rates of both PAHs and aldehydes were applied to the near-field/far-field exposure model to estimate the exposures of cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration). For validation purposes, PAH and aldehyde sampling was conducted simultaneously using the same sampling train at both the near-field and far-field sites of the testing chamber. The sampling results, together with a mixed-effect model, were used to calibrate the estimated near-field/far-field exposures. In the present study, the obtained emission rates were further converted into emission factors for both PAHs and aldehydes according to the amount of cooking oil consumed. Applying long-term cooking oil consumption records, the emission rates for both PAHs and aldehydes were determined, and the long-term exposure databanks for cooks (the estimated near-field concentrations) and helpers (the estimated far-field concentrations) were then established. Results show that the proposed approach was adequate for determining the generation rates of both PAHs and aldehydes under various fan exhaust flow rate conditions. The estimated near-field/far-field exposures, though significantly different from those obtained in the field, could be calibrated using the mixed-effect model. Finally, the established long-term databank could provide a useful basis for conducting long-term exposure assessments for cooking workers exposed to PAHs and aldehydes.
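
For illustration, the steady-state form of the near-field/far-field (two-zone) model mentioned above reduces to two simple expressions; the parameter values below are assumptions, not measured values from the study:

# Minimal sketch (assumed values): steady-state near-field/far-field model for a
# contaminant emitted at rate G, giving the cook's (near-field) and helper's
# (far-field) concentrations.
G = 5.0e-3      # assumed emission rate of a PAH marker [mg/min]
Q = 20.0        # assumed general ventilation rate of the kitchen [m^3/min]
beta = 5.0      # assumed interzonal airflow between near and far field [m^3/min]

C_far = G / Q                 # far-field (helper) concentration [mg/m^3]
C_near = G / Q + G / beta     # near-field (cook) concentration [mg/m^3]
print(f"far-field:  {C_far * 1000:.3f} ug/m^3")
print(f"near-field: {C_near * 1000:.3f} ug/m^3")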

Keywords: aldehydes, cooking oil fumes, long-term exposure assessment, modeling, polycyclic aromatic hydrocarbons (PAHs)

Procedia PDF Downloads 116
85 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing

Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May

Abstract:

Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the Liquefied Natural Gas (LNG) industry's evolution because of the tremendous technical challenges present at all stages of the supply chain, from production to liquefaction to transport. Each stage is designed using predictions of the mixture's properties, such as density, viscosity, surface tension, heat capacity and phase behaviour as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture's critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble points and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures up to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these, the heptane content was up to 10 mol %. Viscosity was measured using a vibrating-wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of advanced engineering models frequently used for predicting the thermophysical properties of mixtures relevant to LNG processing. In many cases, the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely used models to be identified. The data enable the development of new or improved models to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by allowing deficiencies in theoretical descriptions and calculations to be identified.
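
To illustrate the kind of engineering-model prediction that such measurements are benchmarked against, the sketch below evaluates a Peng-Robinson density for pure methane only; this is a simplification (the mixtures studied in the work require a mixture model), and the critical constants are textbook values.

import numpy as np

R = 8.314462618                                          # J/(mol K)
Tc, Pc, omega, M = 190.56, 4.599e6, 0.011, 16.043e-3     # methane: Tc [K], Pc [Pa], acentric factor, molar mass [kg/mol]

def pr_density(T, P):
    # Peng-Robinson equation of state solved for the vapour compressibility factor Z
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    coeffs = [1.0, -(1.0 - B), A - 2.0 * B - 3.0 * B**2, -(A * B - B**2 - B**3)]
    Z = max(z.real for z in np.roots(coeffs) if abs(z.imag) < 1e-9)   # largest real root = vapour
    return P * M / (Z * R * T)                                        # mass density [kg/m^3]

print(f"PR density of CH4 at 300 K, 10 MPa: {pr_density(300.0, 10.0e6):.1f} kg/m^3")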

Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models

Procedia PDF Downloads 250
84 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" in the face of the complexity of artificial intelligence systems and their effects. Then, this article proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fragmented among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and the distance between, the actors: decision-making is split between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Accountability is also confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors that self-learn via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the non-ethical-neutrality of algorithmic systems, which are inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizing recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for the induced effects and guides the incorporation of modifications into the system to rectify its deviations and drifts. In conclusion, this contribution serves as a starting point for contemplating the accountability of "artificial intelligence" systems despite the evident ethical implications and potential deviations. 
Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 38
83 The Impacts of New Digital Technology Transformation on Singapore Healthcare Sector: Case Study of a Public Hospital in Singapore from a Management Accounting Perspective

Authors: Junqi Zou

Abstract:

As one of the world's most tech-ready countries, Singapore has initiated the Smart Nation plan to harness the full power and potential of digital technologies to transform the way people live and work, through more efficient government and business processes, making the economy more productive. The key evolutions of digital technology transformation in healthcare, and the increasing deployment of the Internet of Things (IoT), Big Data, AI/cognitive, Robotic Process Automation (RPA), Electronic Health Record Systems (EHR), Electronic Medical Record Systems (EMR) and Warehouse Management Systems (WMS) in the most recent decade, have significantly stepped up the move towards an information-driven healthcare ecosystem. The advances in information technology not only bring benefits to patients but also act as a key force in changing management accounting in the healthcare sector. The aim of this study is to investigate the impacts of digital technology transformation on Singapore's healthcare sector from a management accounting perspective. Adopting a Balanced Scorecard (BSC) analysis approach, this paper conducted an exploratory case study of a newly launched Singapore public hospital, which has been recognized as amongst the most digitally advanced healthcare facilities in the Asia-Pacific region. Specifically, this study gains insights into how the new technology is changing healthcare organizations' management accounting from four perspectives under the Balanced Scorecard approach: 1) the financial perspective, 2) the customer (patient) perspective, 3) the internal processes perspective, and 4) the learning and growth perspective. Based on a thorough review of archival records from the government and the public, and on interviews with the hospital's CIO, this study finds improvements across all four perspectives of the Balanced Scorecard framework as follows: 1) Learning and growth perspective: the Government (Ministry of Health) works with the hospital to open up multiple training pathways for health professionals that upgrade and develop new IT skills among the healthcare workforce to support the transformation of healthcare services. 2) Internal process perspective: the hospital achieved digital transformation through Project OneCare to integrate clinical, operational, and administrative information systems (e.g., EHR, EMR, WMS, EPIB, RTLS) that enable the seamless flow of data and the implementation of a JIT system to help the hospital operate more effectively and efficiently. 3) Customer perspective: the fully integrated EMR suite enhances the patient's experience by achieving the 5 Rights (Right Patient, Right Data, Right Device, Right Entry and Right Time). 4) Financial perspective: cost savings are achieved from improved inventory management and effective supply chain management, and the use of process automation also results in a reduction of manpower and logistics costs. To summarize, these improvements identified under the Balanced Scorecard framework confirm the success of integrating advanced ICT to enhance a healthcare organization's customer service, productivity, efficiency, and cost savings. Moreover, the Big Data generated from this integrated EMR system can be particularly useful in aiding the management control system to optimize decision-making and strategic planning. To conclude, the new digital technology transformation has extended the usefulness of management accounting to both financial and non-financial dimensions, reaching new heights in the area of healthcare management.

Keywords: balanced scorecard, digital technology transformation, healthcare ecosystem, integrated information system

Procedia PDF Downloads 131
82 Industrial Production of the Saudi Future Dwelling: A Saudi Volumetric Solution for Single Family Homes, Leveraging Industry 4.0 with Scalable Automation, Hybrid Structural Insulated Panels Technology and Local Materials

Authors: Bandar Alkahlan

Abstract:

The King Abdulaziz City for Science and Technology (KACST) created the Saudi Future Dwelling (SFD) initiative to identify, localize and commercialize a scalable home manufacturing technology suited to deployment across the Kingdom of Saudi Arabia (KSA). This paper outlines the journey, the creation of the international project delivery team, the product design, the selection of the process technologies, and the outcomes. A target was set to remove 85% of the construction and finishing processes from the building site, as these activities can be completed more efficiently in a factory environment. Integral to the SFD initiative, therefore, is the successful industrialization of the home building process using appropriate technologies, automation, robotics, and manufacturing logistics. The technologies proposed for the SFD housing system are designed to be energy efficient, economical and fit for purpose from a Saudi cultural perspective, and to minimize the use of concrete, relying mainly on locally available Saudi natural materials derived from the local resource industries. To this end, the building structure comprises a hybrid system of structural insulated panels (SIP) combined with a light-gauge steel framework, manufactured in a large-format panel system. The paper traces the investigative steps completed by the project team during the selection process. As part of the SFD project, a pathway was mapped out that included a proof-of-concept prototype housing module and the set-up and commissioning of a lab-factory complete with all production machinery and equipment necessary to simulate a full-scale production environment. The prototype housing module was used to validate and inform current and future product design as well as manufacturing process decisions. A description of the prototype design and manufacture is provided, along with the valuable lessons derived from the build and how these results were used to enhance the SFD project. The industrial engineering concepts and the lab-factory detailed design and layout are described in the paper, along with the shop-floor I.T. management strategy. Special attention was paid to showcasing all technologies within the lab-factory as part of the engagement strategy with private investors, to leverage the SFD project with large-scale factories throughout the Kingdom. A detailed analysis is included of the process surrounding the design, specification, and procurement of the manufacturing machinery, equipment, and logistical manipulators required to produce the SFD housing modules. The manufacturing machinery comprised a combination of standardized and bespoke equipment from a wide range of international suppliers. The paper describes the selection process, pre-ordering trials and studies, and, in some cases, the requirement for additional research and development by the equipment suppliers in order to achieve the SFD objectives. A set of conclusions is drawn describing the results achieved thus far, along with a list of recommended ongoing operational tests, enhancements, research, and development aimed at achieving full-scale engagement with private-sector investment and the roll-out of the SFD project across the Kingdom.

Keywords: automation, dwelling, manufacturing, product design

Procedia PDF Downloads 98