Search results for: European economic area
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15696

1056 Improving Predictions of Coastal Benthic Invertebrate Occurrence and Density Using a Multi-Scalar Approach

Authors: Stephanie Watson, Fabrice Stephenson, Conrad Pilditch, Carolyn Lundquist

Abstract:

Spatial data detailing both the distribution and density of functionally important marine species are needed to inform management decisions. Species distribution models (SDMs) have proven helpful in this regard; however, models often focus only on species occurrences derived from spatially expansive datasets and lack the resolution and detail required to inform regional management decisions. Boosted regression trees (BRT) were used to produce high-resolution SDMs (250 m) at two spatial scales predicting probability of occurrence, abundance (count per sample unit), density (count per km²) and uncertainty for seven coastal seafloor taxa that vary in habitat usage and distribution, to examine prediction differences and implications for coastal management. We investigated whether small-scale, regionally focussed models (82,000 km²) can provide improved predictions compared to data-rich national-scale models (4.2 million km²). We explored the variability in predictions across model type (occurrence vs abundance) and model scale to determine whether specific taxa models or model types are more robust to geographical variability. National-scale occurrence models correlated well with broad-scale environmental predictors, resulting in higher AUC (area under the receiver operating characteristic curve) and deviance explained scores; however, they tended to overpredict in the coastal environment and lacked spatially differentiated detail for some taxa. Regional models had lower overall performance, but for some taxa, spatial predictions were more differentiated at a localised ecological scale. National density models were often spatially refined and highlighted areas of ecological relevance, producing more useful outputs than regional-scale models. The utility of a two-scale approach aids the selection of the optimal combination of models to create a spatially informative density model, as results contrasted for specific taxa between model type and scale.
However, it is vital that robust predictions of occurrence and abundance are generated as inputs for the combined density model, since areas that do not spatially align between models can be discarded. This study demonstrates the variability in SDM outputs created over different geographical scales and highlights implications and opportunities for managers utilising these tools for regional conservation, particularly in data-limited environments.
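The two-stage occurrence-times-abundance combination described above can be sketched with boosted trees. The following is a minimal illustration, not the authors' code: it uses scikit-learn's gradient boosting on synthetic data (all variable names, predictors and response values are hypothetical) to combine a probability-of-occurrence model with a presence-conditional abundance model into a density estimate.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic environmental predictors (e.g. depth, temperature) for 500 sites.
X = rng.normal(size=(500, 2))
# Occurrence depends on predictor 0; abundance (where present) on predictor 1.
presence = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
abundance = np.where(presence == 1, np.exp(1 + 0.5 * X[:, 1]), 0.0)

# Stage 1: boosted-tree occurrence model (probability of presence).
occ = GradientBoostingClassifier(n_estimators=100, max_depth=2).fit(X, presence)

# Stage 2: boosted-tree abundance model fitted on presence records only.
mask = presence == 1
abn = GradientBoostingRegressor(n_estimators=100, max_depth=2).fit(X[mask], abundance[mask])

# Combined density surface: P(occurrence) x E[abundance | presence].
density = occ.predict_proba(X)[:, 1] * abn.predict(X)
print(density.shape)  # one density estimate per site
```

In practice each stage would be tuned and cross-validated separately, which is why misaligned predictions between the two models can force areas to be discarded.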

Keywords: Benthic ecology, spatial modelling, multi-scalar modelling, marine conservation.

Procedia PDF Downloads 54
1055 Smart BIM Documents - the Development of the Ontology-Based Tool for Employer Information Requirements (OntEIR), and its Transformation into SmartEIR

Authors: Shadan Dwairi

Abstract:

Defining proper requirements is one of the key factors for a successful construction project. Although many attempts have been made to assist in identifying requirements, this area remains underdeveloped. In Building Information Modelling (BIM) projects, the Employer Information Requirements (EIR) is the fundamental requirements document and a necessary ingredient in achieving a successful BIM project. The provision of a full and clear EIR is essential to achieving BIM Level-2. As defined by PAS 1192-2, the EIR is a “pre-tender document that sets out the information to be delivered and the standards and processes to be adopted by the supplier as part of the project delivery process”. It also notes that the “EIR should be incorporated into tender documentation to enable suppliers to produce an initial BIM Execution Plan (BEP)”. The importance of an effective definition of the EIR lies in its contribution to better productivity during the construction process in terms of cost and time, in addition to improving the quality of the built asset. Proper and clear information is a key aspect of the EIR, both in terms of the information it contains and, more importantly, the information the client receives at the end of the project that will enable the effective management and operation of the asset, where typically about 60%-80% of the cost is spent. This paper reports on the research done in developing the Ontology-based tool for Employer Information Requirements (OntEIR). OntEIR has proven able to produce a full and complete set of EIRs, which ensures that the client's information needs for the final model delivered by BIM are clearly defined from the beginning of the process. It also reports on the work being done to transform OntEIR into a smart tool for defining Employer Information Requirements (smartEIR).
smartEIR extends the OntEIR tool, enabling it to develop custom EIRs tailored to the project type, project requirements, and client capabilities. The initial idea behind smartEIR is to move away from the notion that “one EIR fits all”. smartEIR utilizes the links made in OntEIR, creating a 3D matrix that transforms it into a smart tool. The OntEIR tool is based on the OntEIR framework, which utilizes both ontology and the decomposition of goals to elicit and extract the complete set of requirements needed for a full and comprehensive EIR. A new categorisation system for requirements is also introduced in the framework and tool, which facilitates the understanding and enhances the clarification of the requirements, especially for novice clients. Findings of the evaluation of the tool, conducted with experts in the industry, showed that the OntEIR tool contributes towards the effective and efficient development of EIRs that provide a better understanding of the information requirements as requested by BIM, and supports the production of a complete BIM Execution Plan (BEP) and a Master Information Delivery Plan (MIDP).

Keywords: building information modelling, employer information requirements, ontology, web-based, tool

Procedia PDF Downloads 110
1054 Articulating the Colonial Relation, a Conversation between Afropessimism and Anti-Colonialism

Authors: Thomas Compton

Abstract:

As decolonialism becomes an important topic in political theory, the rupture between the colonized and the colonist relation has lost attention. Focusing on the anti-colonial activist Mahdi Amel, we shall consider his attention to the permanence of the colonial relation and how it preempts Frank Wilderson’s formulation of (white) culturally necessary anti-Black violence. Both projects draw attention away from empirical accounts of oppression, instead focusing on the structural relation which precipitates them. As Amel says that we should stop thinking of the ‘underdeveloped’ as beyond the colonial relation, Wilderson says we should stop thinking of Black rights as having surpassed the role of the slave. However, Amel moves beyond his idol Althusser’s structuralism toward a formulation of the colonial relation as a source of domination. Our analysis will take a Lacanian turn in considering how this non-relation was formulated as a relation, and how this space of negativity became an ideological opportunity for colonial domination. Wilderson’s work shall problematise this as we conclude with his criticism of structural accounts for their failure to consider how Black social death exists as more than necessity, but as a site of white desire. Amel, a Lebanese activist and scholar (re)discovered by Hicham Safieddine, argues colonialism is more than the theft of land; it is instead a privatization of collective property and a form of investment which (re)produces the status of the capitalist in spaces ‘outside’ the market. Although Amel was a true Marxist-Leninist, who exposited the economic determinacy of the colonial mode of production, we are reading this account through Alenka Zupančič’s reformulation of the ‘invisible hand job of the market’. Amel points to the signifier ‘underdeveloped’ as buttressed on a pre-colonial epistemic break, as the Western investor (debt collector) sees in the (post?)colony a narcissistic image.
However, the colony can never become a site of class conflict, as the workers are not unified but exist between two countries. In industry, they are paid in colonial subjectivisation, the promise of market (self-)pleasure; at home, they are refugees. They are not, as Wilderson states, in the permanent social death of the slave, but they are less than the white worker. This is formulated as citizen (white), non-citizen (colonized), anti-citizen (Black/slave). Here we may also think of how indentured Indians were used as instruments of colonial violence. Wilderson’s aphorism “there is no analogy to anti-Black violence” lays bare his fundamental opposition between colonial and specifically anti-Black violence. It is not only that the debt collector, landowner, or other owners of production pleasure themselves as if their hand is invisible. The absolute negativity between colony and colonized provides a new frontier for desire, the development of a colonial mode of production: an invention inside the colonial structure that is generative of class substitution. We shall explore how Amel ignores the role of the slave, and how Wilderson forecloses the history of African anti-colonialism.

Keywords: afropessimism, fanon, marxism, postcolonialism

Procedia PDF Downloads 131
1053 AI-Based Information System for Hygiene and Safety Management of Shared Kitchens

Authors: Jongtae Rhee, Sangkwon Han, Seungbin Ji, Junhyeong Park, Byeonghun Kim, Taekyung Kim, Byeonghyeon Jeon, Jiwoo Yang

Abstract:

The shared kitchen is a concept that transfers the value of the sharing economy to the kitchen. It is a type of kitchen equipped with cooking facilities that allows multiple companies or chefs to share time and space and use it jointly. These shared kitchens provide economic benefits and convenience, such as reduced investment costs and rent, but also increase safety management risks, such as cross-contamination of food ingredients. Therefore, to manage the safety of food ingredients and finished products in a shared kitchen where several entities jointly use the kitchen and handle various types of food ingredients, it is critical to manage the following: the freshness of food ingredients, user hygiene and safety, and cross-contamination of cooking equipment and facilities. In this study, we propose a machine learning-based system for hygiene safety and cross-contamination management, which are highly difficult to manage. User clothing management and user access management, which are most relevant to the hygiene and safety of shared kitchens, are solved through a machine learning-based methodology, and cutting board usage management, which is most relevant to cross-contamination management, is implemented as an integrated safety management system based on artificial intelligence. First, to prevent cross-contamination of food ingredients, we use images collected through a real-time camera to determine whether the food ingredients match a given cutting board based on a real-time object detection model, YOLOv7. To manage the hygiene of user clothing, we use a camera-based facial recognition model to recognize the user, and a real-time object detection model to determine whether a sanitary hat and mask are worn. In addition, to manage access for users qualified to enter the shared kitchen, we utilize a machine learning-based signature recognition module.
By comparing the pairwise distance between the contract signature and the signature at the time of entrance to the shared kitchen, access permission is determined through a pre-trained signature verification model. These machine learning-based safety management tasks are integrated into a single information system, and each result is managed in an integrated database. Through this, users are warned of safety hazards through the tablet PC installed in the shared kitchen, and managers can track the causes of sanitary and safety accidents. As a result of the system integration analysis, real-time safety management services can be provided continuously by artificial intelligence, and machine learning-based methodologies are used for the integrated safety management of shared kitchens that allow dynamic contracts among various users. By solving this problem, we were able to secure the feasibility and safety of the shared kitchen business.
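The signature-based access check described above amounts to a distance threshold on embeddings. The following is a hypothetical sketch, not the authors' system: `verify_signature`, the embedding vectors and the threshold value are all invented for illustration, and a real deployment would obtain the embeddings from a pre-trained signature verification model.

```python
import numpy as np

def verify_signature(reference_vec, probe_vec, threshold=0.5):
    """Grant access if the embedding distance between the contract
    signature and the entrance signature falls below a threshold."""
    dist = np.linalg.norm(np.asarray(reference_vec) - np.asarray(probe_vec))
    return dist < threshold, dist

# Hypothetical embeddings as produced by a pre-trained signature model.
contract_sig = np.array([0.10, 0.80, 0.30])
entrance_sig = np.array([0.12, 0.79, 0.28])  # same signer, small drift
forged_sig   = np.array([0.90, 0.10, 0.70])  # different signer

ok, d1 = verify_signature(contract_sig, entrance_sig)
bad, d2 = verify_signature(contract_sig, forged_sig)
print(ok, bad)  # True False
```

The threshold would in practice be calibrated on a validation set of genuine and forged signature pairs to balance false acceptances against false rejections.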

Keywords: artificial intelligence, food safety, information system, safety management, shared kitchen

Procedia PDF Downloads 47
1052 Gap between Knowledge and Behaviour in Recycling Domestic Solid Waste: Evidence from Manipal, India

Authors: Vidya Pratap, Seena Biju, Keshavdev A.

Abstract:

In the educational town of Manipal (located in southern India), households dispose of their waste without segregation. Mixed wastes (organic, inorganic and hazardous items) are collected either by private collectors or by the local municipal body in trucks and taken to dump yards. These collectors select certain recyclables from the collected trash and sell them to scrap merchants to earn some extra money. Rag pickers play a major role in picking up cardboard boxes, glass bottles and milk sachets from dump yards and public areas, and scrap iron from construction sites, for recycling. In keeping with the Indian Prime Minister’s mission of Swachh Bharat (A Clean India), the local municipal administration is taking efforts to ensure segregation of domestic waste at source. With this in mind, each household in a residential area in Manipal was given two buckets, for wet and dry wastes (wet waste referred to organic waste, while dry waste included recyclable and hazardous items). A study was conducted in this locality covering a cluster of 145 households to assess the residents’ knowledge of recyclable, organic and hazardous items commonly disposed of by households. Another objective of this research was to evaluate the extent to which the residents actually dispose of their waste appropriately. Questionnaires were self-administered to a member of each household, with the assistance of individuals speaking the local language whenever needed. Respondents’ knowledge of whether an item was organic, inorganic or hazardous was captured through a questionnaire containing a list of 50 common items. Their behaviour was captured by asking how they disposed of these items. Results show that more than 70% of respondents are aware that banana and orange peels, potato skin, egg shells and dried leaves are organic; similarly, more than 70% of them consider newspapers, notebooks and printed paper recyclable.
Less than 65% of respondents are aware that plastic bags, covers and bottles are recyclable. However, the respondents’ recycling behaviour is less impressive: fewer than 35% of respondents recycle cardboard boxes, milk sachets and glass bottles. Unfortunately, since plastic items like plastic bags, covers and bottles are not accepted by scrap merchants, they are not recycled. This study shows that the local municipal authorities must find ways to recycle plastic into products, alternative fuels, etc.

Keywords: behaviour, knowledge, plastic waste management, recyclables

Procedia PDF Downloads 161
1051 Using the Micro Computed Tomography to Study the Corrosion Behavior of Magnesium Alloy at Different pH Values

Authors: Chia-Jung Chang, Sheng-Che Chen, Ming-Long Yeh, Chih-Wei Wang, Chih-Han Chang

Abstract:

Introduction and motivation: In recent years, magnesium alloys have come into use as biodegradable medical materials. Magnesium is an essential element in the body and is efficiently excreted by the kidneys. Furthermore, the mechanical properties of magnesium alloys are the closest to those of human bone. However, in some cases a magnesium alloy corrodes so quickly that it releases hydrogen on the surface of the implant. The other corrosion product is the hydroxide ion, which can significantly increase the local pH value. Both situations may have adverse effects on local cell functions. On the other hand, magnesium alloys currently corrode too fast to maintain the function of an implant until the tissue has healed. Therefore, much recent research on magnesium alloys has focused on controlling the corrosion rate. The in vitro corrosion behavior of magnesium alloys is affected by many factors, and the pH value is one of them. In this study, we examine the influence of the pH value on the corrosion behavior of a magnesium alloy by micro-CT (micro computed tomography) and other instruments. Material and methods: In the first step, we make guiding plates for specimens of magnesium alloy AZ91 by rapid prototyping. The guiding plates serve as a reference for the degradation of the specimen, so that we can determine the position of the specimens in the CT images; they also simplify the degradation conditions. In the next step, we prepare solutions with different pH values and immerse the specimens in them to start the corrosion test. CT images, surface photographs and weight are measured every twelve hours. Results: The primary results of the test confirm that CT imaging can be used to quantify the corrosion behavior of magnesium alloy. Moreover, we observe that corrosion always starts from erosion points, possibly due to defects such as dislocations and voids with high strain energy in the material.
We will process the raw data into mass loss (ML) and corrosion rate values from the CT images, surface photographs and weight measurements in the near future. As a simple prediction, the pH value and the degradation rate will be negatively correlated, and we aim to find the equation relating the pH value to the corrosion rate. We also ran a simple test to simulate the change of the pH value in a local region; in this test, the pH value rose to 10 in a short time. Conclusion: As a biodegradable implant in areas of the human body with stagnating body fluid flow, a magnesium alloy can increase local pH values and release hydrogen, both of which may damage human cells. The purpose of this study is to find the equation relating the pH value to the corrosion rate. After that, we will look for ways to overcome the limitations of medical magnesium alloys.
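Converting measured mass loss into a corrosion rate is commonly done with a gravimetric formula of the kind standardized in ASTM G31. The following is a minimal sketch, not the authors' analysis: the density used is that of pure magnesium (1.74 g/cm³; AZ91 differs slightly) and the specimen values are hypothetical.

```python
def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3=1.74):
    """Gravimetric corrosion rate (ASTM G31 style).
    K = 8.76e4 yields mm/year when mass loss is in g, area in cm^2,
    exposure time in hours, and density in g/cm^3."""
    K = 8.76e4
    return (K * mass_loss_g) / (area_cm2 * hours * density_g_cm3)

# Hypothetical example: 10 mg lost from a 2 cm^2 specimen over 7 days (168 h).
rate = corrosion_rate_mm_per_year(0.010, 2.0, 168)
print(round(rate, 2))  # 1.5
```

CT-derived volume loss can be converted the same way by substituting volume × density for the weighed mass loss, which is what makes the CT and gravimetric measurements directly comparable.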

Keywords: magnesium alloy, biodegradable materials, corrosion, micro-CT

Procedia PDF Downloads 439
1050 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field

Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed

Abstract:

Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue, since they are already heavily used in processed food. It is in this context that we have chosen to work on a model system composed of a fibrous protein (gelatin)/anionic polysaccharide (sodium carboxymethylcellulose) mixture. Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications. The functional properties of this anionic polysaccharide can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Collagen's complex synthesis results in an extracellular assembly with several levels of organization: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers, and each level corresponds to an organization recognized by the cellular and metabolic system. The formation of a gelatin gel involves the triple-helical folding of denatured collagen chains; this gel has been the subject of numerous studies, and it is now known that its properties depend only on the rate of triple helices forming the network. Chemical modification of this system is well controlled.
Observing the dynamics of the triple helix may be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Gelatin is central to many industrial processes; understanding and analyzing the molecular dynamics induced by the triple helix in the transitions of gelatin can therefore have great economic importance in many fields, especially food. The goal is to understand the possible mechanisms involved, depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the helix are strongly influenced by the nature of the medium. Our goal is to minimize as far as possible the changes in the helix structure, in order to keep the gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the rate of helicity of the gelatin.
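The rate of helicity is typically estimated from polarimetry by linear interpolation of the measured specific rotation between the limiting values of the fully coiled and fully helical states. A sketch under assumed, purely illustrative limiting rotations (the true limits depend on wavelength, temperature and solvent, and are not given in the abstract):

```python
def helix_fraction(alpha_obs, alpha_coil=-137.0, alpha_helix=-350.0):
    """Fraction of residues in triple helices, estimated by linear
    interpolation of the observed specific rotation between the coil
    and fully helical (collagen-like) limits. The default limits are
    illustrative placeholders, not measured values."""
    return (alpha_obs - alpha_coil) / (alpha_helix - alpha_coil)

# An observed rotation midway between the two limits gives 50% helicity.
x = helix_fraction(-243.5)
print(x)  # 0.5
```

Tracking this fraction over time or across NaCMC concentrations is what allows the polarimetric data to report on the stability of the gelatin helix in the mixtures.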

Keywords: gelatin, sodium carboxymethylcellulose, interaction gelatin-NaCMC, the rate of helicity, polarimetry

Procedia PDF Downloads 290
1049 Ethicality of Algorithmic Pricing and Consumers’ Resistance

Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos

Abstract:

Over the past few years, firms have witnessed a massive increase in sophisticated algorithmic deployment, which has become quite pervasive in modern society. With the wide availability of data for retailers, the ability to track consumers using algorithmic pricing has become an integral option on online platforms. As more companies transform their businesses and rely on massive technological advancement, algorithmic pricing systems have attracted attention and seen wide adoption, with many accompanying benefits and challenges. With the overall aim of increasing organizational profits, algorithmic pricing is becoming a sound option by enabling suppliers to cut costs, allowing better services, improving efficiency and product availability, and enhancing overall consumer experiences. The adoption of algorithms in retail has been pioneered and widely discussed in the literature across varied fields, including marketing, computer science, engineering, economics, and public policy. What is more alarming today, however, is the limited understanding of this technology and its associated ethical influence on consumers’ perceptions and behaviours. Indeed, due to algorithmic ethical concerns, consumers are in some instances reluctant to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can still achieve consumer acceptance of such technologies while minimizing the ethical transgressions that accompany their deployment. As research in marketing and consumer behavior on this topic remains modest, the current research advances the literature on algorithmic pricing, pricing ethics, consumers’ perceptions, and price fairness.
With its empirical focus, this paper aims to contribute to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, while measuring their relative effects on consumers’ behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both businesses’ overall performance and consumers’ wellbeing. By allowing more transparent pricing systems, businesses can harness ethical strategies that foster consumers’ loyalty and extend their post-purchase behaviour. Thus, by finding the correct balance of pricing and the right measures, whether using dynamic or personalized pricing (or both), managers can approach consumers more ethically while taking their expectations and responses into critical consideration.

Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality

Procedia PDF Downloads 69
1048 Protecting Human Health under International Investment Law

Authors: Qiang Ren

Abstract:

In the past 20 years, under the high standard of international investment protection, there have been numerous cases of investors challenging host countries' measures to protect human health. Examples include investment disputes triggered by the Argentine government's measures related to human health and the quality and price of drinking water under the North American Free Trade Agreement. Another example is Philip Morris v. Australia: in 2010, the Australian government announced the passage of the Plain Packaging of Cigarettes Act to address the threat of smoking to public health. In order to take advantage of the investment treaty protection between Hong Kong and Australia, Philip Morris Asia acquired Philip Morris Australia in February 2011 and initiated investment arbitration under the treaty before the passage of the Act in July 2011. Philip Morris claimed that the Act constituted indirect expropriation and a violation of fair and equitable treatment, and claimed 4.16 billion US dollars in compensation. Fortunately, the case ended at the admissibility stage and did not reach the merits. Generally, even if the host country raises a human health defense, most arbitral tribunals will rule that the host country revoke the corresponding policy and pay large compensation in accordance with the clauses of the bilateral investment treaty protecting the rights of investors. The significant imbalance between the rights and obligations of host states and investors in international investment treaties undermines the ability of host states to act in pursuit of human health and social interests beyond economic interests. This squeeze on the nation's public policy space, and the disregard for the human health costs of investors' activities, raises the need to include human health in investment rulemaking.
The current international investment law system that emphasizes investor protection fails to fully reflect the requirements of the host country for the healthy development of human beings and even often brings negative impacts to human health. At a critical moment in the reform of the international investment law system, in order to achieve mutual enhancement of investment returns and human health development, human health should play a greater role in influencing and shaping international investment rules. International investment agreements should not be limited to investment protection tools but should also be part of national development strategies to serve sustainable development and human health. In order to meet the requirements of the new sustainable development goals of the United Nations, human health should be emphasized in the formulation of international investment rules, and efforts should be made to shape a new generation of international investment rules that meet the requirements of human health and sustainable development.

Keywords: human health, international investment law, Philip Morris v. Australia, investor protection

Procedia PDF Downloads 156
1047 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multi-project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared to a current of 1 µA for a conventional PD with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V, and the responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics used to enhance light-matter interaction and construct various photonic devices. The disk geometry strongly and circularly confines light into an ultra-small volume in the form of whispering gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain materials both spatially and spectrally. Compared to microrings, the micro-disk removes the inner boundaries to enable even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low power (≈ fJ) athermal Si modulator has been demonstrated at a bit rate of 25 Gbit/s by confining both photons and electrically-driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on a microdisk with a radius of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser or other passive optical applications, the microdisk must be designed very carefully to excite only the fundamental mode, since a microdisk usually supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue because the local light absorption is mode insensitive: light power carried by all modes is expected to be converted into photo-current.
Another benefit of using a microdisk is that the power circulation inside avoids the introduction of any reflector. A complete simulation model taking all involved materials into account is established to study the promise of microdisk structures for photodetectors using the finite difference time domain (FDTD) method. Judging from the current preliminary data, directions to further improve the device performance are also discussed.
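The reported responsivity of about 0.6 A/W can be related to an external quantum efficiency via η = R·h·c/(q·λ). A short check, assuming a 1550 nm operating wavelength (the abstract does not state the measurement wavelength, so this value is an assumption):

```python
h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s
q = 1.602e-19  # elementary charge, C

def quantum_efficiency(responsivity_A_per_W, wavelength_m):
    """External quantum efficiency from responsivity: eta = R*h*c/(q*lambda)."""
    return responsivity_A_per_W * h * c / (q * wavelength_m)

# 0.6 A/W is the reported responsivity; 1550 nm is an assumed telecom wavelength.
eta = quantum_efficiency(0.6, 1550e-9)
print(round(eta, 2))  # 0.48
```

An efficiency below unity is expected here, since the small disk volume trades some absorption length for the low dark current and compact footprint the abstract highlights.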

Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors

Procedia PDF Downloads 388
1046 Effect of Accelerated Aging on Antibacterial and Mechanical Properties of SEBS Compounds

Authors: Douglas N. Simoes, Michele Pittol, Vanda F. Ribeiro, Daiane Tomacheski, Ruth M. C. Santana

Abstract:

Thermoplastic elastomer (TPE) compounds are used in a wide range of applications, like home appliances, automotive components, medical devices, footwear, and others. These materials are susceptible to microbial attack, which causes cracking of the polymer chains. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of TPE largely used in domestic appliances like refrigerator seals (gaskets), bath mats and sink squeegees. Moisture present in some areas (such as the shower area and sink), in addition to organic matter, provides favorable conditions for microbial survival and proliferation, contributing to the spread of diseases besides reducing the product life cycle due to biodegradation. Zinc oxide (ZnO) has been studied as an alternative antibacterial additive due to its biocidal effect. It is important to know the influence of these additives on the properties of the compounds, both at the beginning and during the life cycle. In that sense, the aim of this study was to evaluate the effect of accelerated oven aging on the antibacterial and mechanical properties of ZnO-loaded SEBS-based TPE compounds. Two different commercial zinc oxides, named WR and Pe, were used at a proportion of 1%. A compound with no antimicrobial additive (standard) was also tested. The compounds were prepared using a co-rotating twin screw extruder (L/D ratio of 40/1 and 16 mm screw diameter). The extrusion parameters were kept constant for all materials; the screw rotation rate was set at 226 rpm, with a temperature profile from 150 to 190 ºC. Test specimens were prepared using an injection molding machine at 190 ºC. The Standard Test Method for Rubber Property—Effect of Liquids was applied in order to simulate the exposure of TPE samples to detergent ingredients during service. For this purpose, the ZnO-loaded TPE samples were immersed in a 3.0% w/v (neutral) detergent solution and aged in an oven at 70 °C for 7 days.
The compounds were characterized by changes in mechanical properties (hardness and tensile properties) and mass. The Japanese Industrial Standard (JIS) Z 2801:2010 was applied to evaluate antibacterial properties against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli). The microbiological tests showed reductions of up to 42% in E. coli and up to 49% in S. aureus populations in non-aged samples. Variations in elongation and hardness values were observed with the addition of zinc oxide. The changes in tensile strength at rupture and in mass were not significant between non-aged and aged samples.
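The percentage reductions quoted above follow from comparing viable cell counts of treated and control samples. A minimal sketch, using hypothetical colony-forming-unit (CFU) counts chosen only to reproduce the reported figures, not the study's raw data:

```python
def percent_reduction(control_cfu, treated_cfu):
    """Population reduction of the treated sample relative to the control."""
    return 100.0 * (control_cfu - treated_cfu) / control_cfu

# Hypothetical colony counts chosen to match the reported reductions
e_coli_reduction = percent_reduction(1.0e6, 5.8e5)    # about 42%
s_aureus_reduction = percent_reduction(1.0e6, 5.1e5)  # about 49%
```

Note that JIS Z 2801 itself reports an antibacterial activity value on a logarithmic scale; the percentage form above is simply the way the abstract expresses the same comparison.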

Keywords: antimicrobial, domestic appliance, SEBS, zinc oxide

Procedia PDF Downloads 232
1045 Tracing a Timber Breakthrough: A Qualitative Study of the Introduction of Cross-Laminated-Timber to the Student Housing Market in Norway

Authors: Marius Nygaard, Ona Flindall

Abstract:

The Palisaden student housing project was completed in August 2013 and was, with its eight floors, Norway’s tallest timber building at the time of completion. It was the first time cross-laminated-timber (CLT) was utilized at this scale in Norway. The project was the result of a concerted effort by a newly formed management company to establish CLT as a sustainable and financially competitive alternative to conventional steel and concrete systems. The introduction of CLT onto the student housing market proved so successful that by 2017 more than 4000 individual student residences will have been built using the same model of development and construction. The aim of this paper is to identify the key factors that enabled this breakthrough for CLT. It is based on an in-depth study of a series of housing projects and the role of the management company that both instigated and enabled this shift of CLT from the margin to the mainstream. Specifically, it will look at how a new building system was integrated into a marketing strategy that identified a market potential within the existing structure of the construction industry and within the economic restrictions inherent to student housing in Norway. It will show how a key player established a project model that changed both the patterns of cooperation and the information basis for decisions. Based on qualitative semi-structured interviews with managers, contractors and the interdisciplinary teams of consultants (architects, structural engineers, acoustical experts, etc.), this paper will trace the introduction, expansion and evolution of CLT-based building systems in the student housing market. It will show how the project management firm’s position in the value chain enabled them to function both as a liaison between contractor and client, and between contractor and producer, a position that allowed them to improve the flow of information.
This ensured that CLT was handled on equal terms with other structural solutions in the project specifications, enabling realistic pricing and risk evaluation. Secondly, this paper will describe and discuss how the project management firm established and interacted with a growing network of contractors, architects and engineers to pool expertise and broaden the knowledge base across Norway’s regional markets. Finally, it will examine the role of the client, the building typology, and the industrial and technological factors in achieving this breakthrough for CLT in the construction industry. This paper gives an in-depth view of the progression of a single case rather than a broad description of the state of the art of large-scale timber building in Norway. However, this type of study may offer insights that are important to the understanding not only of specific markets but also of how new technologies should be introduced in large, well-established industries.

Keywords: cross-laminated-timber (CLT), industry breakthrough, student housing, timber market

Procedia PDF Downloads 202
1044 Modeling the Relation between Discretionary Accrual Earnings Management, International Financial Reporting Standards and Corporate Governance

Authors: Ikechukwu Ndu

Abstract:

This study examines the econometric modeling of the relation between discretionary accrual earnings management, International Financial Reporting Standards (IFRS), and certain corporate governance factors for listed Nigerian non-financial firms. Although discretionary accrual earnings management is a well-known global problem that adversely affects users of financial statements, its relationship with IFRS and corporate governance has been neither adequately researched nor systematically investigated in Nigeria. This dearth of research has made it difficult for academics, practitioners, standard-setting bodies, regulators and international bodies to achieve a clear understanding of how discretionary accrual earnings management relates to IFRS and certain corporate governance characteristics. To the author’s best knowledge, this is the first study to date to make research contributions that significantly add to the literature on discretionary accrual earnings management and its relation with corporate governance and IFRS in the Nigerian context. A comprehensive review is undertaken of the literature on discretionary total accrual earnings management, IFRS, and certain corporate governance characteristics, as well as of the data, models, methodologies, and different estimators used in the study. Secondary financial statement, IFRS, and corporate governance data are sourced from the Bloomberg database and the published financial statements of Nigerian non-financial firms for the period 2004 to 2016. The methodology uses both the total and working capital accrual bases. The study has a number of interesting preliminary findings. First, there is a negative relationship between the level of discretionary accrual earnings management and the adoption of IFRS.
However, this relationship does not appear to be statistically significant. Second, there is a significant negative relationship between the size of the board of directors and discretionary accrual earnings management. Third, separation of the CEO and chairman roles does not constrain earnings management, suggesting a tendency to preserve relationships, personal connections, and bonded friendships between the CEO, chairman, and executive directors. Fourth, there is a significant negative relationship between discretionary accrual earnings management and the use of a Big Four firm as auditor. Fifth, including shareholders in the audit committee leads to a reduction in discretionary accrual earnings management. Sixth, the debt and return on assets (ROA) variables are significant and positively related to discretionary accrual earnings management. Finally, the company size variable, measured by the log of assets, is surprisingly not statistically significant, indicating that Nigerian companies of all sizes engage in discretionary accrual earnings management. In conclusion, this study provides key insights that enable a better understanding of the relationship between discretionary accrual earnings management, IFRS, and corporate governance in the Nigerian context. The results should be of interest to academics, practitioners, regulators, governments, international bodies and other parties involved in policy setting and economic development in the areas of financial reporting, securities regulation, accounting harmonization, and corporate governance.
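In studies of this kind, discretionary accruals are commonly estimated as the residuals from a Jones-type regression of scaled total accruals on non-discretionary determinants. The abstract does not specify its exact estimator, so the model form, coefficients, and data below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # synthetic firm-year observations
lag_assets = rng.uniform(1e8, 1e9, n)     # lagged total assets
d_rev = rng.normal(0, 1e7, n)             # change in revenues
ppe = rng.uniform(1e7, 1e8, n)            # gross property, plant & equipment

# Jones-type regressors, scaled by lagged assets as is standard
X = np.column_stack([1 / lag_assets, d_rev / lag_assets, ppe / lag_assets])
beta_true = np.array([5e4, 0.05, -0.07])  # assumed coefficients
total_accruals = X @ beta_true + rng.normal(0, 0.01, n)

# OLS fit; the residuals proxy for discretionary accruals
beta_hat, *_ = np.linalg.lstsq(X, total_accruals, rcond=None)
discretionary = total_accruals - X @ beta_hat
```

The working-capital variant mentioned in the abstract would follow the same pattern with working-capital accruals as the dependent variable and the PPE term dropped.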

Keywords: discretionary accrual earnings management, earnings manipulation, IFRS, corporate governance

Procedia PDF Downloads 119
1043 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Körber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to pose several challenges for designers coping with manufacturing defects on site. Wrinkling, e.g., is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges are posed by the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composite forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the best processing conditions that can yield optimized positioning of the fibers upon forming (or robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and long computational times are among the notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique through the application of the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations for fabric material handling and forming processes.
Therefore, this method can enhance the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components. This enables designers to virtually develop, test, and optimize their processes based on either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments.
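To illustrate the particle-to-grid / grid-to-particle cycle that distinguishes MPM from FE, here is a minimal one-dimensional elastic-bar sketch. It is not the authors' simulation: the material constants, grid resolution, linear shape functions, and dissipative PIC velocity transfer are assumptions chosen for brevity:

```python
import numpy as np

E, rho = 1.0e4, 1.0          # assumed Young's modulus and density
Lbar, n_cells = 1.0, 20      # bar length and background grid cells
dx = Lbar / n_cells
nodes = np.linspace(0.0, Lbar, n_cells + 1)

n_p = 2 * n_cells                          # two material points per cell
xp = (np.arange(n_p) + 0.5) * (Lbar / n_p) # particle positions
vp = 0.1 * np.sin(np.pi * xp / Lbar)       # initial velocity field
mp = np.full(n_p, rho * Lbar / n_p)        # particle masses
Fp = np.ones(n_p)                          # 1-D deformation gradient
Vp0 = np.full(n_p, Lbar / n_p)             # initial particle volumes
dt = 0.2 * dx / np.sqrt(E / rho)           # a fraction of the CFL limit

def step(xp, vp, Fp):
    m_g = np.zeros(nodes.size)
    mv_g = np.zeros(nodes.size)
    f_g = np.zeros(nodes.size)
    cells = np.minimum((xp / dx).astype(int), n_cells - 1)
    # 1) Particle-to-grid: scatter mass, momentum and internal force
    for p in range(n_p):
        i = cells[p]
        t = (xp[p] - nodes[i]) / dx
        w, dw = (1.0 - t, t), (-1.0 / dx, 1.0 / dx)  # linear hat functions
        stress = E * (Fp[p] - 1.0)                   # linear elasticity
        vol = Vp0[p] * Fp[p]
        for a, idx in enumerate((i, i + 1)):
            m_g[idx] += w[a] * mp[p]
            mv_g[idx] += w[a] * mp[p] * vp[p]
            f_g[idx] -= vol * stress * dw[a]
    # 2) Explicit grid momentum update with fixed ends
    v_g = np.where(m_g > 1e-12, (mv_g + dt * f_g) / np.maximum(m_g, 1e-12), 0.0)
    v_g[0] = v_g[-1] = 0.0
    # 3) Grid-to-particle: PIC transfer of velocity and velocity gradient
    xp2, vp2, Fp2 = xp.copy(), vp.copy(), Fp.copy()
    for p in range(n_p):
        i = cells[p]
        t = (xp[p] - nodes[i]) / dx
        v_pic = (1.0 - t) * v_g[i] + t * v_g[i + 1]
        grad_v = (v_g[i + 1] - v_g[i]) / dx
        vp2[p] = v_pic
        xp2[p] = xp[p] + dt * v_pic
        Fp2[p] = Fp[p] * (1.0 + dt * grad_v)
    return xp2, vp2, Fp2

for _ in range(200):                        # advance the vibrating bar
    xp, vp, Fp = step(xp, vp, Fp)
```

Because the background grid is reset every step, particles never tangle the mesh, which is the property that lets MPM take the larger time steps the paper exploits for fabric handling.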

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 167
1042 Carbon Sequestration in Spatio-Temporal Vegetation Dynamics

Authors: Nothando Gwazani, K. R. Marembo

Abstract:

An increase in the atmospheric concentration of carbon dioxide (CO₂) from fossil fuels and land use change necessitates the identification of strategies for mitigating threats associated with global warming. Oceans are insufficient to offset the accelerating rate of carbon emissions. However, the limits of oceans as a means of reducing the carbon footprint can be effectively overcome by the storage of carbon in terrestrial carbon sinks. The gases with special optical properties that are responsible for climate warming include carbon dioxide (CO₂), water vapor, methane (CH₄), nitrous oxide (N₂O), nitrogen oxides (NOₓ), stratospheric ozone (O₃), carbon monoxide (CO) and chlorofluorocarbons (CFCs). Amongst these, CO₂ plays a crucial role, as it contributes 50% of the total greenhouse effect and has been linked to climate change. Because plants act as carbon sinks, interest in terrestrial carbon sequestration has increased in an effort to explore opportunities for climate change mitigation. Removal of carbon from the atmosphere is a topical issue that addresses one important aspect of an overall strategy for carbon management, namely helping to mitigate the increasing emissions of CO₂. Thus, terrestrial ecosystems have gained importance for their potential to sequester carbon and reduce the carbon load absorbed by oceans, which has a substantial impact on ocean species. Field data and electromagnetic spectrum bands were analyzed using ArcGIS 10.2, QGIS 2.8 and ERDAS IMAGINE 2015 to examine the vegetation distribution. Satellite remote sensing data coupled with the Normalized Difference Vegetation Index (NDVI) were employed to assess future potential changes in vegetation distribution in the Eastern Cape Province of South Africa. The analysis, conducted at 5-year intervals, examines the amount of carbon absorbed based on vegetation distribution.
In 2015, the numerical results showed low vegetation distribution, which increased the acidity of the oceans and gravely affected fish species and corals. The outcomes suggest that the study area could be effectively utilized for carbon sequestration so as to mitigate ocean acidification. The vegetation changes measured through this investigation suggest an environmental shift and a reduced vegetation carbon sink, which threatens biodiversity and ecosystems. In order to sustain the amount of carbon in terrestrial ecosystems, the identified ecological factors should be enhanced through the application of good land and forest management practices. This will increase the carbon stock of terrestrial ecosystems, thereby reducing direct loss to the atmosphere.
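NDVI itself is a simple band ratio, (NIR − Red) / (NIR + Red). A minimal sketch follows; the reflectance values are hypothetical, and a real workflow would read Landsat or Sentinel band rasters rather than scalars:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.

    eps guards against division by zero over dark pixels (e.g. water).
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical surface reflectances: dense vegetation reflects strongly in
# the near-infrared and absorbs red; bare soil reflects both similarly.
dense_veg = ndvi(0.50, 0.08)   # high positive NDVI
bare_soil = ndvi(0.30, 0.25)   # NDVI near zero
```

Change maps of this index at 5-year intervals are what the study uses as a proxy for vegetation carbon sink.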

Keywords: remote sensing, vegetation dynamics, carbon sequestration, terrestrial carbon sink

Procedia PDF Downloads 132
1041 Disaster Capitalism, Charter Schools, and the Reproduction of Inequality in Poor, Disabled Students: An Ethnographic Case Study

Authors: Sylvia Mac

Abstract:

This ethnographic case study examines disaster capitalism, neoliberal market-based school reforms, and disability through the lens of Disability Studies in Education. More specifically, it explores neoliberalism and special education at a small, urban charter school in a large city in California and the (re)production of social inequality. The study uses the Sociology of Special Education to examine the ways in which special education is used to sort and stratify disabled students. At a time when rhetoric surrounding public schools is framed in catastrophic and dismal language in order to justify the privatization of public education, small urban charter schools must be examined to learn whether they are living up to their promise or acting as another way to maintain economic and racial segregation. The study concludes that neoliberal contexts threaten successful inclusive education and normalize poor, disabled students’ continued low achievement and poor post-secondary outcomes. This ethnographic case study took place at a small urban charter school in a large city in California. Participants included three special education students, the special education teacher, the special education assistant, a regular education teacher, and the two founders and charter writers. The school claimed to have a push-in model of special education in which all special education students were fully included in the general education classroom. Although the school presented itself as fully inclusive, some special education students also attended a pull-out class called Study Skills. The study found that inclusion and neoliberalism are differing ideologies that cannot co-exist. Successful inclusive environments cannot thrive under the influence of neoliberal education policies such as efficiency and cost-cutting. Additionally, the push for students to join the global knowledge economy means that more and more low attainers are further marginalized and kept in poverty.
At this school, neoliberal ideology eclipsed the promise of inclusive education for special education students. This case study has shown the need for inclusive education to be interrogated through lenses that consider macro factors, such as neoliberal ideology in public education, as well as the emerging global knowledge economy and increasing income inequality. Barriers to inclusion inside the school, such as teachers’ attitudes, teacher preparedness, and school infrastructure, paint only part of the picture. Inclusive education is also threatened by neoliberal ideology that shifts responsibility from the state to the individual. This ideology is dangerous because it reifies the stereotypes of disabled students as lazy, needy drains on already dwindling budgets. If these stereotypes persist, inclusive education will have a difficult time succeeding. In order to more fully examine the ways in which inclusive education can become truly emancipatory, we need more analysis of the relationship between neoliberalism, disability, and special education.

Keywords: case study, disaster capitalism, inclusive education, neoliberalism

Procedia PDF Downloads 201
1040 “Divorced Women are Like Second-Hand Clothes”: Hate Language in Media Discourse

Authors: Sopio Totibadze

Abstract:

Although the legal framework of Georgia reflects the main principles of gender equality and is in line with international standards, Georgia remains a male-dominated society. This means that men prevail in many areas of social, economic, and political life, which frequently gives women a subordinate status in society and the family. According to the latest studies, “violence against women and girls in Georgia is also recognized as a public problem, and it is necessary to focus on it”. Moreover, the Public Defender's report (2019) reveals that “in the last five years, 151 women were killed in Georgia due to gender and family violence”. Unfortunately, crimes rooted in gender-based oppression are frequent in Georgia, and they pose a threat not only to women but also to people of any gender whose desires and aspirations do not correspond to the gender norms and roles prevailing in society. It is well known that language is often used as a tool for gender oppression. Therefore, feminist and gender studies in linguistics ultimately serve to represent the problem, reflect on it, and propose ways to solve it. Together with technical advancement in communication, a new form of discrimination has arisen: hate language against women in electronic media discourse. Due to the nature of social media and the internet, messages containing hate language can spread in seconds and reach millions of people. However, only a few know about the detrimental effects they may have on the addressee and society. This paper aims to analyse the hateful comments directed at women on various media platforms to determine the linguistic strategies used while attacking women and the reasons why women may fall victim to this type of hate language. The data have been collected over six months, and overall, 500 comments will be examined for the paper. Qualitative and quantitative analyses were chosen as the methodology of the study.
The comments posted on various media platforms have been selected manually for several reasons, the most important being the difficulty of identifying hate speech, as it can disguise itself in different ways: humour, memes, etc. The comments on the articles, posts, pictures, and videos selected for sociolinguistic analysis depict a woman, a taboo topic, or a scandalous event centred on a woman that triggered hate language towards the person to whom the post or article was dedicated. The study has revealed that a woman can become a victim of hatred if she does something considered a deviation from a societal norm, namely getting a divorce, being sexually active, being vocal about feminist values, or talking about taboos. Interestingly, those who utilize hate language are not only men trying to “normalize” prejudiced patriarchal values but also women, who are equally active in bringing down a “strong” woman. The paper also aims to raise awareness of the hate language directed at women, as being knowledgeable about the issue at hand is the first step towards tackling it.

Keywords: femicide, hate language, media discourse, sociolinguistics

Procedia PDF Downloads 67
1039 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt different, more sustainable transportation systems. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable web of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of the impacts of the change. In this report, both the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis since they are all big metropolitan cities with complex transportation systems. All three cities have started action plans to achieve a full fleet of e-buses in the coming decades. Moreover, their energy carbon footprints and energy prices are very different, and these are the key factors determining the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided the government fund offsets 50% of the EBS purchase cost.
With the government subsidy, an EBS starts to generate positive cash flow in the fifth year and can pay back its investment in 5 years. Note that our model counts environmental and health benefits: every year, $50,000 per bus is counted as a health benefit. Besides health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to replace all NYC diesel buses with electric buses in 10 years, i.e., by 2033. The regression-based process minimizes the total cost over the years while achieving the lowest environmental cost. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm for optimizing the phased electrification are implemented in Python code and can be shared.
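The per-bus NPB calculation described above reduces to a discounted cash-flow sum. In the sketch below, the discount rate and the bus purchase prices are illustrative assumptions; only the $50,000 annual health benefit, the 50% subsidy, and the 12-year energy/maintenance savings are taken from the text:

```python
def npv(cashflows, rate=0.05):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

years = 12
ebs_price, dbs_price = 750_000.0, 500_000.0   # assumed purchase prices
subsidy = 0.5                                  # 50% government fund on EBS
extra_capex = ebs_price * (1 - subsidy) - dbs_price   # EBS vs DBS, year 0

annual_benefit = (
    50_000.0             # health/environment benefit per bus-year (from text)
    + 600_000.0 / years  # energy savings spread over the life cycle
    + 200_000.0 / years  # maintenance savings spread over the life cycle
)
flows = [-extra_capex] + [annual_benefit] * years
npb_per_bus = npv(flows)   # positive -> EBS beats DBS over 12 years
```

With these assumed inputs the sketch lands in the same ballpark as the roughly $1.4 million per-bus figure quoted above; the exact value depends on the discount rate and price assumptions.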

Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 22
1038 Large-Scale Production of High-Performance Fiber-Metal-Laminates by Prepreg-Press-Technology

Authors: Christian Lauter, Corin Reuter, Shuang Wu, Thomas Troester

Abstract:

Lightweight construction has become increasingly important over the last decades in several applications, e.g., in the automotive and aircraft sectors. This is the result of economic and ecological constraints on the one hand and increasing safety and comfort requirements on the other. In the field of lightweight design, different approaches are used depending on the specific requirements of the technical systems. The use of continuous carbon fiber reinforced plastics (CFRP) offers the largest weight-saving potential, sometimes more than 50% compared to conventional metal constructions. However, industrial applications remain very limited because of the cost-intensive manufacturing of the fibers and the production technologies. Other disadvantages of pure CFRP structures concern quality control and damage resistance. One approach to meeting these challenges is hybrid materials, in which CFRP and sheet metal are combined at the material level. This opens new opportunities for innovative process routes. Hybrid lightweight design results in lower costs due to optimized material utilization and the possibility of integrating the structures into the existing production processes of automobile manufacturers. Recent and current research has pointed out the advantages of two-layered hybrid materials, i.e., the possibility of realizing structures with tailored mechanical properties or of dividing the curing cycle of the epoxy resin into two steps. Current research work at the Chair for Automotive Lightweight Design (LiA) at Paderborn University focuses on production processes for fiber-metal-laminates. The aim of this work is the development and qualification of a large-scale production process for high-performance fiber-metal-laminates (FML) for industrial applications in the automotive or aircraft sector.
Therefore, the prepreg-press technology is used, in which pre-impregnated carbon fibers and sheet metals are formed and cured in a closed, heated mold. The investigations focus, e.g., on the realization of short process chains and cycle times, on the reduction of time-consuming manual process steps, and on the reduction of material costs. This paper first gives an overview of the main steps of the production process. Afterwards, experimental results are discussed, concentrating on the influence of different process parameters on the mechanical properties and laminate quality, and on the identification of process limits. Finally, the advantages of this technology compared with conventional FML production processes and other lightweight design approaches are presented.

Keywords: composite material, fiber-metal-laminate, lightweight construction, prepreg-press-technology, large-series production

Procedia PDF Downloads 223
1037 Study of the Impact of Synthesis Method and Chemical Composition on Photocatalytic Properties of Cobalt Ferrite Catalysts

Authors: Katerina Zaharieva, Vicente Rives, Martin Tsvetkov, Raquel Trujillano, Boris Kunev, Ivan Mitov, Maria Milanova, Zara Cherkezova-Zheleva

Abstract:

The nanostructured cobalt ferrite-type materials Sample A - Co0.25Fe2.75O4, Sample B - Co0.5Fe2.5O4, and Sample C - CoFe2O4 were prepared by co-precipitation in our previous investigations. The co-precipitated Sample B and Sample C were mechanochemically activated in order to produce Sample D - Co0.5Fe2.5O4 and Sample E - CoFe2O4. PXRD, Moessbauer and FTIR spectroscopies, specific surface area determination by the BET method, thermal analysis, elemental chemical analysis and temperature-programmed reduction were used to investigate the prepared nano-sized samples. Changes in Malachite green dye concentration during the photocatalytic decolorization reaction over cobalt ferrite-type catalysts of different chemical compositions are reported. The photocatalytic results show that an increase in the degree of incorporation of cobalt ions into the magnetite host structure of the co-precipitated cobalt ferrite-type samples results in an increase of the photocatalytic activity: Sample A (4×10⁻³ min⁻¹) < Sample B (5×10⁻³ min⁻¹) < Sample C (7×10⁻³ min⁻¹). Mechanochemically activated photocatalysts showed a higher activity than the co-precipitated ferrite materials: Sample D (16×10⁻³ min⁻¹) > Sample E (14×10⁻³ min⁻¹) > Sample C (7×10⁻³ min⁻¹) > Sample B (5×10⁻³ min⁻¹) > Sample A (4×10⁻³ min⁻¹). On decreasing the degree of substitution of iron ions by cobalt ions, a higher sorption of the dye after the dark period was observed for the co-precipitated cobalt ferrite materials: Sample C (72%) < Sample B (78%) < Sample A (80%). Mechanochemically treated ferrite catalysts and co-precipitated Sample B possess similar sorption capacities: Sample D (78%) ~ Sample E (78%) ~ Sample B (78%). The prepared nano-sized cobalt ferrite-type materials demonstrate good photocatalytic and sorption properties.
Mechanochemically activated Sample D - Co0.5Fe2.5O4 (16×10⁻³ min⁻¹) and Sample E - CoFe2O4 (14×10⁻³ min⁻¹) possess higher photocatalytic activity than the most commonly used UV-light catalyst, Degussa P25 (12×10⁻³ min⁻¹). The dependence of the photocatalytic activity and sorption properties on the preparation method and on the degree of substitution of iron ions by cobalt ions in the synthesized cobalt ferrite samples is established. Mechanochemical activation leads to the formation of nano-structured cobalt ferrite-type catalysts (Sample D and Sample E) with higher rate constants than those of the ferrite materials (Sample A, Sample B, and Sample C) prepared by the co-precipitation procedure. The increase in the degree of substitution of iron ions by cobalt ions leads to improved photocatalytic properties and lower sorption capacities of the co-precipitated ferrite samples. The good sorption properties, between 72 and 80%, of the prepared ferrite-type materials show that they could be used as potential cheap absorbents for the purification of polluted waters.
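Rate constants expressed in min⁻¹, as above, are consistent with pseudo-first-order decolorization kinetics, ln(C₀/C) = kt. A sketch of recovering k from concentration-time data follows; the data here are synthetic, not the study's measurements:

```python
import numpy as np

def first_order_k(t_min, conc):
    """Fit ln(C0/C) = k*t through the origin; return k in min^-1."""
    t = np.asarray(t_min, dtype=float)
    y = np.log(conc[0] / np.asarray(conc, dtype=float))
    # Least-squares slope through the origin: k = sum(t*y) / sum(t*t)
    return float(np.dot(t, y) / np.dot(t, t))

t = [0, 30, 60, 90, 120]                       # minutes (hypothetical)
c = [1.0 * np.exp(-0.016 * ti) for ti in t]    # synthetic decay, k = 16e-3
k = first_order_k(t, c)                        # recovers ~0.016 min^-1
```

The same fit applied to each catalyst's decolorization curve yields the per-sample constants compared in the abstract.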

Keywords: nanodimensional cobalt ferrites, photocatalyst, synthesis, mechanochemical activation

Procedia PDF Downloads 251
1036 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as “smart” because the underlying technology allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific conditions. Execution happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many of the issues in managing supply chains arise at the planning and coordination stages, which, owing to their complexity, can be implemented in a smart contract on a blockchain. Manufacturing delays and limited third-party visibility into product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation conditions (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). In modern economic systems, products are handled by several providers before reaching customers. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when intermediaries are eliminated from the equation.
The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research is on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover unforeseen supply chain challenges.
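The conditional, self-executing logic described above can be sketched in ordinary Python; a real deployment would use an on-chain language such as Solidity, and the cold-chain rule and figures below are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class ShipmentContract:
    """Toy stand-in for a supply-chain smart contract (not on-chain code)."""
    price: float
    max_temp_c: float        # cold-chain requirement agreed in the contract
    delivered: bool = False

    def settle(self, temperature_log):
        """Release payment only if delivery met the temperature condition."""
        if not self.delivered:
            return "pending"
        if all(t <= self.max_temp_c for t in temperature_log):
            return f"release {self.price} to supplier"
        return "refund buyer"    # breach: cold chain violated

contract = ShipmentContract(price=10_000.0, max_temp_c=8.0, delivered=True)
outcome_ok = contract.settle([4.1, 5.0, 6.3])    # within spec
outcome_bad = contract.settle([4.1, 9.5, 6.3])   # spec violated
```

On an actual blockchain, the temperature log would come from IoT sensor oracles and settlement would be enforced by the network rather than by a trusted intermediary, which is precisely the coordination advantage the abstract describes.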

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 99
1035 Proposed Design Principles for Low-Income Housing in South Africa

Authors: Gerald Steyn

Abstract:

Despite the huge number of identical, tiny, boxy, freestanding houses built by the South African government after the advent of democracy in 1994, squatter camps continue to mushroom, and there is no evidence that the backlog is being reduced. Not only is the wasteful low-density detached-unit approach of the past being perpetuated, but the social, spatial, and economic marginalization is worse than before 1994. The situation is precarious, since squatters are vulnerable to fires and flooding, while the occupants of the housing schemes are trapped far from employment opportunities and public amenities. Despite these insecurities, the architectural, urban design, and city planning professions are puzzlingly quiet. Design projects address these issues only at universities, albeit inevitably with somewhat utopian notions. Geoffrey Payne, the renowned urban housing and urban development consultant and researcher focusing on issues in the Global South, once proclaimed that “we do not have a housing problem – we have a settlement problem.” This dictum was used as the guiding philosophy to conceptualize urban design and architectural principles that foreground the needs of low-income households and allow them to be fully integrated into the larger conurbation. Information was derived from intensive research over two decades, involving frequent visits to informal settlements, historic Black townships, and rural villages. Observations, measured site surveys, and interviews resulted in several scholarly articles from which a set of desirable urban and architectural criteria could be extracted. To formulate culturally appropriate design principles, existing vernacular and informal patterns were analyzed, reconciled with contemporary designs that align with the requirements for the envisaged settlement attributes, and reimagined as residential design principles. 
Five interrelated design principles are proposed, ranging in scale from (1) integrating informal settlements into the city, through (2) linear neighborhoods, (3) market streets as wards, and (4) linear neighborhoods, to (5) typologies and densities for clustered and aggregated patios and courtyards. Each design principle is described, first in terms of its context and associated issues of concern, followed by a discussion of the patterns available to inform a possible solution, and finally, an explanation and graphic illustration of the proposed design. The approach is predominantly bottom-up, since each of the five principles is unfolded from existing informal and vernacular practices studied in situ. They are, however, articulated and represented in terms of contemporary design language. Contrary to an idealized vision of housing for South Africa’s low-income urban households, this study proposes actual principles for critical assessment by peers in the tradition of architectural research in design.

Keywords: culturally appropriate design principles, informal settlements, South Africa’s housing backlog, squatter camps

Procedia PDF Downloads 29
1034 Barrier Membrane Influence on the Histology of Guided Bone Regeneration: A Systematic Review and Meta-Analysis

Authors: Laura Canagueral-Pellice, Antonio Munar-Frau, Adaia Valls-Ontanon, Joao Carames, Federico Hernandez-Alfaro, Jordi Caballe-Serrano

Abstract:

Objective: Guided bone regeneration (GBR) aims to replace missing bone with a new structure to achieve long-term stability of rehabilitations. The aim of the present systematic review and meta-analysis is to determine the effect of barrier membranes on histological outcomes after GBR procedures. Moreover, the effects of the grafting material and tissue gain were analyzed. Materials & methods: Two independent reviewers performed an electronic search in PubMed and Scopus, identifying all eligible publications up to March 2020. Only randomized controlled trials (RCTs) assessing a histological analysis of augmented areas were included. Results: A total of 6 publications were included in the present systematic review. A total of 110 biopsied sites were analysed; 10 corresponded to vertical bone augmentation procedures, whereas 100 analysed horizontal regeneration procedures. A mean tissue gain of 3 ± 1.48 mm was obtained for horizontal defects. Histological assessment of new bone formation, residual particles, and sub-epithelial connective tissue (SCT) was reported. The four main barrier membranes used were natural collagen membranes, e-PTFE membranes, polylactic resorbable membranes, and acellular dermal matrix membranes (AMDG). The analysis demonstrated that resorbable membranes result in higher values of new bone formation and lower values of residual particles and SCT. Xenografts resulted in lower new bone formation compared to allografts; however, no statistically significant differences were observed regarding residual particles and SCT. Overall, regeneration procedures adding autogenous bone, plasma derivates or growth factors generally achieved greater new bone formation and tissue gain. Conclusions: There is limited evidence favoring a certain type of barrier membrane in GBR. 
Data need to be evaluated carefully; however, resorbable membranes are correlated with greater new bone formation values, especially when combined with allograft materials and/or the addition of autogenous bone, platelet-rich plasma (PRP) or growth factors in the regeneration area. More studies assessing the histological outcomes of different GBR protocols and procedures testing different biomaterials are needed to maximize the clinical and histological outcomes in bone regeneration science.

Keywords: barrier membrane, graft material, guided bone regeneration, implant surgery, histology

Procedia PDF Downloads 134
1033 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality

Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard

Abstract:

Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging, where CA3 and the dentate gyrus are the hippocampal subfields with the most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As a combination of tests is likely to predict early Alzheimer’s disease (AD) better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, the Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year. 
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patient performance and reaction times also differ from healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may be an early marker of AD.

Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus

Procedia PDF Downloads 558
1032 Distribution and Ecological Risk Assessment of Trace Elements in Sediments along the Ganges River Estuary, India

Authors: Priyanka Mondal, Santosh K. Sarkar

Abstract:

The present study investigated the spatiotemporal distribution and ecological risk assessment of trace elements in surface sediments (top 0 - 5 cm; grain size ≤ 0.63 µm) in relevance to sediment quality characteristics along the Ganges River Estuary, India. Sediment samples were collected during ebb tide from intertidal regions covering seven sampling sites under diverse environmental stresses. The elements were analyzed using ICP-AES. This positive, mixohaline, macro-tidal estuary has global significance, contributing ecological and economic services. The presence of fine clay particles (47.03%) enhances the adsorption as well as the transportation of trace elements. There is a remarkable inter-metallic variation (mg kg-1 dry weight) in the distribution pattern, in the following manner: Al (31801 ± 15943) > Fe (23337 ± 7584) > Mn (461 ± 147) > S (381 ± 235) > Zn (54 ± 18) > V (43 ± 14) > Cr (39 ± 15) > As (34 ± 15) > Cu (27 ± 11) > Ni (24 ± 9) > Se (17 ± 8) > Co (11 ± 3) > Mo (10 ± 2) > Hg (0.02 ± 0.01). An overall trend of enrichment of the majority of trace elements was pronounced at the site Lot 8, ~ 35 km upstream of the estuarine mouth. In contrast, the minimum concentration was recorded at the site Gangasagar, at the mouth of the estuary, with a high energy profile. The prevalent variations in trace element distribution are attributable to a set of cumulative factors such as hydrodynamic conditions, sediment dispersion patterns and textural variations, as well as non-homogeneous input of contaminants from point and non-point sources. In order to gain insight into the distribution, accumulation, and pollution status of trace elements, the geoaccumulation index (Igeo) and enrichment factor (EF) were used. The Igeo indicated that surface sediments were moderately polluted with As (0.60) and Mo (1.30) and strongly contaminated with Se (4.0). 
The EF indicated severe pollution by Se (53.82) and significant pollution by As (4.05) and Mo (6.0), indicating an influx of As, Mo and Se into sediments from anthropogenic sources (such as industrial and municipal sewage, atmospheric deposition, agricultural run-off, etc.). The significant role of the megacity Calcutta, in terms of untreated sewage discharge, atmospheric inputs and other anthropogenic activities, is worth mentioning. The ecological risk for different trace elements was evaluated using the sediment quality guidelines effects range-low (ERL) and effects range-median (ERM). The concentrations of As, Cu and Ni exceeded the ERL value at 100%, 43% and 86% of the sampling sites, respectively, while no element concentration exceeded the ERM. The potential ecological risk index values revealed that As at 14.3% of the sampling sites would pose a relatively moderate risk to benthic organisms. The effective role of finer clay particles in trace element distribution was revealed by multivariate analysis. The authors strongly recommend regular monitoring, emphasizing accurate appraisal of the potential risk of trace elements, for effective and sustainable management of this estuarine environment.
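The two indices used above follow standard formulations: the Müller geoaccumulation index, Igeo = log2(Cn / (1.5 Bn)), and the enrichment factor normalized to a conservative reference element such as Al. A minimal sketch of both calculations follows; the background values used below are hypothetical placeholders for illustration, not the study's reference data.

```python
import math

def igeo(concentration, background):
    """Müller geoaccumulation index: Igeo = log2(C_n / (1.5 * B_n)).
    The factor 1.5 corrects for natural fluctuations in background values."""
    return math.log2(concentration / (1.5 * background))

def enrichment_factor(c_sample, al_sample, c_background, al_background):
    """EF = (C/Al)_sample / (C/Al)_background, normalizing the element
    concentration to Al as the conservative reference element."""
    return (c_sample / al_sample) / (c_background / al_background)

# Hypothetical illustration: an element measured at 34 mg/kg against an
# assumed background of 15 mg/kg, with Al at 31801 mg/kg in the sample
# versus an assumed 80000 mg/kg crustal background (demonstration values).
print(round(igeo(34, 15), 2))                               # ≈ 0.6
print(round(enrichment_factor(34, 31801, 15, 80000), 1))    # ≈ 5.7
```

Igeo classes (unpolluted < 0, moderately polluted 1-2, strongly contaminated > 3, etc.) then map the computed values onto the pollution categories quoted in the abstract.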

Keywords: pollution assessment, sediment contamination, sediment quality, trace elements

Procedia PDF Downloads 242
1031 Exploring the Social Health and Well-Being Factors of Hydraulic Fracturing

Authors: S. Grinnell

Abstract:

A PhD research project exploring the social health and well-being impacts associated with hydraulic fracturing, with an aim to produce best practice support guidance for those anticipating dealing with planning applications or submitting Environmental Impact Assessments (EIAs). Amid a possible global energy crisis, founded upon a number of factors, including unstable political situations, population growth, and people living longer, it is perhaps inevitable that hydraulic fracturing (commonly referred to as ‘fracking’) will become a major player within the global long-term energy and sustainability agenda. As there is currently no best practice guidance for governing bodies, the Best Practice Support Document will be targeted at a number of audiences, including consultants undertaking EIAs, planning officers, those commissioning EIAs, industry, and interested public stakeholders. It will offer robust, evidence-based criteria and recommendations that provide a clear narrative, a consistent and shared approach to the language used, and an understanding of the issues identified. It is proposed that the Best Practice Support Document will also support the mitigation of the health impacts identified. The Best Practice Support Document will support the newly amended Environmental Impact Assessment Directive (2011/92/EU), to be transposed into UK law by 2017. A significant amendment introduced focuses on a ‘higher level of protection to the environment and health.’ Methodology: A qualitative research methods approach is being taken with this research, with a number of key stages. A literature review has been undertaken and critically reviewed and analysed. This was followed by a descriptive content analysis of a selection of international and national policies, programmes and strategies, along with published Environmental Impact Assessments and associated planning guidance. 
In terms of data collection, a number of stakeholders were interviewed, as were a number of focus groups of local community groups potentially affected by fracking, drawn from across the UK. A thematic analysis of all the data collected and the literature review will be undertaken using NVivo. The Best Practice Support Document will be developed based on the outcomes of the analysis and will be tested and piloted in the professional field before a live launch. Concluding statement: Whilst fracking is not a new concept, the technology is now driving a new force behind the use of this engineering to supply fuels. A number of countries have pledged moratoria on fracking until the impacts on health have been further investigated, whilst other countries, including Poland and the UK, are pushing to support its use. If this should be the case, it will be important that the public’s concerns, perceptions, fears and objections regarding the wider social health and well-being impacts are considered along with the more traditional biomedical health impacts.

Keywords: fracking, hydraulic fracturing, socio-economic health, well-being

Procedia PDF Downloads 222
1030 Internationalization Process Model for Construction Firms: Stages and Strategies

Authors: S. Ping Ho, R. Dahal

Abstract:

The global economy has drastically changed how firms operate and compete. Although the construction industry is ‘local’ by its nature, the internationalization of the construction industry has become an inevitable reality. As a result of global competition, staying domestic is no longer safe from competition; on the contrary, growing into an MNE (multi-national enterprise) becomes one of the important strategies for a firm to survive in the global competition. For successful entry into competitive markets, firms need to re-define their competitive advantages and re-identify the sources of those advantages. A firm’s initiation of internationalization is not necessarily a result of strategic planning; it may also involve idiosyncratic events that pave the path leading to internationalization. For example, a local firm’s incidental or unintentional collaboration with an MNE can become the initiating point of its internationalization process. However, because of the intensity of today’s global competition, many firms are compelled to initiate internationalization as a strategic response. Understanding where a firm stands in the process of internationalization and appropriately implementing strategies at each stage lead construction firms to a successful internationalization journey. This study is carried out to develop a model of the internationalization process, which derives appropriate strategies that construction firms can implement at each stage. The proposed model integrates two major and complementary views of internationalization and expresses the dynamic process of internationalization in three stages: the pre-international (PRE) stage, the foreign direct investment (FDI) stage, and the multi-national enterprise (MNE) stage. 
The strategies implied in the proposed model are derived with a focus on capability building, market locations, and entry modes, based on the resource-based view: value, rareness, inimitability, and non-substitutability (VRIN). The proposed dynamic process model can benefit construction firms that are willing to expand their business into new market areas. Strategies for internationalization, such as core competence strategy, market selection, partner selection, and entry mode strategy, can be derived from the proposed model. The internationalization process is expressed in two different forms. First, we discuss the construction internationalization process, identify the driving factor(s) of the process, and explain the strategy formation in the process. Second, we define the stages of internationalization along the process and the corresponding strategies in each stage. The strategies may include how to exploit existing advantages for competition at the current stage and how to develop or explore additional advantages appropriate for the next stage. In particular, the additionally developed advantages will then be accumulated and drive forward the firm’s stage of internationalization, which will further determine the subsequent strategies, and so forth, spiraling up the stages to a higher degree of internationalization. However, the formation of additional strategies for the next stage does not happen automatically; the strategy evolution is based on the firm’s dynamic capabilities.

Keywords: construction industry, dynamic capabilities, internationalization process, internationalization strategies, strategic management

Procedia PDF Downloads 41
1029 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID

Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis

Abstract:

Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest x-rays and unpredictable staff shortages resulted in a huge strain on the department's workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. Therefore, there could be demand for a program that detects acute changes in imaging compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest x-rays in patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate and then test itself. DICOM and PNG image formats were selected due to their lossless file formats. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training involved teaching the convolutional neural network what constituted a normal study and what changes on the x-rays were compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were in batch sizes of 8 and underwent 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest x-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. 
Following a discussion with our programmer, there are areas where modifications to the weighting of the algorithm can be made to improve the detection rates. Given the high detection rate of the program and its potential ease of implementation, it would be effective in assisting staff who are not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program, and widening its scope to detect multiple pathologies such as lung masses, will greatly assist both the radiology department and our colleagues by increasing workflow and detection rates.
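The evaluation metrics reported above (true positive/negative detection rate and area under the ROC curve) can be computed directly from held-out predictions. The sketch below uses invented placeholder scores for a 13-positive/13-negative test set, mimicking the evaluation described but not reproducing the trial's actual model outputs.

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a randomly chosen positive case scores higher than a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Fraction of cases classified correctly at the given score threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical classifier scores for a test set of 13 positives then
# 13 negatives (illustration only).
labels = [1] * 13 + [0] * 13
scores = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6, 0.75, 0.9, 0.8, 0.65, 0.3, 0.88, 0.92,
          0.1, 0.2, 0.15, 0.4, 0.05, 0.35, 0.6, 0.25, 0.3, 0.2, 0.45, 0.1, 0.55]
print(round(accuracy(labels, scores), 4), round(roc_auc(labels, scores), 4))
```

Computing AUC this way avoids any curve interpolation, which matters for small test sets like the 26-sample set described above.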

Keywords: artificial intelligence, COVID, neural network, machine learning

Procedia PDF Downloads 72
1028 The Political Economy of the Global Climate Change Adaptation Initiatives: A Case Study on the Global Environmental Facility

Authors: Anar Koli

Abstract:

After the Paris agreement in 2015, a comprehensive initiative towards adaptation to climate change is emerging from both developed and developing countries. The Global Environmental Facility (GEF), which finances a global portfolio of adaptation projects and programs in over 124 countries, is playing a significant role in a new financing framework that includes the concept of “climate-resilient development”. However, both the adaptation and sustainable development paradigms remain contested, especially the role of the multilateral institutions that provide technical and financial assistance to the developing world. Focusing on the adaptation initiatives of the GEF, this study aims to understand to what extent global multilateral institutions, particularly the GEF, contribute to climate-resilient development. From the political ecology perspective, the argument of this study is that the global financial framework is highly politicized, and the contribution of global institutions to global climate change needs to be understood from both response and causal perspectives. A holistic perspective, which includes the contribution of the GEF both as a response to climate change and as a cause of global climate change, is needed to understand the broader environment-political economy relation. The study intends to make a critical analysis of the way in which the political economy structure and the environment are related, along with the social and ecological implications. It does not provide a narrow description of institutional responses to climate change; rather, it looks at how global institutions influence the relationship between global ecologies and economies. This study thus developed a framework combining the global governance and political economy perspectives. 
This framework includes the environment-society relation, the environment-political economy linkage, global institutions as the orchestra, and the division between the North and the South. Through the analysis of the GEF as the orchestra of global governance, this study helps to understand how the GEF coordinates interactions between the North and the South and responds to global climate-resilient development. Through the other components of the framework, the study explains how the role of global institutions is related to the causes of human-induced global climate change. The study employs a case study based on both quantitative and qualitative data. Along with GEF reports and data sets, this study draws on an eclectic range of literature from several disciplines to explain the broader relation of the environment and political economy. Based on a case study of the GEF, the study found that the GEF makes positive contributions to developing countries’ capacity in terms of sustainable development goals and local institutional development. However, through a critical holistic analysis, this study found that this contribution to resilient development helps developing countries conform to the fossil-fuel-based capitalist political economy. The global governance institution contributes both to the pro-market environment-society relation and to the consequences of this relation.

Keywords: climate change adaptation, global environmental facility (GEF), political economy, the north -south relation

Procedia PDF Downloads 211
1027 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zones of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, delay may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR’s sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. 
A specificity of the considered operational research problem in comparison with the traditional Kadane-De Groot-Stone search models is that in our model the probability of the successful search outcome depends not only on cost/time/probability parameters assigned to each individual location but, as well, on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, in each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches for a location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and corresponding algorithm.
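The greedy strategy described above can be sketched with a simple history-dependent Bayesian model: each unsuccessful inspection lowers the posterior probability that the target is at the searched location, so every later effectiveness value depends on the entire search history. The priors, detection probabilities, and costs below are illustrative inputs, not the paper's data; the model also ignores false alarms for brevity.

```python
def greedy_search_plan(priors, detect_prob, costs, steps):
    """Greedy route for one AMR: at each step inspect the location with the
    highest search-effectiveness value p_i * d_i / c_i, then apply the Bayesian
    update for an unsuccessful look, so that later choices depend on the
    whole history of unsuccessful searches."""
    p = list(priors)
    route = []
    for _ in range(steps):
        # effectiveness = probability of detecting the target here now, per unit cost
        best = max(range(len(p)), key=lambda i: p[i] * detect_prob[i] / costs[i])
        route.append(best)
        # Bayes update after an unsuccessful inspection of `best`: the target
        # may still be there, overlooked with probability (1 - d) (false negative).
        miss = p[best] * (1 - detect_prob[best])
        denom = 1 - p[best] * detect_prob[best]
        p = [(miss if i == best else p[i]) / denom for i in range(len(p))]
    return route

route = greedy_search_plan(priors=[0.5, 0.3, 0.2],
                           detect_prob=[0.8, 0.9, 0.9],
                           costs=[1.0, 1.0, 1.0],
                           steps=3)
print(route)  # [0, 1, 2]: each unsuccessful look shifts effort elsewhere
```

Because the posterior at a searched cell never drops to zero, the greedy rule can revisit a location later, which is exactly the history-dependent behavior that distinguishes this model from fixed-priority sweeps.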

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 154