Search results for: enhanced critical incident technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13721

581 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to improve SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The cost of operating BT in an SCS is a common concern in many organizations. These costs must be estimated because they can affect existing cost control strategies. To account for system and deployment costs, one hurdle must be overcome: the costs of developing and running BT in an SCS are, in most cases, not yet clear. Many industries aiming to use BT pay special attention to BT installation cost, which has a direct impact on the total cost of the SCS. Predicting BT installation cost in an SCS may help managers decide whether BT offers an economic advantage. The purpose of this research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail so that they can be used in the prediction process. The second objective is to determine a suitable Supervised Learning technique for predicting the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the cost of running BT enters the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method for framing the data, treating the data, and training the chosen model. It is a learning approach designed to predict an outcome measurement from previously unseen input data. The following steps are conducted to pursue these objectives. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, we choose Supervised Learning methods suitable for BT installation cost prediction in an SCS. According to the literature, Supervised Learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will propose the model with the best predictive performance to find the minimum BT installation costs in an SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in an SCS with the help of Supervised Learning algorithms. We will first select a case study in the field of BT-enabled SCS and then use several Supervised Learning algorithms to predict BT installation cost in the SCS. We will then identify the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.
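
As one concrete illustration of the supervised-learning step named above, the sketch below fits a Support Vector Regression model to a table of installation cost components. The CSV file, the cost-component columns and the hyperparameters are hypothetical placeholders, not data or settings from the study.

```python
# Minimal sketch: predicting BT installation cost in an SCS from cost components.
# File name, column names and hyperparameters are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("bt_installation_costs.csv")            # hypothetical dataset
X = df[["hardware", "software_licenses", "integration",
        "training", "network_nodes"]]                    # assumed cost components
y = df["total_installation_cost"]                        # assumed target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

The same train/test protocol would be repeated for the BP neural network and ANN candidates so that their predictive performance can be compared on the chosen case study.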

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 104
580 Analyzing the Investment Decision and Financing Method of the French Small and Medium-Sized Enterprises

Authors: Eliane Abdo, Olivier Colot

Abstract:

SMEs are always considered a national priority due to their contribution to job creation, innovation and growth. Once the start-up phase is crossed with encouraging results, the company enters the growth phase. In order to improve its competitiveness and to maintain and increase its market share, the company finds it necessary, even obligatory, to develop its tangible and intangible investments. SMEs are generally closed companies with a special and critical financial situation, limited resources and difficult access to capital markets; their shareholders live in a constant conflict between their independence and their need to increase capital, which leads to the entry of new shareholders. Capital structure has always been considered the core of research in corporate finance; moreover, the financial crisis and its repercussions on credit availability, especially for SMEs, make SME financing a hot topic. On the other hand, financial theories do not provide direct answers to capital structure questions; they offer tools and modes of financing that are more accessible to larger companies. Yet an SME's capital structure cannot be independent of its governance structure. Classic financial theory supposes independence between the investment decision and the financing decision: investment determines the volume of funding, but not the split between internal and external funds. In this context, we find it interesting to test the hypothesis that SMEs respond positively to the financial theories applied to large firms and to check whether they are constrained by the conventional solutions used by large companies. This research therefore focuses on the analysis of the resource structure of SMEs in parallel with their investment structure, in order to highlight a link between their asset and liability structures. We grounded our conceptual model in two main theoretical frameworks, the Pecking Order theory and the Trade-Off theory, taking into consideration the characteristics of SMEs. Our data were generated from the DIANE database. Five hypotheses were tested via a panel regression to understand the type of dependence between the financing methods of 3,244 French SMEs and the development of their investment over a period of 10 years (2007-2016). The results show dependence between equity and internal financing in the case of intangible investment development. Moreover, this type of business is constrained in its access to financial debt, since the guarantees provided are not sufficient to meet the banks' requirements. However, for tangible investment development, SMEs rely sequentially on internal financing, bank borrowing, and new share issuance or hybrid financing, which is consistent with the Pecking Order theory. We therefore conclude that unlisted SMEs incur more financial debt to finance their tangible investments than their intangible ones. However, they always prefer internal financing as a first choice. This seems to be confirmed by the finding that the profitability of the company is negatively related to the increase in financial debt. Thus, the predictions of the Pecking Order theory seem to be the most plausible: SMEs rely primarily on self-financing and then turn to debt as a priority to finance their financial deficit.
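
The panel regression described above can be sketched as follows, with firm and year fixed effects and standard errors clustered by firm. The file name, variable names and specification are illustrative assumptions, not the authors' DIANE extraction.

```python
# Minimal sketch of a fixed-effects panel regression on firm-year data.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("sme_panel_2007_2016.csv")   # hypothetical: firm, year, financing and investment variables

# e.g. does tangible investment growth lean on internal funds first, then debt, then equity?
model = smf.ols(
    "tangible_investment_growth ~ internal_financing + financial_debt + new_equity + C(firm) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm"]})

print(model.params[["internal_financing", "financial_debt", "new_equity"]])
```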

Keywords: capital structure, investments, life cycle, pecking order theory, trade off theory

Procedia PDF Downloads 91
579 Moral Rights: Judicial Evidence Insufficiency in the Determination of the Truth and Reasoning in Brazilian Morally Charged Cases

Authors: Rainner Roweder

Abstract:

Theme: This paper analyzes the specificity of judicial evidence linked to dignity and personality rights, otherwise known as moral rights, in the determination of the truth and the formation of judicial reasoning in cases concerning these areas. The research examines how courts in Brazilian domestic law search for the truth and handle evidence in cases involving moral rights, which are abundant and important in Brazil. The main object of the paper is to analyze the effectiveness of evidence in the formation of judicial conviction in matters related to morally contested rights, based on the Brazilian legal system and, for comparison, other Latin American legal systems. In short, the rights of dignity and personality are moral. However, the evidential legal system expects a rational demonstration of moral rights that generates judicial conviction or persuasion. Morality, in turn, tends to be difficult or impossible to demonstrate in court, generating the problem considered in this paper: the study of the demonstration of morality as proof in court. In this sense, the more a right is linked to morality, the more difficult it is to demonstrate in court, expanding the field of judicial discretion and generating legal uncertainty. More specifically, the new personality rights, such as gender and the possibility of altering it, further amplify the problem, being essentially intimate matters that do not fit the objective, rational evidential system as it normally operates in other categories, such as contracts. Therefore, evidencing this legal category in court, with the level of security required by the law, is a herculean task. It becomes virtually impossible to use the same evidentiary system when judging the rights researched here; this generates the need for a new design of the evidential task regarding personality rights, which is the central effort of the present paper. Methodology: The method used in the investigation phase was inductive, together with the comparative law method; in the data treatment phase, the inductive method was also used. Doctrinal, legislative and jurisprudential comparison was the research technique employed. Results: In addition to the peculiar characteristics of personality rights that are not found in other rights, part of them are essentially linked to morality and are not objectively verifiable by design, and it is necessary to use specific argumentative theories, such as interdisciplinary support, for their secure confirmation. The traditional pragmatic theory of proof, having an obviously objective character, aggravates decisionism and generates legal insecurity when applied to rights linked to morality; it therefore needs to be reconstructed for morally charged cases, possibly using a "predictive theory" (and predictive facts) through algorithms in data collection and treatment.

Keywords: moral rights, proof, pragmatic proof theory, insufficiency, Brazil

Procedia PDF Downloads 90
578 Implementation of Hybrid Curriculum in Canadian Dental Schools to Manage Child Abuse and Neglect

Authors: Priyajeet Kaur Kaleka

Abstract:

Introduction: A dentist is often a first responder in the battle for a patient's healthy body and may be the first health professional to observe signs of child abuse, be it physical, emotional, and/or sexual mistreatment. It is therefore an ethical responsibility of the dental clinician to detect and report suspected cases of child abuse and neglect (CAN). The main reasons for not reporting suspected cases of CAN, with special emphasis on the third, are: 1) uncertainty of the diagnosis, 2) lack of knowledge of the reporting procedure, and 3) the fact that child abuse and neglect has remained a subject of ignorance among dental professionals because of a lack of advanced clinical training. Given these epidemic proportions, there is scope for further research on dental school curriculum design. Purpose: This study aimed to assess the knowledge and attitude of dentists in Canada regarding the signs and symptoms of child abuse and neglect (CAN), the reporting procedures, and whether the educational strategies followed by dental schools address this sensitive issue. In pursuit of that aim, this abstract summarizes the evidence related to this question. Materials and Methods: Data were collected through a specially designed questionnaire adapted and modified from the author's previous cross-sectional study on CAN, which was conducted in Pune, India, in 2016 and is available in the PubMed database. Design: A random sample was drawn from the target population of registered dentists and dental students in Canada regarding their knowledge, professional responsibilities, and behavior concerning child abuse. The questionnaire was distributed to 200 members, of whom 157 subjects formed the final sample for statistical analysis, yielding a response rate of 78.5%. Results: Despite having theoretical information on signs and symptoms, 55% of the participants indicated that they are not confident in detecting cases of child physical abuse. 90% of respondents believed that recognizing and handling CAN cases should be part of undergraduate training. Only 4.5% of the participants correctly identified all signs of abuse, owing to inadequate formal training in dental schools and workplaces. Although nearly 96.3% agreed that it is a dentist's legal responsibility to report CAN, only a small percentage of the participants had reported an abuse case in the past, and 72% stated that the most common factor that might prevent a dentist from reporting a case was doubt over the diagnosis. Conclusion: The goal is to motivate dental schools to deal with this critical issue and provide their students with thorough training to strengthen their capability to care for and protect children. Educational institutions should make efforts to spread awareness among dental students regarding the management and tackling of CAN. Clinical Significance: The dental school curriculum should be modified to focus on problem-based learning models that assist graduates in fulfilling their legal and professional responsibilities. CAN literacy should be incorporated into the dental curriculum, which will eventually help future dentists break this intergenerational cycle of violence.

Keywords: abuse, child abuse and neglect, dentist knowledge, dental school curriculum, problem-based learning

Procedia PDF Downloads 181
577 The Social Construction of Diagnosis: An Exploratory Study on Gender Dysphoria and Its Implications on Personal Narratives

Authors: Jessica Neri, Elena Faccio

Abstract:

In Europe, except in Denmark and Malta, legal gender change and the stages of the possible process of gender transition are bound to the diagnosis of a gender identity disorder. The requirement of an evaluation for a mental disorder may have many implications for trans people's self-representations, their interpersonal relations in different social contexts, and their therapeutic relations with clinicians during the transition. Psychopathological language may contribute to defining the individual's reality from normative presuppositions with value implications related to dominant cultural principles. In its effort to mark the boundaries between sanity and pathology, it contributes to defining the management procedures for the diversities and deviances thus constructed, legitimizing the operational practices of particular professional figures. The aim of this research is to analyze the diagnostic category of gender dysphoria contained in the latest edition of the Diagnostic and Statistical Manual of Mental Disorders. In particular, this study focuses on the relationship between the implicit and explicit assumptions related to expressions of gender non-conformity, which sustain the language and criteria characterizing the Manual, and the possible implications for people's narratives of transition. In order to achieve this objective, two main research methods were used: a historical reconstruction of the diagnostic category across the different versions of the Manual, and a content analysis of that category in the current version. The historical analysis shows that, in the medical and psychiatric fields, gender non-conformity has predominantly been explained from naturalistic perspectives, naming it 'transsexualism' and placing it in the category of gender identity disorder. Currently, experiences judged pathological are represented by gender dysphoria, described in the DSM-5 as the distress that may accompany the incongruence between one's experienced or expressed gender and one's assigned gender, with the specification that there must be 'evidence' of this. Implicit theories about the gender binary, about the parallelism between gender identity, sex and sexuality, and about mental health and the subject's agency as subordinate to expert knowledge can be found in the process of designating the category. A lack of awareness of the historical, social and political aspects connected to the cultural and normative dimensions underlying these implicit theories can be noticed, and what is given by culture and what is given by a supposed biological or psychological nature are often confused. This reductionist interpretation of gender and its presumed diversities legitimizes the clinician to assume the role of searching for and orienting, in a correctional perspective, the biographical elements that correspond to specific expectations, with no space for other possibilities and identity configurations for people in transition. This research may contribute to the current critical debate about the epistemological foundations of psychodiagnosis, emphasizing its pragmatic effects on individuals and on psychological practice in its wider social context. This work also makes it possible to underline the risks arising from a lack of awareness of the processes of social construction of the diagnostic system and of its essential role in defending the values that hold up the symbolic universe of reference.

Keywords: diagnosis, gender dysphoria, narratives, social constructionism

Procedia PDF Downloads 210
576 Neuroanatomical Specificity in Reporting & Diagnosing Neurolinguistic Disorders: A Functional & Ethical Primer

Authors: Ruairi J. McMillan

Abstract:

Introduction: This critical analysis aims to ascertain how well neuroanatomical aetiologies are communicated within 20 case reports of aphasia. Neuroanatomical visualisations based on dissected brain specimens were produced and combined with white matter tract and vascular taxonomies of function in order to address the most consistently underreported features found within the aphasic case study reports. Together, these approaches are intended to integrate aphasiological knowledge from the past 20 years with aphasiological diagnostics, and to act as prototypal resources for both researchers and clinical professionals. The medico-legal precedent for aphasia diagnostics under Canadian, US and UK case law and the neuroimaging/neurological diagnostics relating to the functional capacity of aphasic patients are discussed in relation to the major findings of the literature analysis, the neuroimaging protocols in clinical use today, and the neuroanatomical aetiologies of different aphasias. Basic Methodology: Literature searches of relevant scientific databases (e.g., OVID Medline) were carried out using search terms such as aphasia case study (year) and stroke induced aphasia case study. A series of 7 diagnostic reporting criteria were formulated, and the resulting case studies were scored out of 7 alongside clinical stroke criteria. In order to focus on the diagnostic assessment of the patient's condition, only the case report proper (not the discussion) was used to quantify results. Statistical testing established whether specific reporting criteria were associated with higher overall scores and potentially inferable increases in quality of reporting. Whether criteria scores were associated with an unclear/adjusted diagnosis was also tested, as was the probability of a given criterion deviating from an expected estimate. Major Findings: The quantitative analysis of neuroanatomically driven diagnostics in case studies of aphasia revealed particularly low scores for the connection of neuroanatomical functions to aphasiological assessment (10%) and for the inclusion of white matter tracts within neuroimaging or assessment diagnostics (30%). Case studies which included clinical mention of white matter tracts within the report itself were distributed among the higher scoring cases, as were case studies which (as clinically indicated) related the affected vascular region to the brain parenchyma of the language network. Concluding Statement: These findings indicate that certain neuroanatomical functions are integrated within the patient report less often than others, despite a precedent for well-integrated neuroanatomical aphasiology also being found among the case studies sampled, and despite these functions being clinically essential in diagnostic neuroimaging and aphasiological assessment. Ultimately, therefore, the integration and specificity of aetiological neuroanatomy may contribute positively to the capacity and autonomy of aphasic patients as well as their clinicians. The integration of a full aetiological neuroanatomy within the reporting of aphasias may improve patient outcomes and sustain autonomy in the event of medico-ethical investigation.
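
The scoring-and-testing step can be sketched as below: each report receives a score out of 7 on the reporting criteria, and reports that do or do not mention white matter tracts are compared on their overall scores with a Mann-Whitney U test. The scores and indicators are made-up placeholders, not the study's data, and the particular test is an assumption.

```python
# Minimal sketch: compare overall reporting scores between case reports that do
# and do not mention white matter tracts. All values below are hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu

# hypothetical total scores (out of 7) for 20 case reports
scores = np.array([2, 3, 5, 4, 6, 2, 3, 5, 4, 3, 6, 2, 4, 5, 3, 4, 2, 6, 5, 3])
# hypothetical indicator: does the report clinically mention white matter tracts?
mentions_tracts = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0,
                            1, 0, 0, 1, 0, 1, 0, 1, 1, 0], dtype=bool)

u, p = mannwhitneyu(scores[mentions_tracts], scores[~mentions_tracts], alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```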

Keywords: aphasia, language network, functional neuroanatomy, aphasiological diagnostics, medico-legal ethics

Procedia PDF Downloads 45
575 Prevalence of Antibiotic Resistant Enterococci in Treated Wastewater Effluent in Durban, South Africa and Characterization of Vancomycin and High-Level Gentamicin-Resistant Strains

Authors: S. H. Gasa, L. Singh, B. Pillay, A. O. Olaniran

Abstract:

Wastewater treatment plants (WWTPs) have been implicated worldwide as a leading reservoir for antibiotic resistant bacteria (ARB), including Enterococcus spp., and antibiotic resistance genes (ARGs). Enterococci are a group of clinically significant bacteria that have gained much attention as a result of their antibiotic resistance. They play a significant role as a principal cause of nosocomial infections and in the dissemination of antimicrobial resistance genes in the environment. The main objective of this study was to ascertain the role of WWTPs in Durban, South Africa as potential reservoirs of antibiotic resistant Enterococci (ARE) and their related ARGs. Furthermore, the antibiogram and resistance gene profile of Enterococci species recovered from treated wastewater effluent and the receiving surface water in Durban were also investigated. Using the membrane filtration technique, Enterococcus selective agar and selected antibiotics, ARE were enumerated in samples (influent, activated sludge, before chlorination and final effluent) collected from two WWTPs, as well as from upstream and downstream of the receiving surface water. Two hundred Enterococcus isolates recovered from the treated effluent and receiving surface water were identified by biochemical and PCR-based methods, and their antibiotic resistance profiles were determined by the Kirby-Bauer disc diffusion assay, while PCR-based assays were used to detect the presence of resistance and virulence genes. A high prevalence of ARE was obtained at both WWTPs, with values reaching a maximum of 40%. The influent and activated sludge samples contained the greatest prevalence of ARE, with lower values observed in the before- and after-chlorination samples. Of the 44 vancomycin and high-level gentamicin-resistant isolates, 11 were identified as E. faecium, 18 as E. faecalis, 4 as E. hirae, while 11 were classified as "other" Enterococcus species. High-level resistance to gentamicin (39%) and vancomycin (61%) was recorded in the species tested. The most commonly detected virulence gene was gelE (44%), followed by asa1 (40%), while cylA and esp were detected in only 2% of the isolates. The most prevalent aminoglycoside resistance genes were aac(6')-Ie-aph(2''), aph(3')-IIIa, and ant(6')-Ia, detected in 43%, 45% and 41% of the isolates, respectively. A positive correlation was observed between resistant phenotypes to high levels of aminoglycosides and the presence of all aminoglycoside resistance genes. Resistance genes for glycopeptides, vanB (37%) and vanC-1 (25%), and macrolides, ermB (11%) and ermC (54%), were detected in the isolates. These results show the need for more efficient wastewater treatment and disposal in order to prevent the release of virulent and antibiotic resistant Enterococcus species and to safeguard public health.

Keywords: antibiogram, enterococci, gentamicin, vancomycin, virulence signatures

Procedia PDF Downloads 196
574 Innovative Technologies: Functional Methods of Dental Research

Authors: Sergey N. Ermoliev, Margarita A. Belousova, Aida D. Goncharenko

Abstract:

Application of a diagnostic complex of highly informative functional methods (electromyography, reodentography, laser Doppler flowmetry, reoperiodontography, vital computer capillaroscopy, optical tissue oximetry, laser fluorescence diagnosis) makes it possible to perform a multifactorial analysis of dental status and to prescribe complex etiopathogenetic treatment. Introduction: A complex of innovative, highly informative and safe functional diagnostic methods is needed to improve the quality of patient treatment through the early detection of stomatologic diseases. The purpose of the present study was to investigate the etiology and pathogenesis of functional disorders identified in pathologies of hard tissue, dental pulp, periodontium, oral mucosa and chewing function, and to create new approaches to the diagnosis of dental diseases. Material and methods: 172 patients were examined. The density of the hard tissues of the teeth and jaw bone was studied by intraoral ultrasonic densitometry (USD). The electromyographic activity of the masticatory muscles was assessed by electromyography (EMG). The functional state of the dental pulp vessels was assessed by reodentography (RDG) and laser Doppler flowmetry (LDF). The reoperiodontography method (RPG) was used to study regional blood flow in the periodontal tissues. The periodontal microcirculatory vasculature was studied by vital computer capillaroscopy (VCC) and laser Doppler flowmetry (LDF). The metabolic level of the mucous membrane was determined by optical tissue oximetry (OTO) and laser fluorescence diagnosis (LFD). Results and discussion: The results obtained revealed changes in the mineral density of the hard tissues of the teeth and jaw bone, in the bioelectric activity of the masticatory muscles, and in regional blood flow and microcirculation in the dental pulp and periodontal tissues. The LDF and OTO methods estimated fluctuations in the saturation level and oxygen transport in the microvasculature of the periodontal tissues. LFD identified changes in the concentration of compounds (nicotinamide, flavins, lipofuscin, porphyrins) involved in metabolic processes. Conclusion: Our preliminary results confirmed the feasibility and safety of the intraoral ultrasound densitometry technique for assessing the density of periodontal bone tissue. Application of the diagnostic complex of the above-mentioned highly informative functional methods makes it possible to perform a multifactorial analysis of dental status and to prescribe complex etiopathogenetic treatment.

Keywords: electromyography (EMG), reodentography (RDG), laser Doppler flowmetry (LDF), reoperiodontography method (RPG), vital computer capillaroscopy (VCC), optical tissue oximetry (OTO), laser fluorescence diagnosis (LFD)

Procedia PDF Downloads 259
573 The Hague Abduction Convention and the Egyptian Position: Strategizing for a Law Reform

Authors: Abdalla Ahmed Abdrabou Emam Eldeib

Abstract:

For more than a century, the Hague Conference has tackled issues in the most challenging areas of private international law, including family law. Its actions in the realm of international child abduction have been remarkable in two ways over the last two decades. First, on October 25, 1980, the Hague Convention on the Civil Aspects of International Child Abduction (the Convention) was promulgated as an unusually inventive and powerful tool. Second, the Convention is rapidly becoming more prominent in the development of international child law. By that time, overseas travel had grown more convenient, and more couples were marrying or travelling across national lines. At the same time, parental separation and divorce had increased, leading to an increase in international child custody battles. The Convention avoids legal quagmires and addresses extra-legal issues well: it restores the child to its place of habitual residence once it is established that the child was unlawfully abducted from that place or was wrongfully retained abroad after an authorized visit. Legal custody disputes are often followed by the child's abduction or unlawful relocation to another country by the non-custodial parent or other persons. If a child's custodial parent lives outside Egypt, the child may be abducted and brought to Egypt, which raises the questions of which law should apply and which legal norms should be followed when hearing individual cases. This study comprehensively evaluates the relevant Hague Child Abduction Convention and the current situation in Egypt, including which law is applicable to child custody. In addition, this research details and focuses on the position of cross-border parental child abduction in Egypt. Moreover, it examines Islamic law in comparison with the Hague Convention on matters of child custody, discusses the treatment of this matter in Islamic countries in general and in Egypt in particular, and addresses the criticism directed at Egypt regarding the application and implementation of child custody decisions. The research supports this approach with non-doctrinal techniques, including surveys, interviews, and dialogues. An important objective of this research is to examine the factors that contribute to parental child abduction. Family court attorneys and other interested parties serve as the target audience from whom data is collected: a survey questionnaire was developed and sent to the target population in order to collect data for future empirical testing to validate the identified critical factors in parental child abduction. The main finding of this study concerns overcoming the reservations of many Muslim countries about joining the Hague Convention with regard to child custody; likewise, it clarifies the practical implementation problems in cases where a child is abducted by one parent and taken across the country's borders. Finally, this study provides suggestions for reforming the current Egyptian family law to make it an effective and efficient dispute-resolution mechanism and to assess the possibility of joining the Hague Convention.

Keywords: Egyptian family law, Hague Child Abduction Convention, child custody, cross-border parental child abductions in Egypt

Procedia PDF Downloads 49
572 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE), backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
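
The structure of the AGM loop can be illustrated on a toy analogue: the sketch below recovers the initial condition of the 2D heat equation, a parabolic problem with the same reverse-time instability, by gradient descent on a cost functional with a high-order Laplacian regularizer, solved spectrally. It is a simplified stand-in for the adjoint Navier-Stokes machinery, with arbitrary grid size, viscosity and step sizes, not the authors' solver.

```python
# Toy analogue of the adjoint gradient method: recover u0 of the 2D heat
# equation from its evolved state. Spectral forward solve; the gradient of the
# data term uses the (self-adjoint) forward map; a high-order Laplacian
# regularizer damps small-scale noise in u0. Parameters are illustrative.
import numpy as np

n, nu, T = 64, 1e-3, 1.0
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
k2 = k[:, None] ** 2 + k[None, :] ** 2            # squared wavenumber magnitude

def forward(u0, t):
    """Integrate the heat equation forward in time (exact, spectral)."""
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(-nu * k2 * t)).real

def agm(v1, t, n_iter=200, lr=0.5, reg=1e-12, p=2):
    """Find u0 with forward(u0, t) ~ v1 by descending
    J = 0.5*||forward(u0,t) - v1||^2 + 0.5*reg*||(-Lap)^(p/2) u0||^2."""
    u0 = v1.copy()                                               # initial guess
    for _ in range(n_iter):
        misfit = forward(u0, t) - v1
        grad = forward(misfit, t)                                # adjoint of forward map
        grad += reg * np.fft.ifft2(k2 ** p * np.fft.fft2(u0)).real  # regularizer term
        u0 -= lr * grad                                          # gradient-descent update
    return u0

# usage: evolve a smooth random field forward, then try to recover it
rng = np.random.default_rng(0)
u_true = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * np.exp(-0.05 * k2)).real
v1 = forward(u_true, T)
u_rec = agm(v1, T)
print("relative error:", np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true))
```

The same loop structure carries over to the NSE case, with the forward solve replaced by a Navier-Stokes integration, the gradient supplied by the adjoint equations, and the reverse integration split into the segments described above.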

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 203
570 Development of a Polylactic Acid Insert with a Cinnamaldehyde-Betacyclodextrin Complex for Packed Cape Gooseberry (Physalis peruviana L.)

Authors: Gómez S. Jennifer, Méndez V. Camila, Moncayo M. Diana, Vega M. Lizeth

Abstract:

The cape gooseberry is a climacteric fruit, and Colombia is one of its principal exporters in the world. Environmental conditions of temperature and relative humidity decrease the titratable acidity and pH. These conditions and fruit maturation result in fungal proliferation of the Botrytis cinerea disease. Plastic packaging for fresh cape gooseberries protects against mechanical damage but creates an atmosphere suitable for fungal growth. Beta-cyclodextrins are currently used as coatings for the encapsulation of hydrophobic compounds, for example bioactive compounds from essential oils such as cinnamaldehyde, which has a high antimicrobial capacity but is a volatile substance. In this work, the casting method was used to obtain a polylactic acid (PLA) polymer film containing the beta-cyclodextrin-cinnamaldehyde inclusion complex, generating an insert that allows the controlled release of the antifungal substance in packed cape gooseberries in order to decrease contamination by Botrytis cinerea in a latent state during storage. For the encapsulation technique, three ratios of the cinnamaldehyde:beta-cyclodextrin inclusion complex were proposed: (25:75), (40:60), and (50:50). Spectrophotometry, colorimetry in the L*a*b* coordinate space and scanning electron microscopy (SEM) were used for the characterization of the complex. Subsequently, two ratios of Tween and water, (40:60) and (50:50), were used to obtain the polylactic acid (PLA) film. To determine mechanical and physical parameters, colorimetry in the L*a*b* coordinate space, atomic force microscopy and stereoscopy were performed to determine the transparency and flexibility of the film. In both cases, Statgraphics software was used to determine the best ratio in each of the proposed phases: for encapsulation it was (50:50), with an encapsulation efficiency of 65.92%, and for casting the (40:60) ratio gave greater transparency and flexibility, which permitted its incorporation into the polymeric packaging. A release assay was also carried out under ambient temperature conditions to evaluate the concentration of cinnamaldehyde inside the packaging by gas chromatography for three weeks. It was found that the insert provided a controlled release; nevertheless, a higher cinnamaldehyde concentration is needed to reach the minimum inhibitory concentration for the fungus Botrytis cinerea (0.2 g/L). The homogeneity of the cinnamaldehyde gas phase inside the packaging can be improved by considering other insert configurations. This development aims to contribute to emerging food preservation technologies based on the controlled release of antifungals, reducing the deterioration of the physico-chemical and sensory properties of the fruit caused by contamination with microorganisms in the postharvest stage.

Keywords: antifungal, casting, encapsulation, postharvest

Procedia PDF Downloads 55
570 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing

Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares

Abstract:

In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes (3D printing). These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of a linear delta robot for additive manufacturing. Firstly, a methodology based on structured product-development processes, with phases of informational design, conceptual design and detailed design, is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to elicit a set of technical requirements and to define the form, functions and features of the robot. b) In the conceptual design phase, functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model over a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem; it uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population (point cloud) provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented; the dimensional synthesis made it possible to design the mechanism of the delta robot as a function of the prescribed workspace. Finally, the implementation of the robotic platform, based on the developed linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
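
The dimensional-synthesis step can be sketched as a simple genetic algorithm that searches for the smallest linear-delta dimensions (frame radius R, arm length L, rail height H) keeping a prescribed cylindrical workspace fully reachable. The kinematic model, parameter bounds, penalty and GA settings below are illustrative assumptions, not the authors' implementation.

```python
# GA sketch for linear-delta dimensional synthesis over a cylindrical workspace.
# All numeric values (workspace, bounds, GA settings) are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
towers = np.array([[np.cos(a), np.sin(a)] for a in np.deg2rad([90, 210, 330])])

# cylindrical workspace to be covered: radius 0.10 m, height 0.20 m (assumed)
r_ws, h_ws, n_pts = 0.10, 0.20, 400
phi = rng.uniform(0, 2 * np.pi, n_pts)
rad = r_ws * np.sqrt(rng.uniform(0, 1, n_pts))
pts = np.column_stack([rad * np.cos(phi), rad * np.sin(phi), rng.uniform(0, h_ws, n_pts)])

def coverage(params, pts):
    """Fraction of workspace points reachable for dimensions (R, L, H)."""
    R, L, H = params
    ok = np.ones(len(pts), dtype=bool)
    for t in towers:
        dx = pts[:, 0] - R * t[0]
        dy = pts[:, 1] - R * t[1]
        d2 = L**2 - dx**2 - dy**2                       # squared vertical arm projection
        carriage = pts[:, 2] + np.sqrt(np.clip(d2, 0, None))   # carriage height on the rail
        ok &= (d2 > 0) & (carriage >= 0) & (carriage <= H)
    return ok.mean()

def fitness(params):
    cov = coverage(params, pts)
    size = params.sum()                                 # proxy for footprint / material
    return -size if cov == 1.0 else -10.0 - (1.0 - cov)  # infeasible designs heavily penalized

lo, hi = np.array([0.10, 0.10, 0.20]), np.array([0.40, 0.50, 0.60])   # assumed bounds (m)
pop = rng.uniform(lo, hi, size=(60, 3))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]        # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20, size=2)]
        child = 0.5 * (a + b) + rng.normal(0, 0.01, 3)  # blend crossover + Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("R, L, H (m):", np.round(best, 3), "coverage:", coverage(best, pts))
```

In the full formulation the objective would also screen out configurations close to kinematic singularities and interference between bodies, as described above.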

Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms

Procedia PDF Downloads 169
569 The Relationship between 21st Century Digital Skills and the Intention to Start a Digital Entrepreneurship

Authors: Kathrin F. Schneider, Luis Xavier Unda Galarza

Abstract:

In our modern world, few areas are not permeated by digitalization: we use digital tools for work, study, entertainment, and daily life. Since technology changes rapidly, skills must adapt to the new reality, which gives a dynamic dimension to the set of skills necessary for people's academic, professional, and personal success. The concept of 21st-century digital skills, which includes collaboration, communication, digital literacy, citizenship, problem-solving, critical thinking, interpersonal skills, creativity, and productivity, has been widely discussed in the literature. Digital transformation has opened many economic opportunities for entrepreneurs in the development of their products, in financing possibilities, and in product distribution. One of the biggest advantages is the reduction in cost for the entrepreneur, which has opened doors not only for the entrepreneur or the entrepreneurial team but also for corporations through intrapreneurship. The development of students' general literacy and their digital competencies is crucial for improving the effectiveness and efficiency of the learning process, as well as for students' adaptation to a constantly changing labor market. The digital economy allows a substantial increase in the supply of conventional and also innovative products; this is mainly achieved through five ways of reducing costs in the digital economy: search, replication, transport, tracking, and verification costs. Digital entrepreneurship worldwide benefits from these achievements, and the use of digital technologies has led to an expansion and democratization of entrepreneurship. The digital transformation of recent years is more challenging for developing countries, as they have fewer resources available to carry it out while providing all the necessary support in terms of cybersecurity and educating their people. The degree of digitization (use of digital technology) in a country and the levels of digital literacy of its people often depend on the economic level and situation of the country. Telefónica's Digital Life Index (TIDL) scores are strongly correlated with country wealth, reflecting the greater resources that richer countries can devote to promoting "Digital Life". According to the Digitization Index, Ecuador is in the group of "emerging countries", while Chile, Colombia, Brazil, Argentina, and Uruguay are in the group of "countries in transition". According to Herrera Espinoza et al. (2022), there are startups and digital ventures in Ecuador, especially in certain niches, but many of these ventures do not survive beyond six months because they arise out of necessity rather than opportunity. However, there is a lack of relevant research, especially empirical research, to form a clearer picture. Using a self-report questionnaire, the digital skills of students at a private Ecuadorian university will be measured according to the six 21st-century skills identified. The results will be tested against the intention to start a digital venture, measured using the theory of planned behavior (TPB). The main hypothesis is that high digital competence is positively correlated with the intention to start a digital entrepreneurship.
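
The planned hypothesis test can be sketched as a correlation between a summary digital-skills score and a TPB-based intention score. The file, column names and scoring below are hypothetical placeholders for the questionnaire data still to be collected.

```python
# Minimal sketch: correlate a digital-skills composite with entrepreneurial
# intention (TPB). File and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_responses.csv")                 # hypothetical questionnaire export

skill_items = ["collaboration", "communication", "digital_literacy",
               "critical_thinking", "creativity", "problem_solving"]
df["digital_skills"] = df[skill_items].mean(axis=1)      # mean of Likert items
df["intention"] = df[["int_1", "int_2", "int_3"]].mean(axis=1)   # TPB intention scale

r, p = pearsonr(df["digital_skills"], df["intention"])
print(f"r = {r:.2f}, p = {p:.4f}")                       # H1: r > 0
```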

Keywords: new literacies, digital transformation, 21st century skills, theory of planned behavior, digital entrepreneurship

Procedia PDF Downloads 79
568 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of 3 Python scripts that can be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results, but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings of neuronal cell bodies in neuronal cell cultures. Our next goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
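
A minimal sketch of the three steps the pipeline automates is shown below: thresholding an average projection, extracting contours with OpenCV, and computing a mean-fluorescence trace per contour. The TIFF file name, threshold and area cut-off are placeholders, an 8-bit single-channel recording is assumed, and the use of tifffile for loading is an assumption; this is not the pipeline's actual code.

```python
# Sketch: segment cell bodies from an average projection, then extract one
# mean-fluorescence trace per contour. File name and parameters are hypothetical.
import cv2
import numpy as np
import tifffile

stack = tifffile.imread("calcium_recording.tif")       # (frames, height, width), hypothetical file
mean_img = stack.mean(axis=0).astype(np.uint8)         # average projection (assumes 8-bit data)

_, binary = cv2.threshold(mean_img, 60, 255, cv2.THRESH_BINARY)       # binary thresholding
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = [c for c in contours if cv2.contourArea(c) > 20]           # drop non-cell specks

traces = []
for c in contours:
    mask = np.zeros(mean_img.shape, dtype=np.uint8)
    cv2.drawContours(mask, [c], -1, 255, thickness=-1)                # filled ROI mask
    traces.append(stack[:, mask > 0].mean(axis=1))                    # mean fluorescence per frame

print(f"{len(contours)} cell bodies; trace length {len(traces[0]) if traces else 0} frames")
```

Transient events would then be flagged on each trace, e.g. as excursions above a baseline-plus-noise threshold, before computing network metrics.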

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 68
567 The Real Ambassador: How Hip Hop Culture Connects and Educates across Borders

Authors: Frederick Gooding

Abstract:

This paper explores how many Hip Hop artists have intentionally and strategically invoked the sustainability principles of people, planet and profits as a means to create community, and to compensate for and cope with structural inequalities in society. These themes not only create community within one's country; the powerful display and demonstration of these narratives create community on a global plane. Listeners of Hip Hop are therefore able to learn about the political events occurring in another country free of censure, and to establish solidarity worldwide. Hip Hop can therefore be an ingenious tool to create self-worth, recycle positive imagery, and serve as a defense mechanism against the institutional and structural forces that conspire to make an upward economic and social trajectory difficult, if not impossible, for many people of color all across the world. Although the birthplace of Hip Hop, the United States of America, is still predominantly White, it has undoubtedly grown more diverse at a breathtaking pace in recent decades. Yet whether American mainstream media will fully reflect America's newfound diversity remains to be seen. As it stands, American mainstream media is seen and enjoyed by diverse audiences not just in America, but all over the world. Thus, it is imperative that further inquiry is conducted into one of the fastest growing genres within one of the world's largest and most influential media industries, generating upwards of $10 billion annually. More importantly, hip hop, its music and its associated culture collectively represent a shared social experience of significant value. They are important tools used both to inform and to influence economic, social and political identity. Conversely, principles of American exceptionalism often prioritize American political issues over those of others, thereby rendering a myopic political view within the mainstream. This paper will therefore engage in an international contextualization of the global phenomenon entitled Hip Hop by exploring the creative genius and marketing appeal of Hip Hop within the global context of information technology, political expression and social change, in addition to taking a critical look at historically racialized imagery within mainstream media. Many artists the world over have been able to freely express themselves and connect with broader communities outside of their own borders, all through the sound practice of the craft of Hip Hop. An empirical understanding of political, social and economic forces within the United States will serve as a bridge for identifying and analyzing transnational themes of commonality for typically marginalized or disaffected communities facing similar struggles for survival and respect. The sharing of commonalities among marginalized cultures not only serves as a source of education outside of typically myopic, mainstream sources, but also creates transnational bonds globally, to the extent that practicing artists resonate with many of the original themes of (now mostly underground) Hip Hop as with many of the African American artists responsible for creating and fostering Hip Hop's powerful outlet of expression. Hip Hop's power of connectivity and culture-sharing transnationally across borders provides a key source of education to be taken seriously by academics.

Keywords: culture, education, global, hip hop, mainstream music, transnational

Procedia PDF Downloads 81
566 Formation of Science Literacy Based on Indigenous Science Mbaru Niang Manggarai

Authors: Yuliana Wahyu, Ambros Leonangung Edu

Abstract:

The learning praxis proposed by the 2013 Curriculum (K-13) is no longer school-oriented as a supply-driven provider, but is now demand-driven. This vision is connected with the Jokowi-Kalla Nawacita program to create a competitive nation in the global era. Competition is a social fact that must be faced; the curriculum therefore designs a process to form innovators and entrepreneurs. To achieve this goal, K-13 implements character education, which aims at creating innovators and entrepreneurs from an early age (primary school). One part of strengthening it is the formation of literacies (reading, numeracy, science, ICT, finance, and culture); thus, science literacy is an integral part of character education. The above outputs are only formed through innovative processes in intra-curricular (blended learning), co-curricular (hands-on learning) and extra-curricular (personalized learning) activities. Unlike previous curricula, in which children crammed theories and the intellectual process dominated, the new approach makes natural, social, and cultural phenomena the learning sources. For example, Science in primary schools places Biology as the platform and treats natural, social, and cultural phenomena as a learning field, so that students can learn, discover, and solve concrete problems, and see the prospects of development and application in their everyday lives. Science education is not only about collecting facts or natural phenomena but also about methods and scientific attitudes. In turn, Science will form science literacy. Science literacy comprises critical, creative, logical, and initiative competences in responding to issues of culture, science and technology. This is linked with the nature of science, which includes hands-on and minds-on dimensions. To sustain the effectiveness of science learning, K-13 opens a new way of viewing a contextual learning model in which facts or natural phenomena are drawn closer to the child's learning environment to be studied and analyzed scientifically. Thus, the topics of elementary science discussion are the practical and contextual things that students encounter. This research sets out to contextualize Science in primary schools in Manggarai, NTT, by placing local wisdom as a learning source and medium to form science literacy. Explicitly, this study uncovers the concepts of science and mathematics in the Mbaru Niang, a potential that the centralistic-theoretical mainstream curriculum has so far overlooked. In fact, the traditional Manggarai community stores and passes down a great deal of indigenous science and mathematics: the traditional house structures are full of science and mathematics knowledge, and every detail has a style, a sound and mathematical symbols. Learning this, students are able to collaborate and synergize the content and learning resources in their learning activities. This is constructivist contextual learning that will be applied in meaningful learning. Meaningful learning allows students to learn by doing; students then connect topics to the context, and science literacy is constructed from their factual experiences. The research will be conducted in Manggarai through observation, interviews, and literature study.

Keywords: indigenous science, Mbaru Niang, science literacy, science

Procedia PDF Downloads 191
565 Early Biological Effects in Schoolchildren Living in an Area of Salento (Italy) with High Incidence of Chronic Respiratory Diseases: The IMP.AIR. Study

Authors: Alessandra Panico, Francesco Bagordo, Tiziana Grassi, Adele Idolo, Marcello Guido, Francesca Serio, Mattia De Giorgi, Antonella De Donno

Abstract:

In the Province of Lecce (Southeastern Italy), an area with an unusually high incidence of chronic respiratory diseases, including lung cancer, was recently identified. The causes of this health emergency are still not entirely clear. In order to determine the risk profile of children living in five municipalities included in this area, an epidemiological-molecular study was performed in the years 2014-2016: the IMP.AIR. (Impact of air quality on health of residents in the Municipalities of Sternatia, Galatina, Cutrofiano, Sogliano Cavour and Soleto) study. 122 children aged 6-8 years attending primary school in the study area were enrolled to evaluate the frequency of micronuclei (MNs) in their buccal exfoliated cells. The samples were collected in May 2015 by rubbing the oral mucosa with a soft-bristle disposable toothbrush. At the same time, a validated questionnaire was administered to parents to obtain information about the health, lifestyle and eating habits of the children. In addition, information on airborne pollutants routinely detected by the Regional Environmental Agency (ARPA Puglia) in the study area was acquired. A multivariate analysis was performed to detect any significant association between the frequency of MNs (dependent variable) and behavioral factors (independent variables). The presence of MNs was highlighted in the buccal exfoliated cells of about 42% of the recruited children, with a mean frequency of 0.49 MN/1000 cells, greater than in other areas of Salento. The survey on individual characteristics and lifestyles showed that one in three children was overweight and that most of them had unhealthy eating habits, with frequent consumption of foods considered 'risky'. Moreover, many parents (40% of fathers and 12% of mothers) were smokers, and about 20% of them admitted to smoking in the house where the children lived. Information regarding atmospheric contaminants was poor: of the few substances routinely detected by the only monitoring station located in the study area (PM2.5, SO2, NO2, CO, O3), only ozone showed high concentrations, exceeding the limits set by the legislation 67 times in 2015. The study showed that the level of early biological effect markers in the children was not negligible. This critical condition could be related to some individual factors and lifestyles, such as overweight, unhealthy eating habits and exposure to passive smoking. At present, no relationship with airborne pollutants can be established due to the lack of information on many substances. Therefore, it would be advisable to modify incorrect behaviors and to intensify the monitoring of airborne pollutants (e.g., including detection of PM10, heavy metals, polycyclic aromatic hydrocarbons, and benzene), given the epidemiology of chronic respiratory diseases registered in this area.
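
The multivariate analysis can be sketched as a Poisson regression of micronucleus counts (offset by the number of cells scored) on the behavioural factors from the questionnaire. The file and variable names are hypothetical placeholders, not the study's dataset, and the Poisson specification is one reasonable choice for count outcomes.

```python
# Sketch: count regression of MN frequency on behavioural factors.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.read_csv("impair_children.csv")        # hypothetical: one row per child

model = smf.glm(
    "mn_count ~ overweight + risky_foods + passive_smoking",
    data=data,
    family=sm.families.Poisson(),
    offset=np.log(data["cells_scored"]),         # e.g. number of buccal cells scored per child
).fit()
print(model.summary())
```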

Keywords: chronic respiratory diseases, environmental pollution, lifestyle, micronuclei

Procedia PDF Downloads 184
564 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers

Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy

Abstract:

In urban planning, an increasing number of cities require wind analyses to verify the comfort of public spaces and the areas around buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the site of analysis. The approximate input data on the project surroundings produce imprecise results for this type of analysis; they can only be used to obtain the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, they generate geo-localized data on wind such as speed, temperature and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by Wi-Fi with central equipment, which shares data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site that can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as the STAR-CCM+ software) and then the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers which were deployed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows considering different ways of capturing wind energy. The objective of this approach is to categorize the different types of wind in urban areas. This, particularly the identification of the minimum and maximum wind spectrum, helps define the choice and performance of wind energy capturing devices that could be implanted there, taking into account the location on the roof of a building, the type of wind, the height of the device in relation to the roof levels, and the potential nuisances generated. The method allows identifying the characteristics of wind turbines in order to maximize their performance on an urban site with turbulent wind.
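
As an illustration only, the sketch below aggregates geo-localized wind records of the kind such an anemometer array might upload, computing per-site speed statistics and a minimum/maximum wind spectrum. The CSV layout (site, lat, lon, speed_ms, timestamp) is a hypothetical format, not the actual data schema used in the Grand Paris survey.

    import pandas as pd

    # Hypothetical export of wind records pushed by the connected anemometers.
    records = pd.read_csv("wind_records.csv", parse_dates=["timestamp"])
    # expected columns: site, lat, lon, speed_ms, timestamp

    # Per-site wind spectrum: min, max, mean and 95th percentile of wind speed.
    spectrum = (
        records.groupby(["site", "lat", "lon"])["speed_ms"]
        .agg(min_speed="min", max_speed="max", mean_speed="mean",
             p95_speed=lambda s: s.quantile(0.95))
        .reset_index()
    )
    print(spectrum)  # one row per measurement point, ready to map as a wind cartography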

Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology

Procedia PDF Downloads 80
563 The Relationship between Basic Human Needs and Opportunity Based on Social Progress Index

Authors: Ebru Ozgur Guler, Huseyin Guler, Sera Sanli

Abstract:

The Social Progress Index (SPI), whose foundations were laid at the World Economic Forum, aims to form a systematic basis for guiding strategy for inclusive growth, which requires achieving both economic and social progress. This research determines the relations between the “Basic Human Needs” (BHN) dimension (comprising the four variables ‘Nutrition and Basic Medical Care’, ‘Water and Sanitation’, ‘Shelter’ and ‘Personal Safety’) and the “Opportunity” (OPT) dimension (composed of the ‘Personal Rights’, ‘Personal Freedom and Choice’, ‘Tolerance and Inclusion’, and ‘Access to Advanced Education’ components) of the 2016 SPI for the 138 countries listed on the website of the Social Progress Imperative, by carrying out canonical correlation analysis (CCA), a data reduction technique that maximizes the correlation between two variable sets. In the interpretation of results, the first pair of canonical variates, corresponding to the highest canonical correlation, has been taken into account. The first canonical correlation coefficient has been found to be 0.880, indicating a strong relationship between the BHN and OPT variable sets. Wilks’ Lambda statistic has revealed that the overall effect of 0.809 for the full model is large enough to be statistically significant (p-value of 0.000). According to the standardized canonical coefficients, the largest contribution to the BHN set of variables comes from the ‘shelter’ variable, while the most effective variable in the OPT set is ‘access to advanced education’. Findings based on canonical loadings confirm these results with respect to the contributions to the first canonical variates. When canonical cross loadings (structure coefficients) are examined, the largest contributions to the first pair of canonical variates are provided by the ‘shelter’ and ‘access to advanced education’ variables. Since the signs of the structure coefficients are negative for all variables, all OPT variables are positively related to all BHN variables. When canonical communality coefficients, which are the sums of the squares of the structure coefficients across all interpretable functions, are taken as the basis, the ‘personal rights’ and ‘tolerance and inclusion’ variables can be said not to be useful in the model, with coefficients of 0.318721 and 0.341722 respectively. On the other hand, while the redundancy index for the BHN set has been found to be 0.615, the OPT set has a lower redundancy index of 0.475; high redundancy implies high predictability. The proportion of the total variation in the BHN set explained by all of the opposite canonical variates has been calculated as 63%, and the proportion of the total variation in the OPT set explained by all of the canonical variates in the BHN set has been determined as 50.4%, a large part of which belongs to the first pair. The results suggest that there is a high and statistically significant relationship between BHN and OPT, accounted for mainly by ‘shelter’ and ‘access to advanced education’.
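
A minimal sketch of the first-pair canonical correlation computation is given below, assuming the 2016 SPI component scores have been downloaded into a CSV with one row per country. The column names are hypothetical, and scikit-learn's CCA is used in place of whatever statistical package the authors employed.

    import numpy as np
    import pandas as pd
    from sklearn.cross_decomposition import CCA

    spi = pd.read_csv("spi_2016.csv")  # placeholder: one row per country

    # Two variable sets as described in the abstract (hypothetical column names).
    bhn = spi[["nutrition_basic_medical", "water_sanitation", "shelter", "personal_safety"]]
    opt = spi[["personal_rights", "personal_freedom_choice",
               "tolerance_inclusion", "access_advanced_education"]]

    # Fit CCA and extract the first pair of canonical variates.
    cca = CCA(n_components=1)
    bhn_scores, opt_scores = cca.fit_transform(bhn, opt)

    # First canonical correlation (reported as 0.880 in the study).
    r1 = np.corrcoef(bhn_scores[:, 0], opt_scores[:, 0])[0, 1]
    print(f"First canonical correlation: {r1:.3f}")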

Keywords: canonical communality coefficient, canonical correlation analysis, redundancy index, social progress index

Procedia PDF Downloads 201
562 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System

Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar

Abstract:

Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull, and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction from the trailing edge. A jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept was originated by the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out in the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibits clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°. The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows, over-predicting surface pressures and lift coefficient for fully attached flow cases. Work must continue on finding an all-encompassing modelling approach that predicts surface pressures well for all combinations of jet injection momentum and AOA.
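
The abstract does not state the normalisation used for the surface pressure comparison; the sketch below assumes the conventional pressure coefficient Cp = (p - p_inf) / (0.5 * rho * U_inf^2) and the usual blowing-momentum definition of the jet momentum coefficient. All numerical inputs are placeholders rather than values from the de Havilland tests.

    import numpy as np

    def pressure_coefficient(p, p_inf, rho, u_inf):
        """Normalise surface tap pressures by the freestream dynamic pressure."""
        return (p - p_inf) / (0.5 * rho * u_inf**2)

    def jet_momentum_coefficient(m_dot, v_jet, rho, u_inf, ref_area):
        """Cmu = (m_dot * V_jet) / (q_inf * S), a common blowing-momentum definition."""
        return (m_dot * v_jet) / (0.5 * rho * u_inf**2 * ref_area)

    # Placeholder comparison of experimental and CFD tap pressures at matching positions.
    p_exp = np.array([99000.0, 98500.0, 98800.0])   # Pa, hypothetical wind-tunnel taps
    p_cfd = np.array([99050.0, 98470.0, 98790.0])   # Pa, hypothetical RANS predictions
    cp_exp = pressure_coefficient(p_exp, p_inf=101325.0, rho=1.225, u_inf=20.0)
    cp_cfd = pressure_coefficient(p_cfd, p_inf=101325.0, rho=1.225, u_inf=20.0)
    print("RMS Cp difference:", np.sqrt(np.mean((cp_exp - cp_cfd) ** 2)))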

Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel

Procedia PDF Downloads 119
561 Mechanical Properties of Diamond Reinforced Ni Nanocomposite Coatings Made by Co-Electrodeposition with Glycine as Additive

Authors: Yanheng Zhang, Lu Feng, Yilan Kang, Donghui Fu, Qian Zhang, Qiu Li, Wei Qiu

Abstract:

Diamond-reinforced Ni matrix composites have been widely applied in engineering for coating large-area structural parts owing to their high hardness, good wear resistance and corrosion resistance compared with pure nickel. The mechanical properties of Ni-diamond composite coatings can be enhanced by a high incorporation and uniform distribution of diamond particles in the nickel matrix, while the distribution of the particles is affected by the electrodeposition process parameters, especially the additives in the plating bath. Glycine has been utilized as an organic additive during the preparation of pure nickel coatings, where it can effectively increase the coating hardness. Nevertheless, to the authors' best knowledge, no research on the effects of glycine on Ni-diamond co-deposition has been reported. In this work, diamond-reinforced Ni nanocomposite coatings were fabricated by a co-electrodeposition technique from a modified Watts-type bath in the presence of glycine. After preparation, the morphology of the composite coatings was observed by SEM combined with energy-dispersive X-ray spectroscopy, and the diamond incorporation was analyzed. The surface morphology and roughness were obtained with a three-dimensional profile instrument. 3D Debye rings formed by XRD were analyzed to characterize the nickel grain size and orientation in the coatings. The average coating thickness was measured with a digital micrometer to deduce the deposition rate. The microhardness was tested with an automatic microhardness tester. The friction coefficient and wear volume were measured with a reciprocating wear tester to characterize the coating wear resistance and cutting performance. The experimental results confirmed that the presence of glycine effectively improved the surface morphology and roughness of the composite coatings. By optimizing the glycine concentration, the incorporation of diamond particles was increased, while the nickel grain size decreased with increasing glycine. The hardness of the composite coatings increased as the glycine concentration increased. The friction and wear properties were evaluated as the glycine concentration was optimized, showing a decrease in the wear volume. The wear resistance of the composite coatings increased as the glycine content was increased to an optimum value, beyond which the wear resistance decreased. Glycine complexation contributed to the nickel grain refinement and improved the diamond dispersion in the coatings, both of which made a positive contribution to the amount and uniformity of embedded diamond particles, thus enhancing the microhardness, reducing the friction coefficient, and increasing the wear resistance of the composite coatings. Therefore, glycine can be used as an additive during the co-deposition process to improve the mechanical properties of protective coatings.

Keywords: co-electrodeposition, glycine, mechanical properties, Ni-diamond nanocomposite coatings

Procedia PDF Downloads 106
560 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports

Authors: Jonardan Koner, Avinash Purandare

Abstract:

In today’s globalized world, international business is a key driver of a country’s growth. Strategic areas that support a country’s international business include ports, road networks, and rail networks. India’s international business is booming in both exports and imports. Ports play a central role in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, which is a major asset, as it has allowed the development of a large number of major and minor ports that contribute to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To assess the comparative strength of Indian ports against similar South-East Asian ports, the study considers the objectives of (i) identifying the key parameters of an international mega container port, (ii) comparing the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo Ports) according to the users of the ports, and (iii) measuring and comparing the growth of the selected five container ports’ throughput over time. The study is based on both primary and secondary databases. A linear time trend analysis is done to show the trend in the quantum of exports, imports and total goods/services handled by individual ports over the years. A comparative trend analysis is done for the cargo traffic handled by the selected five ports in terms of tonnage (weight) and number of containers (TEUs), and between containerized and non-containerized cargo traffic at the five selected ports. The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the selected five ports, consolidated comparative line charts and bar charts of factor ratings for the selected five ports, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of the selected 34 explanatory variables on the ‘Overall Performance of the Port’ for each of the selected five ports. The research outcome is of high significance to the stakeholders of Indian container handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, which are the competing ports in the neighbouring region. The study has analysed the feedback ratings for the selected 35 factors regarding physical infrastructure and services rendered to the port users. This feedback would provide valuable data for carrying out improvements in the facilities provided to port users, helping them to carry out their work in a more efficient manner.
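
A minimal sketch of the linear time trend and forecast step described above is given below, using placeholder annual TEU figures (not the actual throughput data of JNPT or Chennai) and a simple least-squares fit extrapolated to 2030.

    import numpy as np

    # Placeholder annual container throughput (million TEUs) for one illustrative port.
    years = np.arange(2010, 2020)
    teus = np.array([4.0, 4.1, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.8, 5.0])

    # Linear time trend: TEU = slope * year + intercept, fitted by ordinary least squares.
    slope, intercept = np.polyfit(years, teus, deg=1)

    # Extrapolate the trend to 2030 to indicate the capacity that may be required.
    forecast_2030 = slope * 2030 + intercept
    print(f"Trend: {slope:.3f} million TEUs/year; 2030 forecast: {forecast_2030:.2f} million TEUs")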

Keywords: throughput, twenty-foot equivalent units, TEUs, cargo traffic, shipping lines, freight forwarders

Procedia PDF Downloads 115
559 Is Materiality Determination the Key to Integrating Corporate Sustainability and Maximising Value?

Authors: Ruth Hegarty, Noel Connaughton

Abstract:

Sustainability reporting has become a priority for many global multinational companies. This is associated with ever-increasing expectations from key stakeholders for companies to be transparent about their strategies, activities and management with regard to sustainability issues. The Global Reporting Initiative (GRI) encourages reporters to provide information only on the issues that are really critical in order to achieve the organisation’s goals for sustainability and manage its impact on environment and society. A key challenge for most reporting organisations is how to identify relevant issues for sustainability reporting and prioritise those material issues in accordance with company and stakeholder needs. A recent study indicates that most of the largest companies listed on the world’s stock exchanges are failing to provide data on key sustainability indicators such as employee turnover, energy, greenhouse gas emissions (GHGs), injury rate, pay equity, waste and water. This paper takes an in-depth look at the approaches used by a select number of international corporate sustainability leaders to identify key sustainability issues. The research methodology involves a detailed analysis of the sustainability report content of up to 50 companies listed on the 2014 Dow Jones Sustainability Indices (DJSI). The most recent sustainability report content found on the GRI Sustainability Disclosure Database is then compared with 91 GRI Specific Standard Disclosures and a small number of GRI Standard Disclosures. Preliminary research indicates significant gaps between the information disclosed in corporate sustainability reports and the indicator content specified in the GRI Content Index. The following outlines some of the key findings to date: Most companies made a partial disclosure with regard to the Economic indicators of climate change risks and infrastructure investments, but did not focus on the associated negative impacts. The top Environmental indicators disclosed were energy consumption and reductions, GHG emissions, water withdrawals, waste and compliance. The lowest rates of indicator disclosure included biodiversity, water discharge, mitigation of environmental impacts of products and services, transport, environmental investments, screening of new suppliers and supply chain impacts. The top Social indicators disclosed were new employee hires, rates of injury, freedom of association in operations, child labour and forced labour. Lower disclosure rates were reported for employee training, composition of governance bodies and employees, political contributions, corruption and fines for non-compliance. Reporting on most other Social indicators was found to be poor. In addition, most companies give only a brief explanation of how material issues are defined, identified and ranked. Data on the identification of key stakeholders and the degree and nature of engagement for determining issues and their weightings is also lacking. Generally, little to no data is provided on the algorithms used to score an issue. The research indicates that most companies lack a rigorous and thorough methodology to systematically determine the material issues of sustainability reporting in accordance with company and stakeholder needs.
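
As a purely illustrative sketch of the gap analysis described above, the snippet below computes per-company disclosure coverage against a list of GRI Specific Standard Disclosure codes. Both the indicator list and the coded report data are hypothetical placeholders, assuming one row per (company, indicator) pair coded as full, partial or none.

    import pandas as pd

    # Hypothetical inputs: the 91 GRI Specific Standard Disclosure codes, and a table of
    # (company, code, level) rows, one per indicator per company.
    gri_indicators = pd.read_csv("gri_specific_disclosures.csv")["code"]   # 91 codes
    reported = pd.read_csv("coded_reports.csv")                            # company, code, level

    coverage = (
        reported[reported["code"].isin(gri_indicators)]
        .assign(disclosed=lambda d: d["level"].isin(["full", "partial"]))
        .groupby("company")["disclosed"]
        .mean()        # share of indicators with at least partial disclosure per company
        .sort_values(ascending=False)
    )
    print(coverage.head(10))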

Keywords: identification of key stakeholders, material issues, sustainability reporting, transparency

Procedia PDF Downloads 282
558 Determinants of Profit Efficiency among Poultry Egg Farmers in Ondo State, Nigeria: A Stochastic Profit Function Approach

Authors: Olufunke Olufunmilayo Ilemobayo, Barakat. O Abdulazeez

Abstract:

Profit making among poultry egg farmers has been a challenge to the efficient distribution of scarce farm resources over the years, due mainly to a low capital base, inefficient management, technical inefficiency and economic inefficiency; thus poultry egg production has become an underperforming enterprise characterised by low profit margins. Previous studies focus mainly on broiler production and its efficiency; however, a paucity of information exists on profit efficiency in the study area. Hence, the determinants of profit efficiency among poultry egg farmers in Ondo State, Nigeria were investigated. A purposive sampling technique was used to obtain primary data from poultry egg farmers in the Owo and Akure local government areas of Ondo State through a well-structured questionnaire. Socio-economic characteristics such as age, gender, educational level, marital status, household size, access to credit and extension contact, together with input and output data such as flock size, cost of feeders and drinkers, cost of feed, cost of labour, cost of drugs and medications, cost of energy, price of a crate of table eggs and price of spent layers, were the variables used in the study. Data were analysed using descriptive statistics, budgeting analysis, and a stochastic profit function/inefficiency model. The descriptive statistics show that 52 per cent of the poultry farmers were between 31-40 years old, 62 per cent were male, 90 per cent had tertiary education, 66 per cent were primarily poultry farmers, 78 per cent were original poultry farm owners and 55 per cent had more than 5 years’ work experience. Descriptive statistics on costs and returns indicated that 64 per cent of the returns came from sales of eggs, while the remaining 36 per cent came from sales of spent layers. The cost of feeding took the highest proportion of the cost of production (69 per cent) and the cost of medication the lowest (7 per cent). A positive gross margin of ₦5,518,869.76, a net farm income of ₦5,500,446.82 and a net return on investment of 0.28 indicated that poultry egg production is profitable. Equipment cost (22.757), feeding cost (18.3437), labour cost (136.698), flock size (16.209), and drug and medication cost (4.509) were factors affecting profit efficiency, while education (-2.3143), household size (-18.4291), access to credit (-16.027), and experience (-7.277) were determinants of profit efficiency. Education, household size, access to credit and experience in poultry production were the main determinants of profit efficiency of poultry egg production in Ondo State. Other factors affecting profit efficiency were the cost of feeding, cost of labour, flock size, and cost of drugs and medication; they positively and significantly influenced profit efficiency in Ondo State, Nigeria.
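
The abstract names a stochastic profit function/inefficiency model but gives no estimation details. Below is a minimal sketch of how a Cobb-Douglas stochastic profit frontier with a half-normal inefficiency term (an Aigner-Lovell-Schmidt style composed error) could be estimated by maximum likelihood; the column names, file name and frontier specification are hypothetical, not the authors' actual model.

    import numpy as np
    import pandas as pd
    from scipy import optimize, stats

    df = pd.read_csv("poultry_farms.csv")  # placeholder file with farm-level data

    # Cobb-Douglas profit frontier: ln(profit) = b0 + b1*ln(feed) + b2*ln(labour) + b3*ln(flock)
    y = np.log(df["normalised_profit"].values)
    X = np.column_stack([np.ones(len(df)),
                         np.log(df["feed_cost"]),
                         np.log(df["labour_cost"]),
                         np.log(df["flock_size"])])

    def neg_loglik(params):
        """Negative log-likelihood of the composed error e = v - u,
        v ~ N(0, sigma_v^2), u ~ |N(0, sigma_u^2)| (half-normal inefficiency)."""
        k = X.shape[1]
        beta, log_sv, log_su = params[:k], params[k], params[k + 1]
        sv, su = np.exp(log_sv), np.exp(log_su)
        sigma = np.sqrt(sv**2 + su**2)
        lam = su / sv
        eps = y - X @ beta
        ll = (np.log(2) - np.log(sigma)
              + stats.norm.logpdf(eps / sigma)
              + stats.norm.logcdf(-eps * lam / sigma))
        return -ll.sum()

    start = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], 0.0, 0.0]  # OLS starting values
    res = optimize.minimize(neg_loglik, start, method="BFGS")
    print("Frontier coefficients:", res.x[:X.shape[1]])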

Keywords: cost and returns, economic inefficiency, profit margin, technical inefficiency

Procedia PDF Downloads 111
557 From Mimetic to Mnemonic: On the Simultaneous Rise of Language and Religion

Authors: Dmitry Usenco

Abstract:

The greatest paradox about the origin of language is the fact that, while language is always taught by adults to children, it can never be learnt properly unless its acquisition occurs during childhood. The question that naturally arises in that respect is as follows: How could language be taught for the first time by a non-speaker, i.e., by someone who did not have the opportunity to master it as a child? Yet the above paradox will appear less unresolvable if we hypothesise that language was originally introduced not as a means of communication but as a relatively modest training/playing technique that was used to develop the learners’ mimetic skills. Its communicative and expressive properties could have been discovered and exploited later, upon the learners’ reaching their adulthood. The importance of mimesis in children’s development is universally recognised. The most common forms of it are onomatopoeia and mime, which consist in reproducing sounds and imitating shapes/movements of externally observed objects. However, in some cases, neither of these exercises can be adequate to the task. An object, especially an inanimate one, may emit no characteristic sounds, making onomatopoeia problematic. In other cases, it may have no easily reproducible shape, while its movements may depend on the specific way of our interacting with it. On such occasions, onomatopoeia and mime can perhaps be supplemented, or even replaced, by movements of the tongue which can metonymically represent certain aspects of our interaction with the object. This is especially evident with consonants: e.g., a fricative sound can designate the subject’s relatively slow approach to the object or vice versa, while a plosive one can express the relatively abrupt process of grabbing/sticking or parrying/bouncing. From that point of view, a protoword can be regarded as a sophisticated gesture of the tongue but also as a mnemonic sequence that contains encoded instructions about the way to handle the object. When this originally subjective link between the object and its mimetic/mnemonic representation eventually installs itself in the collective mind (however small at first the community might be), the initially nameless object acquires a name, and the first word is created. (Discussing the difference between proper and common names is out of the scope of this paper). In its very beginning, this word has two major applications. It can be used for interhuman communication because it allows us to invoke the presence of a currently absent object. It can also be used for designing, expressing, and memorising our interaction with the object itself. The first usage gives rise to language, the second to religion. By the act of naming, we attach to the object a mental (‘spiritual’) dimension which has an independent existence in our collective mind. By referring to the name (idea/demon/soul) of the object, we perform our first act of spirituality, our first religious observance. This is the beginning of animism, arguably the most ancient form of religion. To conclude: the rise of religion is simultaneous with the emergence of language in human evolution.

Keywords: language, religion, origin, acquisition, childhood, adulthood, play, representation, onomatopoeia, mime, gesture, consonant, simultaneity, spirituality, animism

Procedia PDF Downloads 56
556 Nitrate Photoremoval in Water Using Nanocatalysts Based on Ag/Pt over TiO2

Authors: Ana M. Antolín, Sandra Contreras, Francesc Medina, Didier Tichit

Abstract:

Introduction: High levels of nitrates (> 50 ppm NO3-) in drinking water are potentially risky to human health. In recent years, the trend of nitrate concentration in groundwater has been rising in the EU and other countries. Conventional catalytic nitrate reduction into N2 and H2O leads to some toxic intermediates and by-products, such as NO2-, NH4+, and NOx gases. Alternatively, photocatalytic nitrate removal using solar irradiation and heterogeneous catalysts is a very promising and ecofriendly technique. It has been scarcely studied, and more research on highly efficient catalysts is still needed. In this work, different nanocatalysts supported on Aeroxide Titania P25 (P25) have been prepared, varying: Ag content (0.5-4 wt.%); Pt content (2, 4 wt.%); Pt precursor (H2PtCl6/K2PtCl6); and the impregnation order of both metals. Pt was chosen in order to increase the selectivity to N2 and decrease that to NO2-. Catalysts were characterized by nitrogen physisorption, X-ray diffraction, UV-visible spectroscopy, TEM and X-ray photoelectron spectroscopy. The aim was to determine the influence of the composition and the preparation method of the catalysts on the conversion and selectivity in the nitrate reduction, as well as to reach an overall and better understanding of the process. Nanocatalyst synthesis: For the preparation of the mono- and bimetallic catalysts, drop-wise wetness impregnation of the precursors (AgNO3, H2PtCl6, K2PtCl6) followed by a reduction step (NaBH4) was used to obtain the metal colloids. Results and conclusions: Denitration experiments were performed in a 350 mL PTFE batch reactor under inert standard operational conditions, ultraviolet irradiation (λ=254 nm (UV-C); λ=365 nm (UV-A)), and the presence/absence of hydrogen gas as a reducing agent, contrary to most studies, which use oxalic or formic acid. Samples were analyzed by ionic chromatography. Blank experiments using, respectively, P25 (dark conditions), hydrogen only, and UV irradiation without hydrogen demonstrated a clear influence of the presence of hydrogen on nitrate reduction. They also demonstrated that UV irradiation increased the selectivity to N2. Interestingly, the best activity was obtained under ultraviolet lamps, especially at a wavelength closer to visible light (λ = 365 nm), in the presence of H2. 2% Ag/P25 led to the highest NO3- conversion among the monometallic catalysts; however, nitrite quantities have to be diminished. On the other hand, practically no nitrate conversion was observed with the monometallic catalysts based on Pt/P25. Therefore, the amount of 2% Ag was chosen for the bimetallic catalysts. Regarding the bimetallic catalysts, it is observed that the metal impregnation order, amount and Pt precursor highly affect the results. Higher selectivity to the desirable N2 gas is obtained when Pt is added first, especially with K2PtCl6 as the Pt precursor. This suggests that when Pt is added second, it covers the Ag particles, which are the most active in this reaction. It can be concluded that Ag enables the nitrate reduction step to nitrite, and Pt the nitrite reduction step toward the desirable N2 gas.
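
As a small illustrative aid (not part of the study itself), the snippet below computes nitrate conversion and nitrogen selectivities from ion chromatography concentrations, using one common set of definitions based on a closed nitrogen balance; the exact definitions used by the authors are not given in the abstract, and the numbers shown are hypothetical.

    def denitration_metrics(no3_0, no3_t, no2_t, nh4_t):
        """Conversion and N-selectivities (mol N basis) from IC concentrations in mmol N/L.

        Assumes N2 accounts for the nitrogen not found as NO3-, NO2- or NH4+ (closed N balance).
        """
        converted = no3_0 - no3_t
        conversion = converted / no3_0
        s_no2 = no2_t / converted if converted > 0 else 0.0
        s_nh4 = nh4_t / converted if converted > 0 else 0.0
        s_n2 = max(0.0, 1.0 - s_no2 - s_nh4)
        return conversion, s_no2, s_nh4, s_n2

    # Hypothetical example: 1.0 mmol N/L initial nitrate, measured after irradiation.
    conv, s_no2, s_nh4, s_n2 = denitration_metrics(no3_0=1.0, no3_t=0.40, no2_t=0.12, nh4_t=0.05)
    print(f"X(NO3-)={conv:.0%}, S(NO2-)={s_no2:.0%}, S(NH4+)={s_nh4:.0%}, S(N2)={s_n2:.0%}")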

Keywords: heterogeneous catalysis, hydrogenation, nanocatalyst, nitrate removal, photocatalysis

Procedia PDF Downloads 247
555 Cross-Cultural Conflict Management in Transnational Business Relationships: A Qualitative Study with Top Executives in Chinese, German and Middle Eastern Cases

Authors: Sandra Hartl, Meena Chavan

Abstract:

This paper presents the outcome of a four-year Ph.D. research project on cross-cultural conflict management in transnational business relationships. An important and complex problem, managing conflicts that arise across cultures in business relationships, is investigated, and conflict resolution strategies are identified. The paper particularly focuses on transnational relationships within a Chinese, German and Middle Eastern framework. Unlike many papers on this issue, which have been built on experiments with international MBA students, this research provides real-life cases of cross-cultural conflicts, which are not easy to capture. Its uniqueness lies in the fact that the real case data was gathered by interviewing top executives in management positions in large multinational corporations using a qualitative case study approach. This paper makes a valuable contribution to the theory of cross-cultural conflicts and, despite the sensitivity of the topic, presents real-life business data about breaches of contracts between counterparties engaged in transnational operating organizations. The overarching aim of this research is to identify the degree of significance of the cultural and communication factors embedded in cross-cultural business conflicts. It asks, from a cultural perspective, what factors lead to the conflicts in each of the cases, what the causes are, and what role culture plays in identifying effective strategies for resolving international disputes in an increasingly globalized business world. The results of 20 face-to-face interviews are outlined, which were conducted, recorded, transcribed and then analyzed using the NVIVO qualitative data analysis system. The outcomes make evident that the factors leading to conflicts are broadly organized under seven themes: communication, cultural difference, environmental issues, work structures, knowledge and skills, cultural anxiety and personal characteristics. When evaluating the causes of conflict, it should be noted that these are rather multidimensional. Irrespective of the conflict type (relationship-based, task-based, or due to individual personal differences), relationships are almost always an element of all conflicts. Cultural differences, which are a critical factor in conflicts, result from different cultures placing different levels of importance on relationships. Communication issues, another cause of conflict, also reflect the different relationship styles favored by different cultures. In identifying effective strategies for solving cross-cultural business conflicts, this research finds that solutions need to consider the national cultures (country-specific characteristics), organizational cultures and individual cultures of the persons engaged in the conflict, and how these are interlinked. The outcomes identify practical dispute resolution strategies for resolving cross-cultural business conflicts with reference to communication, empathy and training to improve cultural understanding and cultural competence, through the use of mediation. To conclude, the findings of this research will not only add value to academic knowledge of cross-cultural conflict management across transnational businesses but will also add value to numerous cross-border business relationships worldwide. Above all, it identifies the influence of culture, communication and cross-cultural competence in reducing cross-cultural business conflicts in transnational business.

Keywords: business conflict, conflict management, cross-cultural communication, dispute resolution

Procedia PDF Downloads 127
554 The Effects of Exercise Training on LDL Mediated Blood Flow in Coronary Artery Disease: A Systematic Review

Authors: Aziza Barnawi

Abstract:

Background: Regular exercise reduces risk factors associated with cardiovascular diseases. Over the past decade, exercise interventions have been introduced to reduce the risk of and prevent coronary artery disease (CAD). Elevated low-density lipoproteins (LDL) contribute to the formation of atherosclerosis, whose manifestations on the endothelium narrow the coronary artery and impair endothelial function. Therefore, the flow-mediated dilation (FMD) technique is used to assess this function. The results of previous studies have been inconsistent and difficult to interpret across different types of exercise programs. The relationship between exercise therapy and lipid levels has been extensively studied, and exercise is known to improve the lipid profile and endothelial function. However, the effectiveness of exercise in altering LDL levels and improving blood flow remains controversial. Objective: This review aims to explore the evidence and quantify the impact of exercise training on LDL levels and vascular function as measured by FMD. Methods: The electronic databases PubMed, Google Scholar, Web of Science, the Cochrane Library, and EBSCO were searched using the keywords: “low and/or moderate aerobic training”, “blood flow”, “atherosclerosis”, “LDL mediated blood flow”, “Cardiac Rehabilitation”, “low-density lipoproteins”, “flow-mediated dilation”, “endothelial function”, “brachial artery flow-mediated dilation”, “oxidized low-density lipoproteins” and “coronary artery disease”. Eligible studies were conducted for 6 weeks or more and reported effects on LDL levels and/or FMD. Studies with different training intensities and endurance training in healthy or CAD individuals were included. Results: Twenty-one randomized controlled trials (RCTs) (14 FMD and 7 LDL studies) with 776 participants (605 exercise participants and 171 control participants) met the eligibility criteria and were included in the systematic review. Endurance training resulted in a greater reduction in LDL levels and their subfractions and a better FMD response. Overall, the training groups showed improved physical fitness status compared with the control groups. Participants whose exercise duration was ≥150 minutes/week had significant improvements in FMD and LDL levels compared with those exercising <150 minutes/week. Conclusion: Although the relationship between physical training, LDL levels, and blood flow in CAD is complex and multifaceted, there are promising results for exercise in the primary and secondary prevention of CAD. Exercise training, including resistance, aerobic, and interval training, is positively correlated with improved FMD. However, the small body of evidence from the LDL studies (resistance and interval training) did not show a significant association with improved blood flow. Increasing evidence suggests that exercise training is a promising adjunctive therapy to improve cardiovascular health, potentially improving blood flow and contributing to the overall management of CAD.
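
For readers unfamiliar with the FMD measure referred to throughout the review, the following is a minimal illustrative calculation of brachial-artery FMD as a percentage change in diameter; the diameters shown are placeholder values, not data from any included trial.

    def fmd_percent(baseline_diameter_mm, peak_diameter_mm):
        """Flow-mediated dilation, expressed as % increase over baseline diameter."""
        return (peak_diameter_mm - baseline_diameter_mm) / baseline_diameter_mm * 100.0

    # Placeholder example: baseline 4.00 mm, post-occlusion peak 4.25 mm -> FMD = 6.25 %
    print(f"FMD = {fmd_percent(4.00, 4.25):.2f} %")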

Keywords: exercise training, low density lipoprotein, flow mediated dilation, coronary artery disease

Procedia PDF Downloads 56
553 Re-Framing Resilience Turn in Risk and Management with Anti-Positivistic Perspective of Holling's Early Work

Authors: Jose CanIzares

Abstract:

In recent decades, resilience has received much attention in relation to understanding and managing new forms of risk, especially in the context of urban adaptation to climate change. There are abundant concerns, however, about how best to interpret resilience and related ideas, and about whether they can guide ethically appropriate risk-related or adaptation efforts. Narrative creation and framing are critical steps in shaping public discussion and policy in large-scale interventions, since they favor or inhibit early decision and interpretation habits, which can be morally sensitive and then become persistent over time. This article adds to that framing process by contesting a conventional narrative on resilience and offering an alternative one. Conventionally, present ideas on resilience are traced to the work of ecologist C. S. Holling, especially to his article Resilience and Stability of Ecological Systems. This article is usually portrayed as a contribution of complex systems thinking to theoretical ecology, in which Holling appeals to resilience in order to challenge received views on ecosystem stability and the diversity-stability hypothesis. In this regard, resilience is construed as a “purely scientific”, precise and descriptive concept, denoting a complex property that allows ecosystems to persist, or to maintain functions, after disturbance. Yet these formal features of resilience supposedly changed with Holling’s later work in the 90s, where, it is argued, Holling began to use resilience as a more pragmatic “boundary term”, aimed at unifying transdisciplinary research about risks, ecological or otherwise, and at articulating public debate and governance strategies on the issue. In the conventional story, increased vagueness and degrees of normativity are the price to pay for this conceptual shift, which has made the term more widely usable, but also incompatible with scientific purposes and morally problematic (if not completely objectionable). This paper builds on a detailed analysis of Holling’s early work to propose an alternative narrative. The study will show that the “complexity turn” often entangled theoretical and pragmatic aims. Accordingly, Holling’s primary aim was to fight what he termed “pathologies of natural resource management” or “pathologies of command and control management”, and so the terms of his reform of ecosystem science are partly subordinate to the details of his proposal for reforming the management sciences. As regards resilience, Holling used it as a polysemous, ambiguous and normative term: sometimes as an instrumental value closely related to various stability concepts; other times, and more crucially, as an intrinsic value and a tool for attacking efficiency and instrumentalism in management. This narrative reveals the limitations of its conventional alternative and has several practical advantages. It captures well the structure and purposes of Holling’s project and the various roles of resilience in it. It helps to link Holling’s early work with other philosophical and ideological shifts at work in the 70s. It highlights the currency of Holling’s early work for present research and action in fields such as risk and climate adaptation. And it draws attention to morally relevant aspects of resilience that the conventional narrative neglects.

Keywords: resilience, complexity turn, risk management, positivistic, framing

Procedia PDF Downloads 143
552 A Study of Lapohan Traditional Pottery Making in Selakan Island, Semporna Sabah: An Initial Framework

Authors: Norhayati Ayob, Shamsu Mohamad

Abstract:

This paper aims to provide an initial background on the process of making traditional ceramic pottery, focusing on the materials and the influence of cultural heritage. Ceramic pottery is one of the hallmarks of Sabah’s heirlooms, not only used as cooking and storage containers but also closely linked with folk cultures and heritage. The Bajau Laut ethnic community of Semporna, better known as the Sea Gypsies, are mostly boat dwellers who work as fishermen along the coast. This ethnic community is famous for its own artistic traditional heirloom, especially the traditional hand-made clay stove called Lapohan. In the daily life of the Bajau Laut community, the Lapohan (clay stove) is used to prepare meals and as a food warmer while they are at sea. Besides, Lapohan pottery conveys the symbolic meaning of natural objects, portraying the identity and values of the Bajau Laut community. It is acknowledged that the basic process of making potterywares was much the same for people all across the world; nevertheless, it is crucial to consider that different ethnic groups may have their own styles and choices of raw materials. Furthermore, it is still unknown why and how the Bajau Laut of Semporna started making their own pottery and how the craft has survived until today by depending heavily on the raw materials available in Semporna. In addition, an emerging problem faced by pottery makers in Sabah is the absence of young successors to continue the heirloom legacy. Therefore, this research aims to explore traditional pottery making in Sabah by investigating the background history of Lapohan pottery and proposing a classification of Lapohan based on the designs and motifs of traditional pottery identified throughout the study. It is postulated that different techniques and forms of making traditional pottery may produce different types of pottery in terms of surface decoration, shape, and size that portray different cultures. This study will be conducted at Selakan Island, Semporna, the only location where Lapohan making still survives. The study also covers the chronological process of making pottery and the taboos surrounding preparing the clay, forming, decoration techniques, motif application and firing techniques. The relevant information for the study will be gathered from field study, including observation, in-depth interviews and video recording. In-depth interviews will be conducted with several potters, and the conversations and pottery-making process will be recorded in order to understand the actual process of making Lapohan. The findings are expected to identify several types of Lapohan based on different designs and cultures; for example, one with a flat-shaped design or a rounded shape on the top of the clay stove will be labeled with a suitable name based on the associated culture. In conclusion, it is hoped that this study will contribute to the conservation of traditional pottery making in Sabah as well as preserving the community’s culture and heirlooms for future generations.

Keywords: Bajau Laut, culture, Lapohan, traditional pottery

Procedia PDF Downloads 168