Search results for: analytic hierarchy processes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6311

71 A Microwave Heating Model for Endothermic Reaction in the Cement Industry

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Microwave technology has been gaining importance in contributing to decarbonization processes in high energy demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. This is important and required to evaluate the accuracy and adequacy of the physical process model. Another issue concerns impedance matching, an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a temperature that prompts a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between the heat transfer phenomena and the limestone endothermic reaction. The 2D model was used to study and evaluate the required numerical procedure and also serves as a benchmark test, allowing other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation of the coupled model was performed separately from that of the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation, as shown by comparing numerical results with experimental data.
A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error diverges for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different loads of material were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity for thermal runaways, and the thermal efficiency tended to stabilize for higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, indicating that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies of up to 75% were accomplished. The 80% fill ratio was shown to be the optimal load, with an associated global efficiency of 70%.
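A solution verification of this kind typically estimates the observed order of accuracy from solutions on three systematically refined meshes (Richardson extrapolation) and expresses the numerical uncertainty as a Grid Convergence Index. A minimal sketch, with hypothetical solution values rather than the paper's data:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three solutions on
    systematically refined meshes with constant refinement ratio r."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def gci_fine(f_medium, f_fine, r, p, Fs=1.25):
    """Grid Convergence Index (relative numerical uncertainty)
    on the fine mesh, with safety factor Fs."""
    e = abs((f_medium - f_fine) / f_fine)
    return Fs * e / (r**p - 1)

# Hypothetical solution values on coarse, medium, fine meshes (illustrative)
f1, f2, f3 = 1.0400, 1.0100, 1.0025
p = observed_order(f1, f2, f3, r=2.0)   # close to 2 for a second-order scheme
u = gci_fine(f2, f3, r=2.0, p=p)        # fractional uncertainty on fine mesh
```

With these illustrative values the observed order comes out at 2, consistent with the second-order behavior reported for linear elements.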

Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing

Procedia PDF Downloads 140
70 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the addition of metal salts, polymers and polyelectrolytes for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow.
The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section configurations and one multiple concentric annular cross section reactor configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of such assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distribution inside the reactor is in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors to be integrated into new or existing water treatment plants.
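The wall-friction component of head loss in a concentric annulus is commonly estimated with the Darcy-Weisbach equation and the hydraulic-diameter approximation, which is one of the baseline models such measurements are compared against. A minimal sketch; the geometry, flow rate, and roughness below are illustrative assumptions, not the reactor's actual parameters:

```python
import math

def annulus_head_loss(Q, D_outer, D_inner, L, nu=1.0e-6, eps=1.5e-6, g=9.81):
    """Darcy-Weisbach head loss (m) for flow in a concentric annulus,
    using the hydraulic-diameter approximation D_h = D_o - D_i."""
    A = math.pi / 4.0 * (D_outer**2 - D_inner**2)   # annular flow area, m^2
    Dh = D_outer - D_inner                           # hydraulic diameter, m
    v = Q / A                                        # mean velocity, m/s
    Re = v * Dh / nu                                 # Reynolds number
    if Re < 2300:
        f = 64.0 / Re                                # laminar friction factor
    else:
        # Swamee-Jain explicit approximation to Colebrook-White
        f = 0.25 / math.log10(eps / (3.7 * Dh) + 5.74 / Re**0.9) ** 2
    return f * (L / Dh) * v**2 / (2 * g)

# Illustrative case: 0.2 L/s through a 50/30 mm annulus, 1.5 m long
h = annulus_head_loss(Q=2.0e-4, D_outer=0.05, D_inner=0.03, L=1.5)
```

As the abstract notes, this one-dimensional treatment assumes uniform velocity across the annulus, which the CFD results show is not strictly valid.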

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 218
69 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports

Authors: Carolyn Hatch, Laurent Simon

Abstract:

Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs – such as airports - are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. 
The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform ecosystem model of innovation. The recent entrance of new actors in airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to bring traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and driven their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts. This research has implications for: 1 - innovation theory, 2 - urban and transport policy, and 3 - organizational practice - within the mobility industry and across the economy.

Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs

Procedia PDF Downloads 181
68 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions

Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos

Abstract:

The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: breaking down the chemical structure of biomass at high temperature in the absence of air. The amount of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depends on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as the moisture content. This activity is recognized for its inefficiency and high pollution levels, but it is poorly characterized. It is widely distributed and is a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: an essentially rural area with a predominant farming, agricultural, and forestry profile, and with a significant charcoal production activity. In this district, a recent inventory identifies almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas samplings were performed in parallel, two in the morning and two in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) in the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes.
A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m in height, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) following prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume ranged between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. Total carbon, inorganic ions, and sugars account, on average, for 65% and 56%, 2.8% and 2.3%, and 1.27% and 1.21% of PM₁₀ and PM₂.₅, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg (wood, dry basis) and 27 g/kg (wood, dry basis) for PM₁₀ and PM₂.₅, respectively. The data obtained in this study help fill the lack of information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – Portuguese Science Foundation, I.P. and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
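An emission factor of this kind relates the pollutant mass emitted over a production cycle to the dry wood charged into the kiln. A minimal sketch; the plume concentration, flue gas volume, and wood charge below are illustrative values chosen only to reproduce the reported order of magnitude, not the study's raw data:

```python
def emission_factor(pm_conc_g_m3, flue_volume_m3, wood_dry_kg):
    """Emission factor (g pollutant per kg dry wood) from a mean plume
    concentration, total flue gas volume over the cycle, and dry wood mass."""
    return pm_conc_g_m3 * flue_volume_m3 / wood_dry_kg

# Illustrative cycle values (hypothetical):
# mean PM10 of 0.15 g/m3, 2.2e5 m3 of flue gas, 1000 kg of dry wood
ef_pm10 = emission_factor(pm_conc_g_m3=0.15, flue_volume_m3=2.2e5, wood_dry_kg=1000)
```

With these assumed inputs the factor lands at the tens-of-grams-per-kilogram scale reported for PM₁₀ in the traditional kiln.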

Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon

Procedia PDF Downloads 142
67 Thermally Conductive Polymer Nanocomposites Based on Graphene-Related Materials

Authors: Alberto Fina, Samuele Colonna, Maria del Mar Bernal, Orietta Monticelli, Mauro Tortello, Renato Gonnelli, Julio Gomez, Chiara Novara, Guido Saracco

Abstract:

Thermally conductive polymer nanocomposites are of high interest for several applications, including low-temperature heat recovery, heat exchangers in corrosive environments, and heat management in electronics and flexible electronics. In this paper, the preparation of thermally conductive nanocomposites exploiting graphene-related materials is addressed, along with their thermal characterization. In particular, correlations of (1) the chemical and physical features of the nanoflakes and (2) the processing conditions with the heat conduction properties of the nanocomposites are studied. Polymers are heat insulators; therefore, the inclusion of conductive particles is the typical solution to obtain a sufficient thermal conductivity. In addition to traditional microparticles such as graphite and ceramics, several nanoparticles have been proposed for use in polymer nanocomposites, including carbon nanotubes and graphene. Indeed, thermal conductivities for both carbon nanotubes and graphene have been reported in the wide range of about 1500 to 6000 W/mK, although this property may decrease dramatically as a function of size, number of layers, density of topological defects, and re-hybridization defects, as well as the presence of impurities. Different synthetic techniques have been developed, including mechanical cleavage of graphite, epitaxial growth on SiC, chemical vapor deposition, and liquid phase exfoliation. However, the industrial scale-up of graphene, defined as an individual, single-atom-thick sheet of hexagonally arranged sp2-bonded carbons, still remains very challenging. For large-scale bulk applications in polymer nanocomposites, graphene-related materials such as multilayer graphene (MLG), reduced graphene oxide (rGO) and graphite nanoplatelets (GNP) are currently the most interesting graphene-based materials.
In this paper, different types of graphene-related materials were characterized for their chemical/physical properties as well as for the thermal properties of individual flakes. Two selected rGOs were annealed at 1700°C in vacuum for 1 h to reduce the defectiveness of the carbon structure. The thermal conductivity increase of individual flakes with annealing was assessed via scanning thermal microscopy. Graphene nanopapers were prepared from both conventional rGO and annealed rGO flakes. Characterization of the nanopapers evidenced a five-fold increase in the in-plane thermal diffusivity for annealed nanoflakes compared to pristine ones, demonstrating the importance of reducing structural defectiveness to maximize heat dissipation performance. Both pristine and annealed rGO were used to prepare polymer nanocomposites by melt reactive extrusion. A two- to three-fold increase in the thermal conductivity of the nanocomposite was observed for high-temperature-treated rGO compared to untreated rGO, evidencing the importance of using low-defectivity nanoflakes. Furthermore, the study of different processing parameters (time, temperature, shear rate) during the preparation of poly(butylene terephthalate) nanocomposites evidenced a clear correlation with the dispersion and fragmentation of the GNP nanoflakes, which in turn affected the thermal conductivity performance. A thermal conductivity of about 1.7 W/mK, i.e., one order of magnitude higher than for the pristine polymer, was obtained with 10 wt.% of annealed GNPs, which is in line with state-of-the-art nanocomposites prepared by more complex and less upscalable in situ polymerization processes.
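One way to put such measurements in context is the classical Maxwell effective-medium model, which gives a dilute-limit baseline for a composite with spherical inclusions; high-aspect platelets such as GNPs typically exceed it. A sketch with assumed property values (matrix and flake conductivities and the volume fraction are illustrative, not the paper's measurements):

```python
def maxwell_keff(k_m, k_f, phi):
    """Maxwell effective-medium thermal conductivity (W/mK) for a dilute
    suspension of spherical particles of volume fraction phi in a matrix."""
    num = k_f + 2 * k_m + 2 * phi * (k_f - k_m)
    den = k_f + 2 * k_m - phi * (k_f - k_m)
    return k_m * num / den

# Assumed values: k_m ~ 0.2 W/mK for the polymer, k_f ~ 1500 W/mK for
# the flakes, phi ~ 0.05 (roughly 10 wt.% at typical densities)
k_eff = maxwell_keff(k_m=0.2, k_f=1500.0, phi=0.05)
# The spherical-inclusion model predicts only a modest increase over the
# matrix, far below the ~1.7 W/mK reported: flake geometry, dispersion
# and contact between platelets dominate the real enhancement.
```

The gap between this baseline and the measured value is one way to see why flake quality and processing conditions matter so much.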

Keywords: graphene, graphene-related materials, scanning thermal microscopy, thermally conductive polymer nanocomposites

Procedia PDF Downloads 268
66 Adapting to College: Exploration of Psychological Well-Being, Coping, and Identity as Markers of Readiness

Authors: Marit D. Murry, Amy K. Marks

Abstract:

The transition to college is a critical period that affords abundant opportunities for growth in conjunction with novel challenges for emerging adults. During this time, emerging adults are garnering experiences and acquiring hosts of new information that they are required to synthesize and use to inform life-shaping decisions. This stage is characterized by instability and exploration, which necessitates a diverse set of coping skills to successfully navigate and positively adapt to their evolving environment. However, important sociocultural factors result in differences that occur developmentally for minority emerging adults (i.e., emerging adults with an identity that has been or is marginalized). While the transition to college holds vast potential, not all are afforded the same chances, and many individuals enter into this stage at varying degrees of readiness. Understanding the nuance and diversity of student preparedness for college and contextualizing these factors will better equip systems to support incoming students. Emerging adulthood for ethnic, racial minority students presents itself as an opportunity for growth and resiliency in the face of systemic adversity. Ethnic, racial identity (ERI) is defined as an identity that develops as a function of one’s ethnic-racial group membership. Research continues to demonstrate ERI as a resilience factor that promotes positive adjustment in young adulthood. Adaptive coping responses (e.g., engaging in help-seeking behavior, drawing on personal and community resources) have been identified as possible mechanisms through which ERI buffers youth against stressful life events, including discrimination. Additionally, trait mindfulness has been identified as a significant predictor of general psychological health, and mindfulness practice has been shown to be a self-regulatory strategy that promotes healthy stress responses and adaptive coping strategy selection. 
The current study employed a person-centered approach to explore emerging patterns across ethnic identity development and psychological well-being criterion variables among college freshmen. Data from 283 incoming college freshmen at Northeastern University were analyzed. The Brief COPE Acceptance and Emotional Support scales, the Five Facet Mindfulness Questionnaire, and the MEIM Exploration and Affirmation measures were used to inform the cluster profiles. The TwoStep auto-clustering algorithm revealed an optimal three-cluster solution (BIC = 848.49), which classified 92.6% (n = 262) of participants in the sample into one of the three clusters. The clusters were characterized as ‘Mixed Adjustment’, ‘Lowest Adjustment’, and ‘Moderate Adjustment.’ Cluster composition varied significantly by ethnicity, X² (2, N = 262) = 7.74 (p = .021), and gender, X² (2, N = 259) = 10.40 (p = .034). The ‘Lowest Adjustment’ cluster contained the highest proportion of students of color, 41% (n = 32), and of male-identifying students, 44.2% (n = 34). Follow-up analyses showed that ‘Moderate Adjustment’ cluster members, despite higher ERI exploration, also reported higher levels of psychological distress, with significantly elevated depression scores (p = .011) and psychological diagnoses of depression (p = .013), anxiety (p = .005) and psychiatric disorders (p = .025). Supporting prior research, students engaging in identity exploration processes often endure more psychological distress. These results indicate that students undergoing identity development may require more socialization support and services beyond standard strategies.
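The reported composition differences are Pearson chi-square tests on a clusters-by-group contingency table. A minimal stdlib sketch; the counts below are hypothetical (chosen only to sum to the reported N = 262), not the study's raw data:

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom
    for an r x c contingency table of observed counts."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    n = sum(row_sums)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_sums[i] * col_sums[j] / n  # expected under independence
            chi2 += (obs - exp) ** 2 / exp
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Hypothetical 3-cluster x 2-group table (N = 262)
table = [
    [55, 35],  # Mixed Adjustment
    [46, 32],  # Lowest Adjustment
    [70, 24],  # Moderate Adjustment
]
chi2, dof = chi_square(table)  # dof = 2, matching the reported test
```

The statistic is then compared against the chi-square distribution with two degrees of freedom to obtain the p-value.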

Keywords: adjustment, coping, college, emerging adulthood, ethnic-racial identity, psychological well-being, resilience

Procedia PDF Downloads 110
65 The Shrinking of the Pink Wave and the Rise of the Right-Wing in Latin America

Authors: B. M. Moda, L. F. Secco

Abstract:

Through free and fair elections and other less democratic processes, Latin America has been gradually turning into a right-wing political region. In order to understand these recent changes, this paper aims to discuss the origin and the traits of the pink wave in the subcontinent, the reasons for its current rollback, and future projections for the left wing in the region. The methodology used in this paper is descriptive and analytical, combined with secondary sources mainly from the social and political sciences. The canons of the Washington Consensus were implemented by the majority of the Latin American governments in the 80s and 90s under social democratic and right-wing parties. The neoliberal agenda caused political, social and economic dissatisfaction, bursting into a new political configuration for the region. It started in 1998, when Hugo Chávez took office in Venezuela through the Fifth Republic Movement under the socialist flag. From there on, Latin America was swept by the so-called ‘pink wave’, a term adopted to define the rise of self-designated left-wing or center-left parties with a progressive agenda. After Venezuela, countries like Chile, Brazil, Argentina, Uruguay, Bolivia, Ecuador, Nicaragua, Paraguay, El Salvador and Peru joined the pink wave. The success of these governments was due to a post-neoliberal agenda focused on cash transfer programs, increased public spending, and the strengthening of the national market. The reversal of the preference for the left wing started in 2012 with the coup against Fernando Lugo in Paraguay. In 2015, chavismo in Venezuela lost the majority of the legislative seats. In 2016, an impeachment removed the Brazilian president Dilma Rousseff from office; she was replaced by the center-right vice-president Michel Temer. In the same year, Mauricio Macri, representing the right-wing party Propuesta Republicana, was elected in Argentina.
In 2016, the center-right liberal Pedro Pablo Kuczynski was elected in Peru. In 2017, Sebastián Piñera was elected in Chile through the center-right party Renovación Nacional. The current rollback of the pink wave points towards findings that can be arranged in two fields. Economically, the 2008 financial crisis affected the majority of the Latin American countries, and the left-wing economic policies, along with the end of the raw materials boom and the subsequent shrinking of economic performance, opened a flank for popular dissatisfaction. In Venezuela, the 2014 oil crisis reduced State revenues by more than 50%, shrinking social spending, creating an inflationary spiral, and consequently causing a loss of popular support. Politically, the death of Hugo Chávez in 2013 weakened the ideal of the ‘socialism of the twenty-first century’, and was followed by the death of Fidel Castro, the last bastion of communism in the subcontinent. In addition, several cases of corruption revealed during the pink wave governments made traditional politics unpopular. These issues challenge the left wing to develop a future agenda based on innovation of its economic program, to improve its legal and political compliance practices, and to regroup its electoral forces amid the social movements that supported its ascension back in the early 2000s.

Keywords: Latin America, political parties, left-wing, right-wing, pink wave

Procedia PDF Downloads 240
64 Biostabilisation of Sediments for the Protection of Marine Infrastructure from Scour

Authors: Rob Schindler

Abstract:

Industry-standard methods of mitigating erosion of seabed sediments rely on ‘hard engineering’ approaches which have numerous environmental shortcomings: (1) direct loss of habitat by smothering of benthic species, (2) disruption of sediment transport processes, damaging geomorphic and ecosystem functionality, (3) generation of secondary erosion problems, (4) introduction of material that may propagate non-local species, and (5) provision of pathways for the spread of invasive species. Recent studies have also revealed the importance of biological cohesion, the result of naturally occurring extracellular polymeric substances (EPS), in stabilizing natural sediments. Mimicking these strong bonding kinetics through the deliberate addition of EPS to sediments – henceforth termed ‘biostabilisation’ – offers a means to mitigate erosion induced by structures or episodic increases in hydrodynamic forcing (e.g. storms and floods) whilst avoiding, or reducing, hard engineering. Here we present unique experiments that systematically examine how biostabilisation reduces scour around a monopile in a current, a first step to realizing the potential of this new method of scour reduction for a wide range of engineering purposes in aquatic substrates. Experiments were performed in Plymouth University’s recirculating sediment flume, which includes a recessed scour pit. The model monopile was 0.048 m in diameter, D. Assuming a prototype monopile diameter of 2.0 m yields a geometric ratio of 41.67. When applied to a 10 m prototype water depth, this yields a model depth, d, of 0.24 m. The sediment pit containing the monopile was filled with different biostabilised substrata prepared using a mixture of fine sand (D50 = 230 μm) and EPS (xanthan gum). Nine sand-EPS mixtures were examined, spanning EPS contents of 0.0% < b0 < 0.50%. Scour development was measured using a laser point gauge along a 530 mm centreline at 10 mm increments at regular periods over 5 h.
Maximum scour depth and excavated area were determined at different time steps and plotted against time to yield equilibrium values. After 5 hours the current was stopped and a detailed scan of the final scour morphology was taken. Results show that increasing EPS content causes a progressive reduction in the equilibrium depth and lateral extent of scour, and hence of the excavated material. Very small amounts, equivalent to those produced by natural communities (< 0.1% by mass), reduce the rate, depth, and extent of scour around monopiles. Furthermore, the strong linear relationships between EPS content, equilibrium scour depth, excavation area and the timescales of scouring offer a simple index on which to base modifications to existing scour prediction methods. We conclude that the biostabilisation of sediments with EPS may offer a simple, cost-effective and ecologically sensitive means of reducing scour in a range of contexts including OWFs, bridge piers, pipeline installation, and void filling in rock armour. Biostabilisation may also reduce economic costs through (1) use of existing site sediments or waste dredged sediments, (2) reduced fabrication of materials, (3) lower transport costs, and (4) less dependence on specialist vessels and precise sub-sea assembly. Further, its potential environmental credentials may allow sensitive use of the seabed in marine protection zones across the globe.
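The linear relationship between EPS content and equilibrium scour depth can be quantified with an ordinary least-squares fit, which is also the kind of index a modified scour predictor would use. A minimal stdlib sketch with hypothetical depths illustrating the reported trend (not the measured data):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical equilibrium scour depths (mm) versus EPS content (% by mass),
# illustrative of the reported decreasing linear trend
eps_content = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
scour_depth = [48.0, 40.0, 33.0, 25.0, 17.0, 9.0]
a, b = linear_fit(eps_content, scour_depth)  # b < 0: scour falls with EPS
```

The fitted slope then serves as the simple index on which an existing scour prediction method could be corrected for biological cohesion.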

Keywords: biostabilisation, EPS, marine, scour

Procedia PDF Downloads 166
63 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan

Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen

Abstract:

In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software “Well Inventory Data Entry” (WIDE). This comprehensive and integrated approach enabled the centralization of all planned asset components for better well planning, performance enhancement, and continuous improvement through performance tracking and midterm forecasting. This was previously hard to achieve, as various legacy methods were used. This paper briefly describes the methods successfully adopted to meet the company’s objective. IAPs were initially designed using computerized spreadsheets. However, as the data captured became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use in other areas of the company (namely, the Workover Optimization project), was adapted to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and draw inferences from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE’s goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows that create a structured process which allows risk factors to be flagged and helps mitigate them. The tool assigns responsibilities to all stakeholders in a manner that enables continuous updates of daily performance measures and operational use.
The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality amongst all asset groups and technical support groups for updating the contents of their respective planning parameters. The home-grown entity was implemented across the entire company and tailored to feed into the internal processes of several stakeholders. Furthermore, the implementation of change management and root cause analysis techniques captured the dysfunctionality of previous plans, which in turn resulted in the improvement of the existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged any upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool’s capabilities beyond planning and into operations (such as Asset Production Forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various assets and technical support groups. These techniques enhance the integration of planning data workflows and ultimately lay the foundation for greater accuracy and reliability. As such, benchmarks based on a set of standard goals are established to ensure the constant improvement of the efficiency of the entire planning and operational structure.

Keywords: automation, integration, value, communication

Procedia PDF Downloads 146
62 Action Research-Informed Multiliteracies-Enhanced Pedagogy in an Online English for Academic Purposes Course

Authors: Heejin Song

Abstract:

Employing a critical action research approach that rejects essentialist onto-epistemological orientations to research in English language teaching (ELT) and interrogates hegemonic relations in the processes of knowledge construction and reconstruction, this study illuminates how an action research-informed pedagogical practice can transform English for academic purposes (EAP) teaching to be more culturally and linguistically inclusive and more critically oriented towards English language learners’ advancement in academic literacies skills. More specifically, this paper showcases the action research-informed pedagogical innovations that emphasize multilingual learners’ multiliteracies engagement and experiential education-oriented learning to facilitate the development of learners’ academic literacies, intercultural communicative competence, and inclusive global citizenship in the context of Canadian university EAP classrooms. The pedagogical innovations through action research were undertaken in response to growing discussions surrounding the pedagogical possibilities of plurilingualism in ELT and synchronous online teaching. The paper is based on two iterations of action research over the pandemic years between 2020 and 2022. The data include student work samples, focus group interviews, anonymous surveys, teacher feedback and comments on student work, and teaching reflections. The first iteration of the action research focused on the affordances of multimodal expressions in individual learners’ academic endeavors for their literacy skills development through individual online activities such as ‘my language autobiography,’ ‘multimodal expression corner,’ and public speeches. While these activities help English language learners enhance their knowledge and skills of English-spoken discourses, they did not require learners’ team-based collaborative endeavors to complete the assigned tasks.
Identifying this area for improvement in the instructional design, the second action research cycle emphasized collaborative performativity through newly added performance- and action-based innovative learning tasks, including ‘situational role-playing,’ ‘my cooking show & interview,’ and group debates. These tasks provided learners with increased opportunities to communicate with peers who joined the class virtually from different parts of the world, and to enhance their intercultural competence through the strategic and pragmatic communicative skills needed to collaboratively achieve shared goals (i.e., successful completion of the given group tasks). The paper exemplifies instances wherein learners’ unique and diverse linguistic and cultural strengths were amplified and critical literacies were further developed through learners’ performance-oriented multiliteracies engagement. The study suggests that action research-informed teaching practice that advocates for collaborative multiliteracies engagement serves to facilitate learners’ activation of their existing linguistic and cultural knowledge and contributes to the development of learners’ academic literacy skills. Importantly, the study illuminates that such action research-informed pedagogical initiatives create an inclusive space for learners to build a strong sense of connectedness as global citizens with increased intercultural awareness in their community of language and cultural practices, and further allow learners to actively participate in the construction of ‘collaborative relations of power’ with their peers.

Keywords: action research, EAP, higher education, multiliteracies

Procedia PDF Downloads 79
61 Biophilic Design Strategies: Four Case-Studies from Northern Europe

Authors: Carmen García Sánchez

Abstract:

The UN's 17 Sustainable Development Goals (specifically nº 3 and nº 11) urgently call for new architectural design solutions at different design scales to increase human contact with nature and thereby promote the health and wellbeing of primarily urban communities. The discipline of Interior Design offers an important alternative to large-scale nature-inclusive actions, which are not always possible due to space limitations. These circumstances provide an immense opportunity to integrate biophilic design, a complex, emerging, and still under-developed approach that pursues sustainable design strategies for increasing the human-nature connection through the experience of the built environment. Biophilic design explores the diverse ways humans are inherently inclined to affiliate with nature, attach meaning to it, and derive benefit from the natural world. It represents a biological understanding of architecture whose categorization is still in progress. The internationally renowned Danish domestic architecture built in the 1950s and early 1960s (a golden age of Danish modern architecture) left a leading legacy that has greatly influenced the domestic sphere and has further led the world in terms of good design and welfare. This study examines how four existing post-war domestic buildings establish a dialogue with nature and her variations over time. The case studies unveil both memorable and unique biophilic resources through sophisticated and original design expressions, where transformative processes connect the users to the natural setting and reflect fundamental ways in which they attach meaning to the place. In addition, fascinating analogies, in terms of this interaction with nature, with traditional Japanese architecture inform the research. They embody prevailing lessons for our time today.
The research methodology is based on a thorough literature review combined with a phenomenological analysis of how these case studies contribute to the connection between humans and nature, after conducting fieldwork throughout varying seasons to document, as a core research strategy, the multi-sensory perception (via sight, touch, sound, smell, time and movement) of nature's transformations. The cases' most outstanding features have been studied according to the following key parameters: 1. Space: 1.1. Relationships (itineraries); 1.2. Measures/scale; 2. Context: Landscape reading in different weather/seasonal conditions; 3. Tectonics: 3.1. Constructive joints, elements assembly; 3.2. Structural order; 4. Materiality: 4.1. Finishes; 4.2. Colors; 4.3. Tactile qualities; 5. Daylight interplay. Departing from an artistic-scientific exploration, this groundbreaking study provides sustainable practical design strategies, perspectives, and inspiration to boost humans' contact with nature through the experience of the interior built environment. Some strategies are associated with access to outdoor space or require ample space, while others can thrive in a dense urban context without direct access to the natural environment. The objective is not only to produce knowledge, but to phase biophilic design into the built environment, expanding its theory and practice into a new dimension. Its long-term vision is to efficiently enhance the health and well-being of urban communities through daily interaction with Nature.

Keywords: sustainability, biophilic design, architectural design, interior design, nature, Danish architecture, Japanese architecture

Procedia PDF Downloads 100
60 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis

Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski

Abstract:

The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolutions, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be carried out effectively, as the resolution level closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application.
The cloud service-based approach, applied in the GeoCubes Finland repository, enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows vector-formatted administrative areas and user-defined polygons to be used as definitions of subareas for data retrieval. Administrative areas of the country on four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports a so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data is downloaded on demand, for the spatial area and resolution level required in each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository enables expert users to introduce new kinds of interactive online analysis operations.
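As an illustration of the scale-matching behaviour described above, the sketch below picks the stored resolution level closest to the ground size of one screen pixel. The resolution ladder, function name, and example figures are illustrative assumptions, not the actual GeoCubes API:

```python
# Illustrative sketch (assumed names and numbers, not the GeoCubes API):
# choose the pre-computed resolution level closest to the visual scale
# of the current map view, so access time stays near-constant.

def select_level(levels_m, view_width_m, screen_width_px):
    """Pick the stored cell size (metres) closest to the ground size
    of one screen pixel in the current map view."""
    pixel_size_m = view_width_m / screen_width_px  # metres per pixel
    return min(levels_m, key=lambda cell: abs(cell - pixel_size_m))

# A hypothetical ladder of pre-generalized resolutions (metres per cell).
LEVELS = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]

# A map window 20 km wide on a 1000 px screen -> 20 m/px -> the 20 m level.
print(select_level(LEVELS, 20_000, 1000))  # prints 20
```

The same lookup can serve statistical analysis by passing the characteristic scale of the studied phenomenon instead of the pixel size.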

Keywords: cloud service, geodata cube, multiresolution, raster geodata

Procedia PDF Downloads 136
59 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of cases with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the model. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful, technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
They possess better space-filling performance than uniform random number generators and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles was recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the Line by Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem environment can be fully represented by the ANN model. Better results can be achieved in this still largely unexplored domain.
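To make the low-discrepancy idea concrete, the minimal sketch below estimates a simple one-dimensional integral with a base-2 van der Corput sequence, the one-dimensional building block of Halton points. The toy integrand is an illustrative assumption standing in for the spectral sampling in the study:

```python
# Sketch of quasi-random sampling: estimate the integral of f(x) = x^2
# on [0, 1] (exact value 1/3) with a deterministic van der Corput
# sequence. The error decays roughly like log(N)/N, faster than the
# ~1/sqrt(N) rate of plain Monte Carlo with uniform random numbers.

def van_der_corput(n, base=2):
    """n-th radical-inverse value in the given base (deterministic)."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def qmc_estimate(f, n_samples):
    """Average f over the first n_samples low-discrepancy points."""
    return sum(f(van_der_corput(i)) for i in range(1, n_samples + 1)) / n_samples

est = qmc_estimate(lambda x: x * x, 1024)
print(abs(est - 1 / 3))  # error well below the plain Monte Carlo rate
```

Halton sequences extend this construction to multiple dimensions by using a different prime base per dimension.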

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 223
58 Relevance of Dosing Time for Everolimus Toxicity on Thyroid Gland and Hormones in Mice

Authors: Dilek Ozturk, Narin Ozturk, Zeliha Pala Kara, Engin Kaptan, Serap Sancar Bas, Nurten Ozsoy, Alper Okyar

Abstract:

Most physiological processes in mammals oscillate in a rhythmic manner, including metabolism and energy homeostasis, locomotor activity, hormone secretion, and immune and endocrine system functions. Endocrine body rhythms are tightly regulated by the circadian timing system. The hypothalamic-pituitary-thyroid (HPT) axis is under circadian control at multiple levels, from the hypothalamus to the thyroid gland. Since the circadian timing system controls a variety of biological functions in mammals, circadian rhythms of biological functions may modify drug tolerability/toxicity depending on the dosing time. The selective mTOR (mammalian target of rapamycin) inhibitor everolimus is an immunosuppressant and anticancer agent that is active against many cancers. It was also found to be active in medullary thyroid cancer. The aim of this study was to investigate the dosing time-dependent toxicity of everolimus on the thyroid gland and hormones in mice. Healthy C57BL/6J mice were synchronized with a 12h:12h Light-Dark cycle (LD12:12, with Zeitgeber Time 0 – ZT0 – corresponding to light onset). Everolimus was administered to male (5 mg/kg/day) and female mice (15 mg/kg/day) orally at ZT1 (rest period) and ZT13 (activity period) for 4 weeks; body weight loss, clinical signs, and possible changes in serum thyroid hormone levels (TSH and free T4) were examined. Histological alterations in the thyroid gland were evaluated according to the following criteria: follicular size, colloid density and viscidity, height of the follicular epithelium, and the presence of necrotic cells. The statistical significance of differences was analyzed with ANOVA. Study findings included everolimus-related diarrhea, decreased activity, decreased body weight gains, alterations in serum TSH levels, and histopathological changes in the thyroid gland. Decreases in mean body weight gains were more evident in mice treated at ZT1 than at ZT13 (p < 0.001 for both sexes).
Control tissue sections of thyroid glands exhibited well-organized histoarchitecture when compared to everolimus-treated groups. Everolimus caused histopathological alterations in the thyroid glands of male (5 mg/kg, slightly) and female mice (15 mg/kg; p < 0.01 for both ZTs as compared to their controls) irrespective of dosing time. TSH levels were slightly decreased upon everolimus treatment at ZT13 in both males and females. Conversely, increases in TSH levels were observed when everolimus was administered at ZT1 in both males (5 mg/kg; p < 0.05) and females (15 mg/kg; slightly). No statistically significant alterations in serum free T4 levels were observed. TSH and free T4 are clinically important thyroid hormones, since a number of disease states have been linked to alterations in these hormones. Serum free T4 levels within the normal range in the presence of abnormal serum TSH levels in everolimus-treated mice may suggest subclinical thyroid disease, which may have repercussions on the cardiovascular system as well as on other organs and systems. Our study has revealed the histological damage to the thyroid gland induced by subacute everolimus administration; this effect was irrespective of dosing time. However, based on the body weight changes and clinical signs upon everolimus treatment, tolerability for the drug was best following dosing at ZT13 in both males and females. Yet, the effects of everolimus on thyroid functions may deserve further study regarding their clinical importance and chronotoxicity.

Keywords: circadian rhythm, chronotoxicity, everolimus, thyroid gland, thyroid hormones

Procedia PDF Downloads 350
57 Enabling Wire Arc Additive Manufacturing in Aircraft Landing Gear Production and Its Benefits

Authors: Jun Wang, Chenglei Diao, Emanuele Pagone, Jialuo Ding, Stewart Williams

Abstract:

As a crucial component in aircraft, landing gear systems are responsible for supporting the plane during parking, taxiing, takeoff, and landing. Given the need for high load-bearing capacity over extended periods, 300M ultra-high strength steel (UHSS) is often the material of choice for crafting these systems due to its exceptional strength, toughness, and fatigue resistance. In the quest for cost-effective and sustainable manufacturing solutions, Wire Arc Additive Manufacturing (WAAM) emerges as a promising alternative for fabricating 300M UHSS landing gears. This is due to its advantages in near-net-shape forming of large components, cost-efficiency, and reduced lead times. Cranfield University has conducted an extensive preliminary study on WAAM 300M UHSS, covering feature deposition, interface analysis, and post-heat treatment. Both Gas Metal Arc (GMA) and Plasma Transferred Arc (PTA)-based WAAM methods were explored, revealing their feasibility for defect-free manufacturing. However, as-deposited 300M features showed lower strength but higher ductility compared to their forged counterparts. Subsequent post-heat treatments were effective in normalising the microstructure and mechanical properties, meeting qualification standards. A 300M UHSS landing gear demonstrator was successfully created using PTA-based WAAM, showcasing the method's precision and cost-effectiveness. The demonstrator, measuring Ø200 mm × 700 mm, was completed in 16 hours, using 7 kg of material at a deposition rate of 1.3 kg/hr. This resulted in a significant reduction in the Buy-to-Fly (BTF) ratio compared to traditional manufacturing methods, further validating WAAM's potential for this application. A "cradle-to-gate" environmental impact assessment, which considers the cumulative effects from raw material extraction to customer shipment, has revealed promising outcomes.
Utilising WAAM for landing gear components significantly reduces the need for raw material extraction and refinement compared to traditional subtractive methods. This, in turn, lessens the burden on subsequent manufacturing processes, including heat treatment, machining, and transportation. Our estimates indicate that the carbon footprint of the component could be halved when switching from traditional machining to WAAM. Similar reductions are observed in embodied energy consumption and other environmental impact indicators, such as emissions to air, water, and land. Additionally, WAAM offers the unique advantage of part repair by redepositing only the necessary material, a capability not available through conventional methods. Our research shows that WAAM-based repairs can drastically reduce environmental impact, even when accounting for the additional transportation required for repairs. Consequently, WAAM emerges as a pivotal technology for reducing the environmental impact of manufacturing, aiding the industry in its crucial and ambitious journey towards Net Zero. This study paves the way for transformative benefits across the aerospace industry as WAAM is integrated into hybrid manufacturing solutions that offer substantial savings and access to more sustainable technologies for critical component production.

Keywords: WAAM, aircraft landing gear, microstructure, mechanical performance, life cycle assessment

Procedia PDF Downloads 159
56 Electronic Raman Scattering Calibration for Quantitative Surface-Enhanced Raman Spectroscopy and Improved Biostatistical Analysis

Authors: Wonil Nam, Xiang Ren, Inyoung Kim, Masoud Agah, Wei Zhou

Abstract:

Despite its ultrasensitive detection capability, surface-enhanced Raman spectroscopy (SERS) faces challenges as a quantitative biochemical analysis tool due to the significant dependence of the local field intensity in hotspots on nanoscale geometric variations of plasmonic nanostructures. Therefore, despite enormous progress in the plasmonic nanoengineering of high-performance SERS devices, it is still challenging to quantitatively correlate the measured SERS signals with the actual molecule concentrations at hotspots. A significant effort has been devoted to developing SERS calibration methods by introducing internal standards, achieved by placing Raman tags at plasmonic hotspots. Raman tags undergo similar SERS enhancement at the same hotspots, and ratiometric SERS signals for analytes of interest can be generated with reduced dependence on geometrical variations. However, using Raman tags still faces challenges for real-world applications, including spatial competition between the analyte and the tags in hotspots, spectral interference, and laser-induced degradation/desorption due to plasmon-enhanced photochemical/photothermal effects. We show that electronic Raman scattering (ERS) signals from metallic nanostructures at hotspots can serve as the internal calibration standard to enable quantitative SERS analysis and improve biostatistical analysis. We perform SERS with Au-SiO₂ multilayered metal-insulator-metal nano-laminated plasmonic nanostructures. Since the ERS signal is proportional to the volume density of electron-hole occupation in hotspots, ERS signals increase exponentially as the wavenumber approaches zero. By using a long-pass filter, generally present in backscattered SERS configurations, to cut off the ERS background continuum, we can observe an ERS pseudo-peak, IERS. Both ERS and SERS processes experience the |E|⁴ local enhancements during the excitation and inelastic scattering transitions.
We calibrated the ISERS of 10 μM Rhodamine 6G in solution by IERS. The results show that ERS calibration generates a new analytical value, ISERS/IERS, insensitive to variations between different hotspots, which can thus quantitatively reflect the molecular concentration information. Given the calibration capability of ERS signals, we performed label-free SERS analysis of living biological systems using four different breast normal and cancer cell lines cultured on nano-laminated SERS devices. 2D Raman mapping over 100 μm × 100 μm, containing several cells, was conducted. The SERS spectra were subsequently analyzed by multivariate analysis using partial least squares discriminant analysis. Remarkably, after ERS calibration, MCF-10A and MCF-7 cells are further separated, while the two triple-negative breast cancer cell lines (MDA-MB-231 and HCC-1806) overlap more, in good agreement with the well-known cancer categorization regarding the degree of malignancy. To assess the strength of ERS calibration, we further carried out a drug efficacy study using MDA-MB-231 cells and different concentrations of the anti-cancer drug paclitaxel (PTX). After ERS calibration, we can more clearly segregate the control/low-dosage groups (0 and 1.5 nM), the middle-dosage group (5 nM), and the group treated with the half-maximal inhibitory concentration (IC50, 15 nM). Therefore, we envision that ERS-calibrated SERS can find crucial opportunities in the label-free molecular profiling of complicated biological systems.
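The ratiometric principle behind the ERS internal standard can be illustrated with a toy simulation (synthetic numbers, not the authors' data): because both the analyte SERS signal and the ERS signal scale with the same |E|⁴ enhancement at a given hotspot, their ratio is insensitive to hotspot-to-hotspot variation:

```python
# Toy simulation of ratiometric ERS calibration. All values are synthetic
# and for illustration only: each hotspot gets a random |E|^4 enhancement,
# which multiplies both the analyte signal and the ERS internal standard,
# so the ratio recovers the concentration regardless of the hotspot.
import random

random.seed(0)
concentration = 10.0                          # arbitrary units
raw, calibrated = [], []
for _ in range(1000):
    enhancement = random.uniform(0.2, 5.0)    # |E|^4 varies between hotspots
    i_sers = enhancement * concentration      # analyte SERS signal
    i_ers = enhancement * 1.0                 # ERS signal, same hotspot
    raw.append(i_sers)
    calibrated.append(i_sers / i_ers)         # ratio: enhancement cancels

# Raw signals spread over more than an order of magnitude;
# every calibrated value equals the concentration exactly.
print(max(raw) / min(raw), calibrated[0])
```

In practice the cancellation is only approximate, since the two signals are measured at different Raman shifts, but the geometric variation is largely removed.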

Keywords: cancer cell drug efficacy, plasmonics, surface-enhanced Raman spectroscopy (SERS), SERS calibration

Procedia PDF Downloads 138
55 Familiarity with Intercultural Conflicts and Global Work Performance: Testing a Theory of Recognition Primed Decision-Making

Authors: Thomas Rockstuhl, Kok Yee Ng, Guido Gianasso, Soon Ang

Abstract:

Two meta-analyses show that intercultural experience is not related to intercultural adaptation or performance in international assignments. These findings have prompted calls for a deeper grounding of research on international experience in the phenomenon of global work. Two issues, in particular, may limit the current understanding of the relationship between international experience and global work performance. First, intercultural experience is too broad a construct and may not sufficiently capture the essence of global work, which to a large extent involves sensemaking and managing intercultural conflicts. Second, the psychological mechanisms through which intercultural experience affects performance remain under-explored, resulting in a poor understanding of how experience is translated into learning and performance outcomes. Drawing on recognition primed decision-making (RPD) research, the current study advances a cognitive processing model to highlight the importance of intercultural conflict familiarity. Compared to intercultural experience, intercultural conflict familiarity is a more targeted construct that captures individuals’ previous exposure to dealing with intercultural conflicts. Drawing on RPD theory, we argue that individuals’ intercultural conflict familiarity enhances their ability to make accurate judgments and generate effective responses when intercultural conflicts arise. In turn, the ability to make accurate situation judgments and effective situation responses is an important predictor of global work performance. A relocation program within a multinational enterprise provided the context to test these hypotheses using a time-lagged, multi-source field study. Participants were 165 employees (46% female; with an average of 5 years of global work experience) from 42 countries who relocated from country offices to regional offices as part of a global restructuring program.
Within the first two weeks of transfer to the regional office, employees completed measures of their familiarity with intercultural conflicts, cultural intelligence, cognitive ability, and demographic information. They also completed an intercultural situational judgment test (iSJT) to assess their situation judgment and situation response. The iSJT comprised four validated multimedia vignettes of challenging intercultural work conflicts and prompted employees to provide protocols of their situation judgment and situation response. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded the quality of employees’ situation judgment and situation response. Three months later, supervisors rated employees’ global work performance. Results using multilevel modeling (vignettes nested within employees) support the hypotheses that greater familiarity with intercultural conflicts is positively associated with better situation judgment and that situation judgment mediates the effect of intercultural conflict familiarity on situation response quality. Also, aggregated situation judgment and situation response quality both predicted supervisor-rated global work performance. Theoretically, our findings highlight, first, the important but under-explored role of familiarity with intercultural conflicts, shifting attention from the general nature of international experience assessed in terms of the number and length of overseas assignments. Second, our cognitive approach premised on RPD theory offers a new theoretical lens for understanding the psychological mechanisms through which intercultural conflict familiarity affects global work performance. Third, and importantly, our study contributes to the global talent identification literature by demonstrating that the cognitive processes engaged in resolving intercultural conflicts predict actual performance in the global workplace.

Keywords: intercultural conflict familiarity, job performance, judgment and decision making, situational judgment test

Procedia PDF Downloads 179
54 Railway Composite Flooring Design: Numerical Simulation and Experimental Studies

Authors: O. Lopez, F. Pedro, A. Tadeu, J. Antonio, A. Coelho

Abstract:

The future of the railway industry lies in the innovation of lighter, more efficient, and more sustainable trains. Weight optimizations in railway vehicles reduce power consumption and CO₂ emissions and increase the efficiency of the engines and the maximum speed reached. Additionally, they reduce the wear of wheels and rails, increase the space available for passengers, etc. Among the various systems that make up railway interiors, the flooring system is one of those with the greatest impact on passenger safety and comfort, as well as on the weight of the interior systems. Due to their high weight-saving potential, relatively high mechanical resistance, good acoustic and thermal performance, ease of modular design, cost-effectiveness, and long life, new sustainable composite materials and panels provide the latest innovations for competitive solutions in the development of flooring systems. However, one of the main drawbacks of flooring systems is their relatively poor resistance to point loads. Point loads in railway interiors can be caused by passengers or by components fixed to the flooring system, such as seats and restraint systems, handrails, etc. In this way, they can give rise to higher fatigue loads under service conditions, or to zones with high stress concentrations under exceptional loads (higher longitudinal, transverse, and vertical accelerations), thus reducing the system's useful life. Therefore, to verify all the mechanical and functional requirements of the flooring systems, many physical prototypes would be created during the design phase, with all the high costs associated with them. Nowadays, the use of virtual prototyping methods with computer-aided design (CAD) and computer-aided engineering (CAE) software allows a product to be validated before committing to making physical test prototypes.
The scope of this work was to use current computer tools to integrate the processes of innovation, development, and manufacturing, reducing the time from design to finished product and optimising the development of the product for higher levels of performance and reliability. In this case, the mechanical response of several sandwich panels with different cores, polystyrene foams and composite corks, was assessed to optimise the weight and the mechanical performance of a flooring solution for railways. Sandwich panels with aluminum face sheets were tested to characterise their mechanical performance and to determine the properties of the polystyrene foam and cork used as inner cores. Then, a railway flooring solution was fully modelled (including the elastomer pads that provide the required vibration isolation from the car body) and structural simulations were performed using FEM analysis to verify compliance with all the technical product specifications for the supply of a flooring system. Zones with high stress concentrations were studied and tested. The influence of vibration modes on the comfort level and stability is discussed. The information obtained with the computer tools was then complemented with several mechanical tests performed on some solutions and on specific components. The results of the numerical simulations and the experimental campaign carried out are presented in this paper. This research work was performed as part of the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020.
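The stiffness trade-off that drives core selection in such panels can be made concrete with classical first-order sandwich beam theory; the formulas below are a textbook sketch (not given in the abstract, and the symbols are our own): face sheets of thickness t_f and modulus E_f, a core of thickness t_c, modulus E_c and shear modulus G_c, with d = t_f + t_c the distance between face-sheet centroids.

```latex
% Equivalent flexural rigidity (per unit width) of a symmetric sandwich beam:
D \;=\; \frac{E_f t_f^{3}}{6} \;+\; \frac{E_f t_f d^{2}}{2} \;+\; \frac{E_c t_c^{3}}{12}
\;\approx\; \frac{E_f t_f d^{2}}{2}
\qquad \text{(thin faces, compliant core)}

% Midspan deflection in three-point bending adds a core-shear term,
% which is where the foam or cork core shear stiffness enters:
\delta \;=\; \frac{P L^{3}}{48\,D} \;+\; \frac{P L}{4\,S},
\qquad S \approx G_c\, d
```

The shear term explains why a lightweight core that barely changes D can still govern deflection and the panel's response to the point loads discussed above.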

Keywords: cork agglomerate core, mechanical performance, numerical simulation, railway flooring system

Procedia PDF Downloads 179
53 Classical Improvisation Facilitating Enhanced Performer-Audience Engagement and a Mutually Developing Impulse Exchange with Concert Audiences

Authors: Pauliina Haustein

Abstract:

Improvisation was part of Western classical concert culture and performers’ skill sets until early 20th century. Historical accounts, as well as recent studies, indicate that improvisatory elements in the programme may contribute specifically towards the audiences’ experience of enhanced emotional engagement during the concert. This paper presents findings from the author’s artistic practice research, which explored re-introducing improvisation to Western classical performance practice as a musician (cellist and ensemble partner/leader). In an investigation of four concert cycles, the performer-researcher sought to gain solo and chamber music improvisation techniques (both related to and independent of repertoire), conduct ensemble improvisation rehearsals, design concerts with an improvisatory approach, and reflect on interactions with audiences after each concert. Data was collected through use of reflective diary, video recordings, measurement of sound parameters, questionnaires, a focus group, and interviews. The performer’s empirical experiences and findings from audience research components were juxtaposed and interrogated to better understand the (1) rehearsal and planning processes that enable improvisatory elements to return to Western classical concert experience and (2) the emotional experience and type of engagement that occur throughout the concert experience for both performer and audience members. This informed the development of a concert model, in which a programme of solo and chamber music repertoire and improvisations were combined according to historically evidenced performance practice (including free formal solo and ensemble improvisations based on audience suggestions). 
Inspired by historical concert culture, where elements of risk-taking, spontaneity, and audience involvement (such as proposing themes for fantasies) were customary, this concert model invited musicians to contribute to the process personally and creatively at all stages, from programme planning, and throughout the live concert. The type of democratic, personal, creative, and empathetic collaboration that emerged, as a result, appears unique in Western classical contexts, rather finding resonance in jazz ensemble, drama, or interdisciplinary settings. The research identified features of ensemble improvisation, such as empathy, emergence, mutual engagement, and collaborative creativity, that became mirrored in audience’s responses, generating higher levels of emotional engagement, empathy, inclusivity, and a participatory, co-creative experience. It appears that during improvisatory moments in the concert programme, audience members started feeling more like active participants in a creative, collaborative exchange and became stakeholders in a deeper phenomenon of meaning-making and narrativization. Examining interactions between all involved during the concert revealed that performer-audience impulse exchange occurred on multiple levels of awareness and seemed to build upon each other, resulting in particularly strong experiences of both performer and audience’s engagement. This impact appeared especially meaningful for audience members who were seldom concertgoers and reported little familiarity with classical music. The study found that re-introducing improvisatory elements to Western classical concert programmes has strong potential in increasing audience’s emotional engagement with the musical performance, enabling audience members to connect more personally with the individual performers, and in reaching new-to-classical-music audiences.

Keywords: artistic research, audience engagement, audience experience, classical improvisation, ensemble improvisation, emotional engagement, improvisation, improvisatory approach, musical performance, practice research

Procedia PDF Downloads 128
52 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data, aiming at uncovering so-called “multimodal gestalts”, patterns of linguistic and embodied conduct that reoccur in specific sequential positions employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive processes of manual transcription of each component from video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in a way that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search of many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (which can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken and grammatical information from videos, and to correlate those different levels and perform queries and analyses.
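To illustrate the kind of cross-layer query described above (combining token-level and chronological search), the sketch below models time-aligned annotation layers and finds tokens that temporally overlap pointing gestures. The data structure and function names are our own minimal illustration, not VIAN-DH's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    start: float  # seconds from video start
    end: float
    layer: str    # annotation layer, e.g. "token" or "gesture"
    value: str    # the word, or the gesture label

def overlapping(a: Annotation, b: Annotation) -> bool:
    """True if the two annotation intervals overlap in time."""
    return a.start < b.end and b.start < a.end

def cooccurrences(tokens, gestures, gesture_value="pointing"):
    """Cross-layer query: (token, gesture) pairs where a spoken token
    temporally overlaps a gesture annotation of the given type."""
    return [(t, g)
            for t in tokens
            for g in gestures
            if g.value == gesture_value and overlapping(t, g)]
```

For example, a token "there" spoken at 0.5-0.9 s would be paired with a pointing gesture annotated at 0.45-1.2 s, while an earlier token outside that window would not.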

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 108
51 Networks, Regulations and Public Action: The Emerging Experiences of Sao Paulo

Authors: Lya Porto, Giulia Giacchè, Mario Aquino Alves

Abstract:

The paper aims to describe the linkage between government and civil society through a study of agro-ecological agriculture policy and urban action in São Paulo city, underlining the main achievements obtained. The negotiation processes between social movements and the government (inputs) and their results in political regulation and public action for Urban Agriculture (UA) in São Paulo city (outputs) have been investigated. The method adopted is qualitative, with techniques of semi-structured interviews, participant observation, and documental analysis. The authors conducted 30 semi-structured interviews with organic farmers, activists, and governmental and non-governmental managers. Participant observation was conducted in public gardens, urban farms, public audiences, democratic councils, and social movements' meetings. Finally, public plans and laws were also analyzed. São Paulo, with around 12 million inhabitants spread over 1,522 km², is the economic capital of Brazil, marked by spatial and socioeconomic segregation, currently aggravated by an environmental crisis characterized by water scarcity, pollution, and climate change. In recent years, Urban Agriculture (UA) social movements have gained strength and struggle for a different city with more green areas, organic food production, and public occupation. As the dynamics of UA arise from the action of multiple actors and institutions that struggle to build multiple meanings of UA, the analysis will be based on the literature on solidarity economy, governance, public action and networks. Those theories will mark out the analysis, which will emphasize the inter-subjectivity built between subjects, as well as the hybrid dynamics of multiple actors and spaces in the construction of policies for UA. Concerning UA, we identified four main typologies based on land ownership, main function (economic or activist), form of organization of the space, and type of production (organic or not).
The City Hall registers 500 productive agriculture units, with around 1,500 producers, but the researchers estimate a larger number of units. Concerning the social movements, we identified three categories that differ in goals and types of organization, but all of them work through networks of activists and/or organizations. The first category does not consider itself a movement, but a network. They occupy public spaces to grow organic food and to propose another type of social relations in the city. This action is similar to what became known as the green guerrillas. The second is configured as a movement that is structured to raise awareness about agro-ecological activities. The third is a network of social movements, farmers, organizations and politicians that works through pressure on and negotiation with the executive and legislative branches of government to approve regulations and policies on organic and agro-ecological Urban Agriculture. We conclude by highlighting how the interaction between institutions and civil society produced important achievements for the recognition and implementation of UA within the city. Some results of this process are greater awareness of local production, legal and institutional recognition of the rural zone around the city in planning instruments, investment in organic public school procurement, the establishment of participatory management of public squares, and the inclusion of UA in the Municipal Strategic Plan and Master Plan.

Keywords: public action, policies, agroecology, urban and peri-urban agriculture, Sao Paulo

Procedia PDF Downloads 294
50 Production of Bioethanol from Oil Palm Trunk by Cocktail Carbohydrase Enzymes Produced by Thermophilic Bacteria Isolated from a Hot Spring in West Sumatera, Indonesia

Authors: Yetti Marlida, Syukri Arif, Nadirman Haska

Abstract:

Recently, alcohol fuels have been produced on industrial scales by fermentation of sugars derived from wheat, corn, sugar beets, sugar cane, etc. The enzymatic hydrolysis of cellulosic materials to produce fermentable sugars has an enormous potential for meeting global bioenergy demand through the biorefinery concept, since agri-food processes generate millions of tonnes of waste each year (Xeros and Christakopoulos 2009), such as sugarcane bagasse, wheat straw, rice straw, corn cob, and oil palm trunk. In fact, oil palm trunk is one of the most abundant lignocellulosic waste by-products worldwide, coming especially from Malaysia, Indonesia and Nigeria, and provides an alternative substrate to produce useful chemicals such as bioethanol. The economical life of the oil palm usually spans from 3 to 25 years of age, after which it is cut down for replantation. The trunk is usually 15-18 meters in length and 46-60 centimeters in diameter. After cutting, the trunk is an agricultural waste that causes disposal problems, but because it contains about 42% cellulose, 34.4% hemicellulose, 17.1% lignin and 7.3% other compounds, this agricultural waste could yield value-added products (Pumiput, 2006). This research addressed the production of bioethanol from oil palm trunk (OPT) via saccharification by cocktail carbohydrase enzymes. Enzymatic saccharification of acid-treated OPT was carried out in a reaction mixture containing 40 g of treated OPT in 200 ml of 0.1 M citrate buffer, pH 4.8, with the following enzyme additions: treatment A, 500 unit/kg amylase; treatment B, treatment A + 500 unit/kg cellulase; treatment C, treatment B + 500 unit/kg xylanase; treatment D, treatment C + 500 unit/kg ligninase; and treatment E, untreated OPT + 500 unit/kg each of amylase, cellulase, xylanase and ligninase. The reaction mixture was incubated on a water bath rotary shaker adjusted to 60°C and 75 rpm. Samples were withdrawn at intervals of 12, 24, 36, 48, 60, and 72 hr.
For bioethanol production in a 5 L biofermentor, the hydrolysis product was inoculated with a loop of Saccharomyces cerevisiae and then incubated at 34°C under static conditions. Samples were withdrawn after 12, 24, 36, 48 and 72 hr for bioethanol and residual glucose analysis. The results of the enzymatic hydrolysis (Figure 1) showed that treatment B (OPT hydrolyzed with amylase and cellulase) gave the optimum conditions for glucose production, as both enzymes degraded OPT completely. Similar results were reported by Primarini et al. (2012), who found the optimum conditions for the hydrolysis of OPT at a concentration of 25% (w/v) with 0.3% (w/v) amylase, 0.6% (w/v) glucoamylase and 4% (w/v) cellulase. Figure 2 shows that the optimum bioethanol was produced at 48 hr of incubation; with longer incubation times, the bioethanol concentration decreased. According to Roukas (1996), a decrease in ethanol concentration occurs with excess glucose as substrate and with product inhibition effects. An excessively high substrate concentration reduces the amount of dissolved oxygen; although needed only in very small amounts, oxygen is still required in fermentation by Saccharomyces cerevisiae to sustain life at high cell concentrations (Nowak 2000, Tao et al. 2005). It can be concluded that the optimum enzymatic hydrolysis occurred when the OPT was treated with amylase and cellulase, and that the optimum bioethanol was produced at 48 hr of incubation using Saccharomyces cerevisiae, with 18.08% bioethanol produced from glucose conversion. This work was funded by the Directorate General of Higher Education (DGHE), Ministry of Education and Culture, contract no. 245/SP2H/DIT.LimtabMas/II/2013.
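For context on the glucose-to-ethanol conversion figures discussed above, the standard fermentation stoichiometry (textbook values, not stated in the abstract) sets a theoretical ceiling on the mass yield of ethanol from glucose:

```latex
% Glucose fermentation by S. cerevisiae:
\mathrm{C_6H_{12}O_6} \;\longrightarrow\; 2\,\mathrm{C_2H_5OH} \;+\; 2\,\mathrm{CO_2}

% Maximum theoretical mass yield (molar masses: ethanol 46.07, glucose 180.16):
Y_{\mathrm{max}} \;=\; \frac{2 \times 46.07}{180.16} \;\approx\; 0.511\ \frac{\mathrm{g\ ethanol}}{\mathrm{g\ glucose}}
```

Any reported conversion can thus be compared against this ~51.1% stoichiometric maximum.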

Keywords: oil palm trunk, enzymatic hydrolysis, saccharification

Procedia PDF Downloads 514
49 A 3D Intestine-On-Chip Model Allows Colonization with Commensal Bacteria to Study Host-Microbiota Interaction

Authors: Michelle Maurer, Antonia Last, Mark S. Gresnigt, Bernhard Hube, Alexander S. Mosig

Abstract:

The intestinal epithelium forms an essential barrier to prevent translocation of microorganisms, toxins or other potentially harmful molecules into the bloodstream. In particular, dendritic cells of the intestinal epithelium orchestrate an adapted response of immune tolerance to commensals and immune defense against invading pathogens. Systemic inflammation is typically associated with a dysregulation of this adapted immune response and is accompanied by a disruption of the epithelial and endothelial gut barrier which enables dissemination of pathogens within the human body. To understand the pathophysiological mechanisms underlying the inflammation-associated gut barrier breakdown, it is crucial to elucidate the complex interplay of the host and the intestinal microbiome. A microfluidically perfused three-dimensional intestine-on-chip model was established to emulate these processes in the presence of immune cells, commensal bacteria, and facultative pathogens. Multi-organ tissue flow (MOTiF) biochips made from polystyrene were used for microfluidic perfusion of the intestinal tissue model. The biochips are composed of two chambers separated by a microporous membrane. Each chamber is connected to inlet and outlet channels allowing independent perfusion of the individual channels and application of microfluidic shear stress. Human umbilical vein endothelial cells (HUVECs), monocyte-derived macrophages and intestinal epithelial cells (Caco-2) were assembled on the biochip membrane. Following 7 – 14 days of growth in the presence of physiological flow conditions, the epithelium was colonized with the commensal bacterium Lactobacillus rhamnosus, while the endothelium was perfused with peripheral blood mononuclear cells (PBMCs). Additionally, L. rhamnosus was co-cultivated with the opportunistic fungal pathogen Candida albicans. 
Within one week of perfusion, the epithelial cells formed self-organized and well-polarized villus- and crypt-like structures that resemble essential morphological characteristics of the human intestine. Dendritic cells differentiated within the epithelial tissue and specifically responded to bacterial lipopolysaccharide (LPS) challenge. LPS was well-tolerated at the luminal epithelial side of the intestinal model without signs of tissue damage or induction of an inflammatory response, even in the presence of circulating PBMCs at the endothelial lining. In contrast, LPS stimulation at the endothelial side of the intestinal model triggered the release of pro-inflammatory cytokines such as TNF, IL-1β, IL-6, and IL-8 via activation of macrophages residing in the endothelium. Perfusion of the endothelium with PBMCs led to an enhanced cytokine release. L. rhamnosus colonization was tolerated in the immune-competent tissue model and was demonstrated to reduce the damage induced by C. albicans infection. A microfluidic intestine-on-chip model was thus developed to mimic a systemic infection with a dysregulated immune response under physiological conditions. The model facilitates the colonization of commensal bacteria and co-cultivation with facultative pathogenic microorganisms. Both commensal bacteria alone and facultative pathogens controlled by commensals are tolerated by the host and contribute to cell signaling. The human intestine-on-chip model represents a promising tool to mimic microphysiological conditions of the human intestine and paves the way for more detailed in vitro studies of host-microbiota interactions under physiologically relevant conditions.

Keywords: host-microbiota interaction, immune tolerance, microfluidics, organ-on-chip

Procedia PDF Downloads 131
48 The Securitization of the European Migrant Crisis (2015-2016): Applying the Insights of the Copenhagen School of Security Studies to a Comparative Analysis of Refugee Policies in Bulgaria and Hungary

Authors: Tatiana Rizova

Abstract:

The migrant crisis, which peaked in 2015-2016, posed an unprecedented challenge to the European Union’s (EU) newest member states, including Bulgaria and Hungary. Their governments had to formulate sound migration policies with expediency and sensitivity to the needs of millions of people fleeing violent conflicts in the Middle East and failed states in North Africa. Political leaders in post-communist countries had to carefully coordinate with other EU member states on joint policies and solutions while minimizing the risk of alienating their increasingly anti-migrant domestic constituents. Post-communist member states’ governments chose distinct policy responses to the crisis, which were dictated by factors such as their governments’ partisan stances on migration, their views of the European Union, and the decision to frame the crisis as a security or a humanitarian issue. This paper explores how two Bulgarian governments (Boyko Borisov’s second and third government formed during the 43rd and 44th Bulgarian National Assembly, respectively) navigated the processes of EU migration policy making and managing the expectations of their electorates. Based on a comparative analysis of refugee policies in Bulgaria and Hungary during the height of the crisis (2015-2016) and a temporal analysis of refugee policies in Bulgaria (2015-2018), the paper advances the following conclusions. Drawing on insights of the Copenhagen school of security studies, the paper argues that cultural concerns dominated domestic debates in both Bulgaria and Hungary; both governments framed the issue predominantly as a matter of security rather than humanitarian disaster. Regardless of the similarities in issue framing, however, the two governments sought different paths of tackling the crisis. 
While the Bulgarian government demonstrated its willingness to comply with EU decisions (such as the proposal for mandatory quotas for refugee relocation), the Hungarian government defied EU directives and became a leading voice of dissent inside the EU. The current Bulgarian government (April 2017 - present) appears to be committed to complying with EU decisions and accepts the strategy of EU burden-sharing, while the Hungarian government has continually snubbed the EU’s appeals for cooperation despite the risk of hefty financial penalties. Hungary’s refugee policies have been influenced by the parliamentary representation of the far right-wing party Movement for a Better Hungary (Jobbik), which has encouraged the majority party (FIDESZ) to adopt harsher anti-migrant rhetoric and more hostile policies toward refugees. Bulgaria’s current government is a coalition of the center-right Citizens for a European Development of Bulgaria (GERB) and its far right-wing junior partners – the United Patriots (comprised of three nationalist political parties). The parliamentary presence of Jobbik in Hungary’s parliament has magnified the anti-migrant stance, rhetoric, and policies of Mr. Orbán’s Civic Alliance; we have yet to observe a substantial increase in the anti-migrant rhetoric and policies in Bulgaria’s case. Analyzing responses to the migrant/refugee crisis is a critical opportunity to understand how issues of cultural identity and belonging, inclusion and exclusion, regional integration and disintegration are debated and molded into policy in Europe’s youngest member states in the broader EU context.

Keywords: Copenhagen School, migrant crisis, refugees, security

Procedia PDF Downloads 121
47 A Comparative Evaluation of Cognitive Load Management: Case Study of Postgraduate Business Students

Authors: Kavita Goel, Donald Winchester

Abstract:

In a world of information overload and work complexities, academics often struggle to create an online instructional environment enabling efficient and effective student learning. Research has established that students’ learning styles differ; some learn faster when taught using audio and visual methods. Attributes like prior knowledge and mental effort affect their learning. Cognitive load theory holds that learners have limited processing capacity. Cognitive load depends on the learner’s prior knowledge, the complexity of content and tasks, and the instructional environment. Hence, the proper allocation of cognitive resources is critical for students’ learning. Consequently, a lecturer needs to understand the limits and strengths of human learning processes and the various learning styles of students, and to accommodate these requirements while designing online assessments. As acknowledged in the cognitive load theory literature, visual and auditory explanations of worked examples potentially lead to a reduction of cognitive load (effort) and increased facilitation of learning when compared to conventional sequential text problem solving. This helps learners utilize both subcomponents of their working memory. Instructional design changes were introduced at the case site for the delivery of the postgraduate business subjects. To make effective use of auditory and visual modalities, video-recorded lectures and key concept webinars were delivered to students. Videos were prepared to free students’ limited working memory from irrelevant mental effort, as all elements on a visual screen can be viewed simultaneously and processed quickly, which facilitates greater psychological processing efficiency. Most case study students in the postgraduate programs are adults, working full-time at higher management levels, and studying part-time. Their learning styles and needs are different from those of other tertiary students.
The purpose of the audio and visual interventions was to lower the students’ cognitive load and provide an online environment supportive of their efficient learning. These changes were expected to favourably impact the students’ learning experience, academic performance, and retention. This paper posits that these changes to instructional design facilitate students’ integration of new knowledge into their long-term memory. A mixed methods case study methodology was used in this investigation. Primary data were collected from interviews and surveys of students and academics. Secondary data were collected from the organisation’s databases and reports. Some evidence was found that the academic performance of students improves when the new instructional design changes are introduced, although the improvement was not statistically significant. However, the overall grade distribution of students’ academic performance changed and skewed higher, which suggests deeper understanding of the content. Feedback received from students indicated that recorded webinars served as better learning aids than material with text alone, especially for more complex content. The recorded webinars on the subject content and assessments give students the flexibility to access this material at any time from repositories, as many times as needed, which suits their learning styles. Visual and audio information enters students’ working memory more effectively. Also, as each assessment included the application of the concepts, conceptual knowledge interacted with the pre-existing schema in long-term memory and lowered students’ cognitive load.

Keywords: cognitive load theory, learning style, instructional environment, working memory

Procedia PDF Downloads 145
46 Amphiphilic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Algae

Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres

Abstract:

Biofilm is a predominant lifestyle chosen by bacteria. Whether it is developed on an immersed surface or as a mobile biofilm known as flocs, the bacteria within this form of life show properties different from their planktonic ones. Within the biofilm, the self-formed matrix of Extracellular Polymeric Substances (EPS) offers hydration, resource capture, enhanced resistance to antimicrobial agents, and allows cell communication. Biofouling is a complex natural phenomenon that involves biological, physical and chemical properties related to the environment, the submerged surface and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by biocolonized vessels has been widely studied. Measurement drift in submerged sensors, as well as obstructions in heat exchangers and deterioration of offshore structures, are major difficulties that industries are dealing with. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature. Studying the biofilm and its stages provides a better understanding of how to elaborate more efficient antifouling strategies. Several approaches are currently applied, such as the use of biocide antifouling paints (mainly with copper derivatives) and super-hydrophobic coatings. While these two processes are proving to be the most effective, they are not entirely satisfactory, especially in a context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocide compounds, offering a cost-effective solution but with no toxic effects on marine organisms.
Since the micro-fouling phase plays an important role in the regulation of the following steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent work, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at a concentration that did not affect eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipid to prevent the colonization of surfaces by marine bacteria. Of note, other studies reported that amphiphilic compounds interacted with bacteria, leading to a reduction of their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce the nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids that feature a cationic charge (BSV36, KLN47) or a zwitterionic polar-head group (SL386, MB2871) in preventing microfouling by marine bacteria. We also study the toxicity of these compounds in order to identify the most promising compound, which must combine high anti-adhesive properties with low cytotoxicity towards two links representative of coastal marine food webs: phytoplankton and oyster larvae.

Keywords: amphiphilic phospholipids, bacterial biofilm, marine microfouling, non-toxic antifouling

Procedia PDF Downloads 147
45 IEEE 802.15.4e Based Scheduling Mechanisms and Systems for Industrial Internet of Things

Authors: Ho-Ting Wu, Kai-Wei Ke, Bo-Yu Huang, Liang-Lin Yan, Chun-Ting Lin

Abstract:

With recent technological advances, the wireless sensor network (WSN) has become one of the most promising candidates for implementing the wireless industrial internet of things (IIoT) architecture. However, legacy IEEE 802.15.4-based WSN technologies such as Zigbee cannot meet the stringent QoS requirements of low-power, real-time, and highly reliable transmission imposed by the IIoT environment. Recently, the IEEE developed the IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) access mode to serve this purpose. Furthermore, the IETF 6TiSCH working group has proposed standards to integrate IEEE 802.15.4e smoothly with the IPv6 protocol to form a complete protocol stack for the IIoT. In this work, we develop key network technologies for an IEEE 802.15.4e-based wireless IIoT architecture, focusing on practical design and system implementation. We realize an OpenWSN-based wireless IIoT system. The system architecture is divided into three main parts: web server, network manager, and sensor nodes. The web server provides the user interface, allowing the user to view the status of sensor nodes and to instruct them to follow commands via a user-friendly browser. The network manager is responsible for the establishment, maintenance, and management of scheduling and topology information. It executes the centralized scheduling algorithm, sends the scheduling table to each node, and manages the sensing tasks of each device. Sensor nodes complete the assigned tasks and send the sensed data. Furthermore, to prevent scheduling errors due to packet loss, a schedule inspection mechanism is implemented to verify the correctness of the schedule table. In addition, when the network topology changes, the system generates a new schedule table based on the changed topology to ensure proper operation. To further enhance system performance, we propose dynamic bandwidth allocation and distributed scheduling mechanisms. 
The developed distributed scheduling mechanism enables each individual sensor node to build, maintain, and manage the dedicated link bandwidth with its parent and child nodes based on locally observed information, by exchanging Add/Delete commands via two processes. The first process, termed the schedule initialization process, allows each sensor node pair to identify the available idle slots and allocate the basic dedicated transmission bandwidth. The second process, termed the schedule adjustment process, enables each sensor node pair to adjust their allocated bandwidth dynamically according to the measured traffic load. Such a mechanism can satisfy the dynamic bandwidth requirements of frequently changing environments. Last but not least, we propose a packet retransmission scheme to enhance the performance of the centralized scheduling algorithm when the packet delivery rate (PDR) is low. This multi-frame retransmission mechanism allows every network node to resend each packet up to a predefined number of times. The multi-frame architecture is built according to the number of layers of the network topology. Performance results obtained via simulation reveal that this retransmission scheme provides sufficiently high transmission reliability while maintaining low packet transmission latency. Therefore, the QoS requirements of the IIoT can be achieved.
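The two-process idea described above can be sketched in a few lines of Python. This is a simplified illustration with hypothetical slot bookkeeping and parameter names, not the implemented protocol: a node pair first claims idle slots for its basic bandwidth, then issues Add/Delete commands to track the measured traffic load.

```python
# Hedged sketch of the distributed scheduling mechanism: slots are plain
# integers, and the "Add"/"Delete" commands are modeled as list operations.

def initialize_schedule(idle_slots, basic_demand):
    """Schedule initialization: claim idle slots for the basic dedicated bandwidth."""
    allocated = sorted(idle_slots)[:basic_demand]
    remaining = [s for s in sorted(idle_slots) if s not in allocated]
    return allocated, remaining

def adjust_schedule(allocated, remaining, measured_load, margin=1):
    """Schedule adjustment: Add slots when the measured load exceeds the
    allocation, Delete surplus slots when it drops below the allocation."""
    target = measured_load + margin
    while len(allocated) < target and remaining:
        allocated.append(remaining.pop(0))    # corresponds to an "Add" command
    while len(allocated) > target and len(allocated) > 1:
        remaining.insert(0, allocated.pop())  # corresponds to a "Delete" command
    return allocated, remaining

alloc, idle = initialize_schedule(range(10), basic_demand=2)
alloc, idle = adjust_schedule(alloc, idle, measured_load=4)
print(alloc)  # the link now holds five dedicated slots
```

A real node would of course negotiate each Add/Delete with its neighbor over the air; the sketch only shows the local bookkeeping that drives those commands.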

Keywords: IEEE 802.15.4e, industrial internet of things (IIoT), scheduling mechanisms, wireless sensor networks (WSN)

Procedia PDF Downloads 161
44 Branding Capability Developed from Country-Specific and Firm-Specific Resources for Internationalizing Small and Medium Enterprises

Authors: Hsing-Hua Stella Chang, Mong-Ching Lin, Cher-Min Fong

Abstract:

There has recently been a notable rise in the number of emerging-market industrial small and medium-sized enterprises (SMEs) that have managed to upgrade their operations. Evolving from original equipment manufacturing (OEM) into value-added original or own brand manufacturing (OBM) represents a specific process of internationalization for such firms. The OEM-OBM upgrade requires the development of a firm’s own brand. In this respect, the extant literature points out that emerging-market industrial marketers (latecomers) have developed some marketing capabilities, of which branding has been identified as one of the most important. Specifically, an industrial non-brand marketer (OEM) marks the division of labor between manufacturing and branding (as part of marketing). In light of this discussion, this research argues that branding capability plays a critical role in supporting the evolution of manufacturing upgrading. This is because a smooth transformation from OEM to OBM entails the establishment of strong brands, through which branding capability is developed. Accordingly, branding capability can be exemplified as a series of processes and practices for mobilizing branding resources and orchestrating branding activities, which result in the establishment of business relationships, greater acceptance by business partners (channels, suppliers), and increased industrial brand equity in the firm as key resource advantages. For the study purpose, Taiwan was chosen as the research context, representing a typical case that exemplifies the industrial development path of more-established emerging markets, namely, the transformation from OEM to OBM. This research adopted a two-phase design comprising exploratory (a qualitative study) and confirmatory (a survey study) approaches. The findings show that country-specific advantage is positively related to branding capability for internationalizing SMEs. 
Firm-specific advantage is also positively related to branding capability for internationalizing SMEs, and branding capability is positively related to international performance for internationalizing SMEs. This study presents a pioneering effort to distinguish industrial brand marketers from non-brand marketers in exploring the role of branding capability in the internationalization of small and medium-sized industrial brand marketers from emerging markets. Specifically, when industrial non-brand marketers (OEMs) enter a more advanced stage of internationalization (i.e., OBM), they must overcome disadvantages (liabilities of smallness, foreignness, and outsidership) that do not apply to incumbent developed-country MNEs with leading brands. Such critical differences mark the urgency and significance of distinguishing industrial brand marketers from non-brand marketers on issues relating to their value-adding branding and marketing practices in international markets. This research thus makes important contributions to the international marketing, industrial branding, and SME internationalization literature.

Keywords: brand marketers, branding capability, emerging markets, SME internationalization

Procedia PDF Downloads 81
43 Impact of Elevated Temperature on Spot Blotch Development in Wheat and Induction of Resistance by Plant Growth Promoting Rhizobacteria

Authors: Jayanwita Sarkar, Usha Chakraborty, Bishwanath Chakraborty

Abstract:

Plants constantly interact with various abiotic and biotic stresses. In a changing climate scenario, plants continuously modify physiological processes to adapt to changing environmental conditions, which profoundly affects plant-pathogen interactions. Spot blotch in wheat is a fast-rising disease in the warmer plains of South Asia, where the rise in minimum average temperature over most of the year is already affecting wheat production. Hence, this study was undertaken to explore the role of elevated temperature in spot blotch disease development and the modulation of antioxidative responses by plant growth promoting rhizobacteria (PGPR) for biocontrol of spot blotch at high temperature. Elevated temperature significantly increases the susceptibility of wheat plants to the spot blotch-causing pathogen Bipolaris sorokiniana. Two PGPR, Bacillus safensis (W10) and Ochrobactrum pseudogrignonense (IP8), isolated from wheat (Triticum aestivum L.) and blady grass (Imperata cylindrica L.) rhizospheres, respectively, and showing in vitro antagonistic activity against B. sorokiniana, were tested for growth promotion and induction of resistance against spot blotch in wheat. GC-MS analysis showed that both bacteria produced antifungal and antimicrobial compounds in culture. Seed priming with these two bacteria significantly increased growth, modulated antioxidative signaling, induced resistance, and eventually reduced disease incidence in wheat plants at optimum as well as elevated temperature, which was further confirmed by indirect immunofluorescence assay using a polyclonal antibody raised against B. sorokiniana. Application of the PGPR led to enhanced activities of the plant defense enzymes phenylalanine ammonia lyase, peroxidase, chitinase, and β-1,3-glucanase in infected leaves. 
Immunolocalization of chitinase and β-1,3-glucanase in PGPR-primed and pathogen-inoculated leaf tissue was further confirmed by transmission electron microscopy using polyclonal antibodies against chitinase and β-1,3-glucanase with gold-labelled conjugates. Activities of ascorbate-glutathione redox cycle enzymes such as ascorbate peroxidase, superoxide dismutase, and glutathione reductase, along with antioxidants such as carotenoids, glutathione, and ascorbate, and the accumulation of osmolytes like proline and glycine betaine, also increased during disease development in PGPR-primed plants in comparison to unprimed plants at high temperature. Real-time PCR analysis revealed enhanced expression of the defense genes chalcone synthase and phenylalanine ammonia lyase. Overexpression of heat shock proteins like HSP 70 and small HSP 26.3 and of the heat shock factor HsfA3 in PGPR-primed plants effectively protects plants against spot blotch infection at elevated temperature as compared with control plants. Our results reveal dynamic biochemical crosstalk between elevated temperature and spot blotch disease development and, furthermore, highlight a PGPR-mediated array of antioxidative and molecular alterations responsible for the induction of resistance against spot blotch at elevated temperature, which appears to be associated with up-regulation of defense genes, heat shock proteins, and heat shock factors, reduced ROS production and membrane damage, increased expression of redox enzymes, and accumulation of osmolytes and antioxidants.
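The abstract reports enhanced defense-gene expression measured by real-time PCR but does not state the quantification method. The sketch below assumes the widely used 2^(-ΔΔCt) relative-quantification approach, with entirely made-up Ct values, purely to illustrate how such fold-change figures are typically computed.

```python
# Hypothetical illustration of the 2^(-ΔΔCt) method (an assumption; the study's
# actual quantification procedure is not given in the abstract).

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene in treated vs control samples,
    normalized to a reference (housekeeping) gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize treated sample
    d_ct_control = ct_target_control - ct_ref_control    # normalize control sample
    dd_ct = d_ct_treated - d_ct_control                  # delta-delta Ct
    return 2 ** (-dd_ct)

# e.g. a defense gene in PGPR-primed vs unprimed plants (hypothetical Ct values):
# lower Ct in the primed sample means earlier amplification, i.e. more transcript.
print(fold_change(22.0, 18.0, 25.0, 18.0))  # 2^3 = 8-fold up-regulation
```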

Keywords: antioxidative enzymes, defense enzymes, elevated temperature, heat shock proteins, PGPR, Real-Time PCR, spot blotch, wheat

Procedia PDF Downloads 171
42 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case

Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

The European Green Deal is the first internationally agreed set of measures to counter climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the goal of accompanying European countries through an energy transition that will make the European Union a modern, resource-efficient, and competitive net-zero-emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the Green Deal framework dates back to 2019, strategies and policies are still being developed in response to recent circumstances and achievements. General long-term measures such as the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases (in particular biomethane and hydrogen), and the plan to end the sale of gasoline and diesel cars by 2035 will all have significant effects on the evolution of energy supply and demand across the next decades. The interactions between energy supply and demand over long time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework implemented entirely in Python, thereby enabling third-party verification even on large and complex models. 
TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100. It relies on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and on a large technological dataset covering 7 sectors: the upstream and power sectors, for the production of all energy commodities, and the end-use sectors, including industry, transport, residential, commercial, and agriculture. TEMOA-Europe also includes an updated hydrogen module covering hydrogen production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies – ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector – with a techno-economic characterization based on public literature, to produce insightful energy scenarios and especially to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies devised to realize the Green Deal's objectives and to translate them into a set of constraints and new socio-economic development pathways. Based on these, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
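As a rough illustration of the cost-minimization principle underlying TEMOA-like models (this is emphatically not the TEMOA formulation, which is a full inter-temporal linear program), the toy sketch below meets a fixed demand with two hypothetical technologies at minimum total cost while respecting an emissions cap, by scanning the share of the cleaner technology.

```python
# Toy sketch: minimize total cost of meeting demand under an emissions cap.
# "dirty" is cheap but emitting; "clean" is costly but emission-free. All
# numbers are invented for illustration.

def dispatch(demand, cost, emis, cap, steps=1000):
    """cost/emis: per-unit cost and emissions of (dirty, clean) technologies.
    Returns (total_cost, dirty_output, clean_output) for the cheapest feasible mix."""
    best = None
    for i in range(steps + 1):
        clean = demand * i / steps
        dirty = demand - clean
        total_emis = dirty * emis[0] + clean * emis[1]
        if total_emis <= cap:  # Green Deal-style constraint: emissions cap
            total_cost = dirty * cost[0] + clean * cost[1]
            if best is None or total_cost < best[0]:
                best = (total_cost, dirty, clean)
    return best

# demand 100; dirty: 10 EUR/unit, 1 tCO2/unit; clean: 30 EUR/unit, 0 tCO2/unit;
# cap 40 tCO2 -> at most 40 units of dirty output are allowed.
result = dispatch(100, (10, 30), (1, 0), cap=40)
print(result)
```

Tightening the cap forces more of the expensive clean technology into the mix, which is exactly the kind of trade-off a scenario analysis with Green Deal constraints explores, albeit over decades, many technologies, and several sectors at once.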

Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe

Procedia PDF Downloads 105