Search results for: empirical mode decomposition (EMD)
Paper Count: 4937

107 Sustainable Urban Growth of Neighborhoods: A Case Study of Alryad-Khartoum

Authors: Zuhal Eltayeb Awad

Abstract:

Alryad neighborhood is located in Khartoum town, the administrative center of the capital of Sudan. The neighborhood is one of the high-income residential areas, with low-density villa-type development. It was planned and developed in 1972 with large plots (600-875 m²), wide crossing roads and a balanced environment. Recently the area has transformed into a more compact urban form of high-density, mixed-use integrated development with more intensive use of land and multi-storied apartments. The most important socio-economic process in the neighborhood has been the commercialization and densification of the area, in connection with the displacement of the residential function. This transformation has affected the quality of the neighborhood and the inter-related features of the built environment. A case study approach was chosen to gather the necessary qualitative and quantitative data. A detailed survey of the existing development pattern was carried out over the whole area of Alryad. Data on the built and social environment of the neighborhood were collected through observations, interviews and secondary data sources. The paper reflects a theoretical and empirical interest in the particular characteristics of the compact neighborhood, with high density and mixed land uses, and their effect on the social wellbeing of the residents, all in the context of sustainable development. The research problem focuses on the challenges of transformation associated with the compact neighborhood, which has created multiple urban problems, e.g., stress on essential services (water supply, electricity, and drainage), congestion of streets and demand for parking. The main objective of the study is to analyze the transformation of this area from residential use to commercial and administrative use. The study analyzed the current situation of the neighborhood against the five principles of sustainable neighborhoods prepared by UN-Habitat. The study found that the neighborhood has experienced the changes that occur to inner-city residential areas and that the process of change was originated by external forces due to the declining economic situation of the whole country. It is evident that non-residential uses have taken place in an uncontrolled, unregulated and haphazard way, which has damaged the residential environment and created deficiencies in infrastructure. The quality of urban life, and in particular the level of privacy, has been reduced, and the neighborhood has gradually changed into a central business district that provides services to the whole of Khartoum town. The change of house type may be attributed to a demand-led housing market and the absence of policy. The results showed that Alryad is not fully sustainable and self-contained, although its street network characteristics and mixed land-use development are compatible with the principles of sustainability. The area of streets represents 27.4% of the total area of the neighborhood. Residential density is 4,620 people/km², which is lower than the recommendations, and limited block land-use specialization is higher than 10% of the blocks. Most inhabitants have a high income, so there is no social mix in the neighborhood. The study recommends revision of the current zoning regulations in order to control and regulate undesirable development in the neighborhood and to provide new solutions that promote the neighborhood's sustainable development.

Keywords: compact neighborhood, land uses, mixed use, residential area, transformation

Procedia PDF Downloads 106
106 Angiopermissive Foamed and Fibrillar Scaffolds for Vascular Graft Applications

Authors: Deon Bezuidenhout

Abstract:

Pre-seeding with autologous endothelial cells improves the long-term patency of synthetic vascular grafts to the levels obtained with autografts, but is limited to a single centre due to resource, time and other constraints. Spontaneous in vivo endothelialization would obviate the need for pre-seeding, but has been shown to be absent in man due to limited transanastomotic and fallout healing, and the lack of transmural ingrowth due to insufficient porosity. Two types of graft scaffolds with increased interconnected porosity for improved tissue ingrowth and healing are thus proposed and described. Foam-type polyurethane (PU) scaffolds with small, medium and large interconnected pores were made by phase inversion and spherical porogen extraction, with and without additional surface modification with covalently attached heparin and subsequent loading with and delivery of growth factors. Fibrillar scaffolds were made either by standard electrospinning using degradable PU (Degrapol®), or by dual electrospinning using non-degradable PU. The latter process involves sacrificial fibres that are co-spun with structural fibres and subsequently removed to increase porosity and pore size. Degrapol samples were subjected to in vitro degradation, and all scaffold types were evaluated in vivo for tissue ingrowth and vascularization using a rat subcutaneous model. The foam scaffolds were additionally evaluated in a circulatory (rat infrarenal aortic interposition) model that allows the grafts to be anastomotically and/or ablumenally isolated in order to discern and determine the endothelialization mode. Foam-type grafts with large (150 µm) pores showed improved subcutaneous healing in terms of vascularization and inflammatory response over smaller pore sizes (60 and 90 µm), and vascularization of the large-porosity scaffolds was significantly increased: by more than 70% with heparin modification alone, and by 150% to 400% when combined with growth factors. In the circulatory model, extensive transmural endothelialization (95±10% at 12 w) was achieved. Fallout healing was shown to be sporadic and limited in groups that were ablumenally isolated to prevent transmural ingrowth (16±30% wrapped vs. 80±20% control; p<0.002). Heparinization and GF delivery improved both mural vascularization and lumenal endothelialization. Degrapol electrospun scaffolds showed a decrease in molecular mass and corresponding tensile strength over the first 2 weeks, but very little decrease in mass over the 4 w test period. Studies on the effect of tissue ingrowth, with and without concomitant degradation of the scaffolds, are being used to develop material models for finite element modelling. In the case of the dual-spun scaffolds, the PU fibre fraction could be controlled and was shown to vary linearly with porosity (P = −0.18FF + 93.5, r² = 0.91), which in turn showed an inverse linear correlation with tensile strength and elastic modulus (r² > 0.96). Calculated compliance and burst pressures of the scaffolds increased with fibre fraction, and compliances matching the human popliteal artery (5-10 %/100 mmHg) and high burst pressures (> 2000 mmHg) could be achieved. Increasing porosity (76 to 82 and 90%) resulted in increased tissue ingrowth, from 33±7 to 77±20 and 98±1% after 28 d. Transmural endothelialization of highly porous foamed grafts is achievable in a circulatory model, and the enhancement of porosity and tissue ingrowth may hold the key to the development of spontaneously endothelializing electrospun grafts.

Keywords: electrospinning, endothelialization, porosity, scaffold, vascular graft

Procedia PDF Downloads 268
105 Disposal Behavior of Extreme Poor People Living in Guatemala at the Base of the Pyramid

Authors: Katharina Raab, Ralf Wagner

Abstract:

With the decrease of poverty, the focus on the solid waste challenge shifts away from affluent, mostly Westernized consumers to the base of the pyramid. The relevance of considering the disposal behavior of impoverished people arises from improved welfare, which leads to an increase in consumption opportunities and, consequently, in waste production. In combination with the world's growing population, the relevance of the topic increases, because solid waste management has global impacts on consumers' welfare. Current annual municipal solid waste generation is estimated at 1.9 billion tonnes, and 30% of it remains uncollected. Of the collected waste, 70% goes to landfilling and dumping, 19% is recycled or recovered, and 11% is sent to energy recovery facilities. The aim is therefore to contribute by adding first insights into poor people's disposal behaviors, including the framing of their rationalities, emotions and cognitions. The study provides novel empirical results obtained from qualitative semi-structured in-depth interviews near Guatemala City. In the study's framework, consumers have to choose from three options when deciding what to do with their obsolete possessions. Keeping the product: the main reason for this is the respondent's emotional attachment to a product. Further, there is a willingness to use the same product under a different scope when it loses its functionality; they recycle their belongings in a customized and sustainable way. Permanently disposing of the product: the study reveals two dominant disposal methods, burning in front of their homes and throwing away in the physical environment. Respondents clearly recognized the disadvantages of burning toxic durables, like electronics. Giving a product away as a gift supports the integration of individuals in their peer networks of family and friends. Temporarily disposing of the product: this was not mentioned; specifically, renting or lending a product to someone else was out of the question. Against this background, the extent to which poor people are aware of the consequences of their disposal decisions, and how they feel about and rationalize their actions, was quite unexpected. Respondents reported that they are worried about future consequences with impacts they cannot anticipate now; they are aware that their behaviors harm their health and the environment. Additionally, they expressed concern about the impact this disposal behavior would have on others' well-being and are therefore sensitive to the waste that surrounds them. Concluding, BoP-framed life and Westernized consumption both fit into a circular economy pattern, but the nature of how to recycle and dispose separates these two societal groups. Both settings have a solid waste management system, but people living in slum-type districts and rural areas of poor countries are less interested in connecting to the system; they are primarily afraid of the costs. Further, it can be said that a consumer's perceived effectiveness is distinct from environmental concerns but contributes to forecasting certain pro-ecological behaviors. Considering the rationales underlying disposal decisions, thoughtfulness is a well-established determinant of disposition behavior. The precipitating events, emotions and decisions associated with the act of disposing of products are important because these decisions can trigger different results for the disposal process.

Keywords: base of the pyramid, disposal behavior, poor consumers, solid waste

Procedia PDF Downloads 143
104 Spin Rate Decaying Law of Projectile with Hemispherical Head in Exterior Trajectory

Authors: Quan Wen, Tianxiao Chang, Shaolu Shi, Yushi Wang, Guangyu Wang

Abstract:

As part of the working environment of the fuze, the spin rate decay law of a projectile in exterior trajectory is of great value in the design of rotation-count fixed-distance fuzes. In addition, it is significant in the field of devices for simulation tests of the fuze exterior ballistic environment, in the flight stability and dispersion accuracy of gun projectiles, and in the opening and scattering design of submunitions and illuminating cartridges. Besides, the self-destroying mechanism of the fuze in small-caliber projectiles often works by utilizing the attenuation of centrifugal force. In the theory of projectile aerodynamics and fuze design, there are many formulas describing the change law of projectile angular velocity in exterior ballistics, such as the Roggla formula, exponential function formulas, and power function formulas. However, these formulas are mostly semi-empirical due to the poor test conditions and insufficient test data at the time they were derived. They can hardly meet the design requirements of modern fuzes because they are not accurate enough and have a narrow range of application. In order to provide more accurate ballistic environment parameters for the design of a hemispherical-head projectile fuze, the projectile's spin rate decay law in exterior trajectory under the effect of air resistance was studied. In the analysis, the projectile shape was simplified as a hemispherical head, a cylindrical part, a rotating band, and an anti-truncated conical tail. The main assumptions are as follows: a) the shape and mass are symmetrical about the longitudinal axis; b) there is a smooth transition between the ball head and the cylindrical part; c) the air flow on the outer surface is treated as a flat-plate flow with the same area as the expanded outer surface of the projectile, and the boundary layer is turbulent; d) the polar damping moment attributed to the wrench hole and rifling marks on the projectile is not considered; e) the groove of the rifling on the rotating band is uniform, smooth and regular. The contributions of the four parts to the aerodynamic moment opposing the projectile's rotation were obtained by aerodynamic theory. The surface friction stress of the projectile, the polar damping moment formed by the head of the projectile, and the surface friction moments formed by the cylindrical part, the rotating band, and the anti-truncated conical tail were obtained by mathematical derivation. After that, the mathematical model of spin rate attenuation was established. Over the whole trajectory with the maximum range angle (38°), the absolute error between the polar damping torque coefficient obtained by simulation and the coefficient calculated by the mathematical model established in this paper is not more than 7%. Therefore, the credibility of the mathematical model was verified. The mathematical model can be described as a first-order nonlinear differential equation, which has no analytical solution. The solution can only be obtained numerically by coupling the model with the projectile mass motion equations of exterior ballistics.
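
A minimal numerical sketch of the kind of integration step described above is given below. It assumes a hypothetical damping law of the form dω/dt = −c·ρ·v(t)^0.8·ω^1.8 together with a simplified point-mass velocity history; the coefficient, exponents and initial conditions are illustrative placeholders, not the friction-moment model derived in the paper, which would be coupled to the full exterior-ballistic equations of motion.

```python
# Sketch: numerical integration of a first-order nonlinear spin-rate decay law.
# The damping law, coefficients and velocity history are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

RHO = 1.225                              # air density, kg/m^3 (assumed constant)
C_DAMP = 2.0e-6                          # hypothetical lumped polar-damping coefficient
V0 = 900.0                               # illustrative muzzle velocity, m/s
OMEGA0 = 2.0e3 * 2 * np.pi               # illustrative initial spin rate, rad/s
K_DRAG = 1.0e-4                          # hypothetical drag constant for the velocity history

def velocity(t):
    """Simplified point-mass velocity history; a real model would instead couple
    the full projectile mass motion equations of exterior ballistics."""
    return V0 / (1.0 + K_DRAG * V0 * t)

def spin_decay(t, omega):
    """First-order nonlinear decay of spin rate driven by skin-friction moments."""
    v = velocity(t)
    return -C_DAMP * RHO * v**0.8 * omega**1.8

sol = solve_ivp(spin_decay, t_span=(0.0, 60.0), y0=[OMEGA0], dense_output=True, rtol=1e-8)
for t in (0, 10, 30, 60):
    print(f"t = {t:4.0f} s   spin rate = {sol.sol(t)[0] / (2 * np.pi):8.1f} rev/s")
```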

Keywords: ammunition engineering, fuze technology, spin rate, numerical simulation

Procedia PDF Downloads 111
103 Preliminary Study Investigating Trunk Muscle Fatigue and Cognitive Function in Event Riders during a Simulated Jumping Test

Authors: Alice Carter, Lucy Dumbell, Lorna Cameron, Victoria Lewis

Abstract:

The Olympic discipline of eventing is the triathlon of equestrian sport, consisting of dressage, cross-country and show jumping. Falls on the cross-country are common and can be serious, even causing death to the rider. Research identifies an increased risk of a fall with an increasing number of obstacles and for jumping efforts later in the course, suggesting fatigue may be a contributing factor. Advice based on anecdotal evidence suggests riders undertake strength and conditioning programs to improve their 'core', thus improving their ability to maintain and control their riding position. There is little empirical evidence to support this advice. Therefore, the aim of this study is to investigate trunk muscle fatigue and cognitive function during a simulated jumping test. Eight adult riders participated in a riding test on a Racewood Event simulator for 10 minutes, over a continuous jumping programme. The sEMG activity of six trunk muscles was measured bilaterally at every minute, and normalised root mean squares (RMS) and median frequencies (MDF) were computed from the EMG power spectra. Visual analogue scales (VAS) measuring fatigue and pain levels, and cognitive function 'tapping' tests, were completed before and after the riding test. Average MDF values for all muscles differed significantly between the sampled minutes (p = 0.017); however, a consistent decrease from Minute 1 to Minute 9 was not found, suggesting the trunk muscles fatigued and then recovered as other muscle groups important in maintaining the riding position during dynamic movement compensated. Differences between the MDF and RMS values of different muscles were highly significant (H=213.01, DF=5, p < 0.001), supporting previous anecdotal evidence that different trunk muscles carry out different roles in posture maintenance during riding. RMS values were not significantly different between the sampled minutes or between riders, suggesting the riding test produced a consistent and repeatable effect on the trunk muscles. MDF values differed significantly between riders (H=50.8, DF=5, p < 0.001), suggesting individuals may experience localised muscular fatigue from the same test differently, and that other parameters of physical fitness should be investigated to provide conclusions. Lumbar muscles were shown to be important in maintaining the position; therefore, physical training programs should focus on these areas. No significant differences were found between pre- and post-riding test VAS pain and fatigue scores or cognitive function test scores, suggesting the riding test was not significantly fatiguing for participants. However, a near-significant correlation was found between riding test time and VAS pain score (p = 0.06), suggesting somatic pain may be a limiting factor to performance. No other correlations were found between riding test time and the VAS pain and fatigue scores; however, a larger sample needs to be tested to improve the statistical analysis. The findings suggest the simulator riding test was not sufficient to provoke fatigue in the riders; however, foundations for future studies have been laid to enable methodologies in realistic eventing settings.
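
For readers unfamiliar with the two sEMG indices used here, the sketch below shows one common way of computing an RMS amplitude and the median frequency of the power spectrum for a one-minute epoch. The sampling rate, band-pass limits and the synthetic signal are illustrative assumptions, not the study's acquisition settings.

```python
# Sketch: RMS amplitude and median frequency (MDF) of an sEMG epoch.
# Sampling rate, filter band and the synthetic signal are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 1000.0                                  # assumed sampling rate, Hz
rng = np.random.default_rng(0)
emg = rng.normal(size=int(60 * FS))          # placeholder for one minute of raw sEMG

# Band-pass 20-450 Hz, a typical (assumed) surface-EMG band.
b, a = butter(4, [20 / (FS / 2), 450 / (FS / 2)], btype="band")
emg_f = filtfilt(b, a, emg)

# RMS amplitude of the epoch.
rms = np.sqrt(np.mean(emg_f ** 2))

# Median frequency from the Welch power spectral density:
# the frequency that splits the spectral power into two equal halves.
freqs, psd = welch(emg_f, fs=FS, nperseg=1024)
cum_power = np.cumsum(psd)
mdf = freqs[np.searchsorted(cum_power, cum_power[-1] / 2)]

print(f"RMS = {rms:.3f} (arbitrary units), MDF = {mdf:.1f} Hz")
```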

Keywords: eventing, fatigue, horse-rider, surface EMG, trunk muscles

Procedia PDF Downloads 168
102 The Impact of Trade on Stock Market Integration of Emerging Markets

Authors: Anna M. Pretorius

Abstract:

The emerging markets category for portfolio investment was introduced in 1986 in an attempt to promote capital market development in less developed countries. Investors traditionally diversified their portfolios by investing in different developed markets. However, high growth opportunities forced investors to consider emerging markets as well. Examples include the rapid growth of the “Asian Tigers” during the 1980s, growth in Latin America during the 1990s and the increased interest in emerging markets during the global financial crisis. As such, portfolio flows to emerging markets have increased substantially. In 2002, 7% of all equity allocations from advanced economies went to emerging markets; this increased to 20% in 2012. The stronger links between advanced and emerging markets led to increased synchronization of asset price movements. This increased level of stock market integration for emerging markets is confirmed by various empirical studies. Against the background of increased interest in emerging market assets and the increasing level of integration of emerging markets, this paper focuses on the determinants of stock market integration of emerging market countries. Various studies have linked the level of financial market integration with specific economic variables. These variables include economic growth, local inflation, trade openness, local investment, budget surplus/deficit, market capitalization, domestic bank credit, the domestic institutional and legal environment and world interest rates. The aim of this study is to empirically investigate to what extent trade-related determinants have an impact on stock market integration. The panel data sample includes data for 16 emerging market countries: Brazil, Chile, China, Colombia, Czech Republic, Hungary, India, Malaysia, Pakistan, Peru, Philippines, Poland, Russian Federation, South Africa, Thailand and Turkey, for the period 1998-2011. The integration variable for each emerging stock market is calculated as the explanatory power of a multi-factor model. These factors are extracted from a large panel of global stock market returns. Trade-related explanatory variables include exports as a percentage of GDP, imports as a percentage of GDP and total trade as a percentage of GDP. Other macroeconomic indicators, such as market capitalisation, the size of the budget deficit and the effectiveness of the regulation of the securities exchange, are included in the regressions as control variables. An initial analysis on a sample of developed stock markets could not identify any significant determinants of stock market integration. Thus the macroeconomic variables identified in the literature are much more significant in explaining stock market integration of emerging markets than of developed markets. The three trade variables are all statistically significant at the 5% level. The market capitalisation variable is also significant, while the regulation variable is only marginally significant. The global financial crisis has highlighted the urgency to better understand the link between the financial and real sectors of the economy. This paper comes to the important finding that, apart from the level of market capitalisation (as a financial indicator), trade (representative of the real economy) is a significant determinant of stock market integration of countries not yet classified as developed economies.
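
The integration measure described above (the explanatory power of a multi-factor model estimated from global returns) can be sketched roughly as follows: extract common factors from a panel of global market returns by principal components, then record the R² of each emerging market's return regression on those factors. The market names, number of factors and synthetic data below are assumptions for illustration only, not the paper's dataset or estimation procedure.

```python
# Sketch: stock market integration measured as the R^2 of a multi-factor model,
# with factors extracted from a panel of global returns by PCA. Data are placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
markets = [f"mkt_{i}" for i in range(30)]                              # placeholder global markets
returns = pd.DataFrame(rng.normal(size=(168, 30)), columns=markets)    # 14 years of monthly returns

N_FACTORS = 3                                                          # assumed number of global factors
factors = PCA(n_components=N_FACTORS).fit_transform(returns - returns.mean())

def integration_score(market_returns, factors):
    """R^2 of a market's returns on the global factors: higher = more integrated."""
    model = LinearRegression().fit(factors, market_returns)
    return model.score(factors, market_returns)

scores = {m: integration_score(returns[m].values, factors) for m in markets[:5]}
print(scores)   # these scores would then serve as the dependent variable in the panel regressions
```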

Keywords: emerging markets, financial market integration, panel data, trade

Procedia PDF Downloads 276
101 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information throughout the building lifecycle. BIM can be used all over a construction project, from the initiation phase to the planning and execution phases to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model. Creating a compatible BIM for existing buildings is very challenging. It requires special equipment for data capturing and effort to convert these data into a BIM model. The main difficulties for such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic. So, integrating the existing terrain that surrounds buildings into the digital model is essential to be able to run several simulations, such as flood simulation, energy simulation, etc. Making a replica of the physical model and updating its information in real time to obtain its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model of the site and the existing buildings, based on the case study of the “Ecole Spéciale des Travaux Publiques (ESTP Paris)” school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and heights between 50 and 68 meters. In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points are computed in the RGF93 (Réseau Géodésique Français) – Lambert 93 French system, with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is produced. The DTM is also modeled. The process of altimetric determination is complex and requires effort in order to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and ReViT (RVT) will be generated. Checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
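
As a small illustration of the DTM step described above, the sketch below grids scattered, georeferenced ground points (such as those produced by the total-station or laser-scanning surveys) into a regular height raster. The point cloud, grid resolution and coordinate values are placeholders, not the campus data or the authors' workflow.

```python
# Sketch: building a simple Digital Terrain Model (regular height grid) from
# scattered ground points. Coordinates, heights and resolution are illustrative only.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
# Placeholder ground points: easting, northing (m, e.g. in a Lambert-93-like frame) and height.
e = rng.uniform(0, 500, 2000)
n = rng.uniform(0, 400, 2000)
h = 50 + 0.02 * e + 0.01 * n + rng.normal(0, 0.05, 2000)   # gently sloping hypothetical site

# Regular 1 m grid covering the site.
grid_e, grid_n = np.meshgrid(np.arange(0, 500, 1.0), np.arange(0, 400, 1.0))

# Linear interpolation of heights onto the grid; cells outside the convex hull stay NaN.
dtm = griddata(points=np.column_stack([e, n]), values=h,
               xi=(grid_e, grid_n), method="linear")

print("DTM shape:", dtm.shape, "mean height:", round(float(np.nanmean(dtm)), 2), "m")
```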

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 77
100 Magnesium Nanoparticles for Photothermal Therapy

Authors: E. Locatelli, I. Monaco, R. C. Martin, Y. Li, R. Pini, M. Chiariello, M. Comes Franchini

Abstract:

Despite the many advantages of the application of nanomaterials in the field of nanomedicine, increasing concerns have been expressed about their potential adverse effects on human health. There is urgency for novel green strategies toward novel materials with enhanced biocompatibility using safe reagents. Photothermal ablation therapy, which exploits a localized heat increase of a few degrees to kill cancer cells, has appeared recently as a non-invasive and highly efficient therapy against various cancer types; however, new agents able to generate hyperthermia when irradiated are needed, and they must have precise biocompatibility in order to avoid damage to healthy tissues and prevent toxicity. Recently, there has been increasing interest in magnesium as a biomaterial: it is the fourth most abundant cation in the human body, and it is essential for human metabolism. However, magnesium nanoparticles (Mg NPs) have had limited diffusion due to the high reduction potential of magnesium cations, which makes NP synthesis challenging. Herein, we report the synthesis of Mg NPs and their surface functionalization to obtain a stable and biocompatible nanomaterial suitable for photothermal ablation therapy against cancer. We synthesized the Mg crystals by reducing MgCl2 with metallic lithium and exploiting naphthalene as an electron carrier: the lithium–naphthalene complex acts as the actual reducing agent. Firstly, the nanocrystal particles were coated with the ligand 12-ethoxy ester dodecanehydroxamic acid and then entrapped into water-dispersible polymeric micelles (PMs) made of the FDA-approved PLGA-b-PEG-COOH copolymer using the oil-in-water emulsion technique. Later, we developed a more straightforward methodology by introducing chitosan, a highly biocompatible natural product, at the beginning of the process, simultaneously with the lithium–naphthalene complex, thus obtaining a one-pot procedure for the formation and surface modification of Mg NPs. The obtained Mg NPs were purified and fully characterized, showing diameters in the range of 50-300 nm. Notably, when coated with chitosan the particles remained stable as a dry powder for more than 10 months. We proved the possibility of generating a temperature rise of a few to several degrees once the Mg NPs were illuminated using an 810 nm diode laser operating in continuous wave mode: the temperature rise was significant (0-15 °C) and concentration dependent. We then investigated the potential cytotoxicity of the Mg NPs: we used HN13 epithelial cells, derived from a head and neck squamous cell carcinoma, and the Hepa1-6 cell line, derived from hepatocellular carcinoma, and very low toxicity was observed for both nanosystems. Finally, in vivo photothermal therapy was performed on xenograft Hepa1-6 tumor-bearing mice: the animals were treated with chitosan-coated Mg NPs and showed no sign of suffering after the injection. After 12 hours the tumor was exposed to near-infrared laser light. The results clearly showed extensive damage to the tumor tissue after only 2 minutes of laser irradiation at 3 W cm-1, while no damage was reported when the tumor was treated with the laser and saline alone in the control group. Despite the lower photothermal efficiency of Mg with respect to Au NPs, we consider Mg NPs a promising, safe and green candidate for future clinical translation.

Keywords: chitosan, magnesium nanoparticles, nanomedicine, photothermal therapy

Procedia PDF Downloads 245
99 Evaluation of Antibiotic Resistance and Extended-Spectrum β-Lactamases Production Rates of Gram Negative Rods in a University Research and Practice Hospital, 2012-2015

Authors: Recep Kesli, Cengiz Demir, Onur Turkyilmaz, Hayriye Tokay

Abstract:

Objective: Gram-negative rods are a large group of bacteria that includes many families, genera, and species. Most clinical isolates belong to the family Enterobacteriaceae. Resistance due to the production of extended-spectrum β-lactamases (ESBLs) is a difficulty in the handling of Enterobacteriaceae infections, but other mechanisms of resistance are also emerging, leading to multidrug resistance and threatening to create panresistant species. We aimed in this study to evaluate the resistance rates of Gram-negative rods isolated from clinical specimens in the Microbiology Laboratory, Afyon Kocatepe University, ANS Research and Practice Hospital, between October 2012 and September 2015. Methods: The Gram-negative rod strains were identified by conventional methods and the VITEK 2 automated identification system (bioMérieux, Marcy l'Étoile, France). Antibiotic resistance tests were performed by both the Kirby-Bauer disk-diffusion method and automated Antimicrobial Susceptibility Testing (AST, bioMérieux, Marcy l'Étoile, France). Disk diffusion results were evaluated according to the standards of the Clinical and Laboratory Standards Institute (CLSI). Results: Of the 1,701 Enterobacteriaceae strains isolated, 1,434 (84.3%) were Klebsiella pneumoniae, 171 (10%) were Enterobacter spp., and 96 (5.6%) were Proteus spp.; of the 639 nonfermenting Gram-negative strains, 477 (74.6%) were identified as Pseudomonas aeruginosa, 135 (21.1%) as Acinetobacter baumannii and 27 (4.3%) as Stenotrophomonas maltophilia. The ESBL positivity rate of the whole Enterobacteriaceae group was 30.4%. Antibiotic resistance rates for Klebsiella pneumoniae were as follows: amikacin 30.4%, gentamicin 40.1%, ampicillin-sulbactam 64.5%, cefepime 56.7%, cefoxitin 35.3%, ceftazidime 66.8%, ciprofloxacin 65.2%, ertapenem 22.8%, imipenem 20.5%, meropenem 20.5%, and trimethoprim-sulfamethoxazole 50.1%; for 114 Enterobacter spp. isolates they were: amikacin 26.3%, gentamicin 31.5%, cefepime 26.3%, ceftazidime 61.4%, ciprofloxacin 8.7%, ertapenem 8.7%, imipenem 12.2%, meropenem 12.2%, and trimethoprim-sulfamethoxazole 19.2%. Resistance rates for Proteus spp. were: meropenem 24.3%, imipenem 26.2%, amikacin 20.2%, cefepime 10.5%, ciprofloxacin and levofloxacin 33.3%, ceftazidime 31.6%, ceftriaxone 20%, gentamicin 15.2%, amoxicillin-clavulanate 26.6%, and trimethoprim-sulfamethoxazole 26.2%. Resistance rates for P. aeruginosa were: amikacin 32%, gentamicin 42%, imipenem 43%, meropenem 43%, ciprofloxacin 50%, levofloxacin 52%, cefepime 38%, ceftazidime 63%, and piperacillin/tazobactam 85%; for Acinetobacter baumannii: amikacin 53.3%, gentamicin 56.6%, imipenem 83%, meropenem 86%, ciprofloxacin 100%, ceftazidime 100%, piperacillin/tazobactam 85%, and colistin 0%; and for S. maltophilia: levofloxacin 66.6% and trimethoprim/sulfamethoxazole 0%. Conclusions: This study showed that resistance in Gram-negative rods is a serious clinical problem in our hospital and suggests the need to perform typing of the isolated bacteria together with susceptibility testing regularly as part of routine laboratory procedures. This practice truly guides empirical antibiotic treatment choices, given that each hospital shows a different resistance profile.

Keywords: antibiotic resistance, gram negative rods, ESBL, VITEK 2

Procedia PDF Downloads 304
98 What We Know About Effective Learning for Pupils with SEN: Results of 2 Systematic Reviews and of a Global Classroom

Authors: Claudia Mertens, Amanda Shufflebarger

Abstract:

Step one: what we know about effective learning for pupils with SEN (results of two systematic reviews). Before establishing principles and practices for the teaching and learning of pupils with SEN, we need a good overview of the results of empirical studies conducted in the respective field. Therefore, two systematic reviews on the use of digital tools in inclusive and non-inclusive school settings were conducted, taking into consideration studies published in German: one systematic review included studies that had undergone a peer review process, and the second included studies without peer review. The results (a collaboration of two German universities) will be presented during the conference. Step two: students' results of a research lab on "inclusive media education". On this basis, German students worked on "inclusive media education" in small research projects (duration: one year). They were "education majors" enrolled in a course on inclusive media education. They conducted research projects on topics ranging from smartboards in inclusive settings, digital media in gifted math education, and TikTok in German as a Foreign Language education to many more. As part of their course, the German students created an academic conference poster. In the conference, the results of these research projects/papers are put into the context of the results of the systematic reviews. Step three: global classroom. The German students' posters were critically discussed in a global classroom in cooperation with Indiana University East (USA) and Hamburg University (Germany) in the winter/spring term of 2022/2023. 15 students in Germany collaborated with 15 students at Indiana University East. The IU East student participants were enrolled in "Writing in the Arts and Sciences," which is specifically designed for pre-service teachers. The joint work began at the beginning of the Spring 2023 semester in January 2023 and continued until the end of the Uni Hamburg semester in February 2023. Before January, Uni Hamburg students had been working on a research project individually or in pairs. Didactic approach: Both groups of students posted a brief video or audio introduction to a shared Canvas discussion page. In the joint long synchronous session, the students discussed key content terms such as inclusion, inclusive, diversity, etc., with the help of prompt cards, and they compared how they understood or applied these terms differently. Uni Hamburg students presented drafts of academic posters. IU East students gave them specific feedback. After that, the IU East students wrote brief reflections summarizing what they had learned from the posters. After the class, small groups were expected to create a voice recording reflecting on their experiences. In their recordings, they examined critical incidents, highlighting what they learned from these incidents. Major results of the student research and of the global classroom collaboration can be highlighted during the conference. Results: The aggregated results of the two systematic reviews and of the research lab/global classroom can now be a sound basis for 1) improving accessibility for students with SEN and 2) adjusting teaching materials and concepts to the needs of students with SEN, in order to create successful learning.

Keywords: digitalization, inclusion, inclusive media education, global classroom, systematic review

Procedia PDF Downloads 55
97 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favorable conditions for students' comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that positive comprehension becomes possible when new information becomes part of the student's subjective experience and when linkages between the attributes of notions and the various ways of presenting them can be established. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically the challenge of students' comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of the student's subjective experience (emotional and value, contextual, procedural, communicative) during the educational process; (3) establishing links between different ways of presenting mathematical information; (4) identifying and leveraging the relationships between real, perceptual and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. Identifying how the proposed methods and technology influence understanding of material used in teaching mathematics was the research's primary goal. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, 'Derivative' and 'Trigonometric functions', was evaluated. Control group participants were taught using traditional methods. Students in the experimental group were taught using the holistic method: under the teacher's guidance, they carried out problems designed to establish linkages between a notion's characteristics and to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The use of the technology that forms inter-subject notions based on linkages between perceptual, real, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analyzed by presenting students in each of the groups with a final test on each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate the test results. The Pearson criterion was used to reveal the statistical significance of the results (pass/fail on the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the study group showed better comprehension of mathematical information than those in the control group. Also, it was revealed (using Student's t-test) that the students of the experimental group reliably (p = 0.0001) solved more problems than those in the control group. The results obtained allow us to conclude that increased comprehension and assimilation of the study material took place as a result of applying the implemented methods and techniques.
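
A minimal sketch of the two statistical comparisons described above (a Pearson chi-squared test on pass/fail counts for the modeling test and a t-test on the number of problems solved) is given below. All counts and scores are invented placeholders, not the study's data; only the group sizes (142 and 114) follow the abstract.

```python
# Sketch of the two reported comparisons. All numbers are invented placeholders
# apart from the group sizes (experimental n = 142, control n = 114).
import numpy as np
from scipy import stats

# Pass/fail counts on the situational-modeling test: rows = groups, columns = pass/fail.
table = np.array([[110, 32],    # experimental group (hypothetical counts)
                  [ 60, 54]])   # control group (hypothetical counts)
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(f"Pearson chi-squared = {chi2:.2f}, df = {dof}, p = {p_chi2:.4f}")

# Number of problems solved per student (hypothetical scores), compared with Student's t-test.
rng = np.random.default_rng(3)
experimental = rng.normal(7.5, 2.0, 142)
control = rng.normal(6.3, 2.0, 114)
t, p_t = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p_t:.4f}")
```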

Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions

Procedia PDF Downloads 152
96 Gandhi and the Judicial Discourse on Moral Rights

Authors: Sunayana Basu Mallik, Shishira Prakash

Abstract:

The inclusion of the rights of the author (moral and personal rights) resonates with the century-long battle over the rights of authors, composers, and performers across developed and developing countries (whether following civil law or common law systems). But the juxtaposition of the author's special, moral, and personal rights within the legislative framework of copyright statutes (the Indian Copyright Act, 1957, and applicable statutes) underscores the foundational role of the right, which goes to the root of the constitutional structure of India and the philosophies of political and literary leaders like Mahatma Gandhi and Gurudeb Rabindranath Tagore. In the pre-independence era, when the concept of moral rights was unknown to both England's and India's statutory laws, the strategic deployment method of Gandhi, his ideologies and his thoughts scripted the concept of moral rights for authors and composers. The preservation of the Rabindric style (characteristic of Tagore's vocal renditions) by Vishwabharati University (successor in interest for Tagore's literary and musical compositions) prior to the Copyright Amendment of 1999, which recognized the author's special rights in line with Article 6bis of the Berne Convention, reinforces the fact that the right existed intrinsically prior to the legislative amendment. The paper would, in addition to the academic probe, carry out an empirical enquiry into the institutions' (Navjivan Trust's and Vishwa Bharati University's) reasoning on the same. The judicial discourse and transforming constitutional ideals in India from the 1950s to date point to moral rights as an essential legal right, reasoned by Indian courts on the basis of the underlying philosophies in culture, customs and religion, wherein composers and literary figures played key roles in enlightening and encouraging the members of society through their literary, musical and artistic work during the pre-independence renaissance of India. The discourses have been influenced by the philosophies reflected in the preamble of the Indian constitution, 'socialist, secular, democratic republic', and by the laws of other civil law countries. Lastly, the paper would analyze the adjudication process and witness involvement in ascertaining violations of moral rights, and further summarize the indigenous and country-specific economic thoughts that often chisel decisions on the moral rights of authors, composers, and performers, which sometimes intersect with the author's right of privacy and protection against defamation. The exclusivity contracts or other arrangements between authors, composers and publishing companies not only have an erosive effect on each thread of moral rights but also irreparably dent the factors that promote creativity. The paper would also review these arrangements in view of the principles of unjust enrichment, unfair trade practices, anti-competitive behavior and breach of Section 27 (Restraint of Trade) of the Indian Contract Act, 1872. The paper will thus lay down the three pillars on which the author's rights in India should rest, namely: (a) political and judicial discourse evolving principles supporting the moral rights of authors; (b) amendment and insertion of Section 57 of the Copyright Act, 1957; (c) an overall constitutional framework supporting the author's rights.

Keywords: copyright, moral rights, performer’s rights, personal rights

Procedia PDF Downloads 171
95 Synaesthetic Metaphors in Persian: a Cognitive Corpus Based and Comparative Perspective

Authors: A. Afrashi

Abstract:

Introduction: Synaesthesia is a term denoting the perception, or the description of the perception, of one sense modality in terms of another. In literature, synaesthesia refers to a technique adopted by writers to present ideas, characters or places in such a manner that they appeal to more than one sense, like hearing, seeing, smell, etc., at a given time. In everyday language, too, we find many examples of synaesthesia. We commonly hear phrases like 'loud colors', 'frozen silence', 'warm colors', 'bitter cold', etc. Empirical cognitive studies have proved that synaesthetic representations, both in literature and in everyday language, are constrained, i.e., they do not map randomly among sensory domains. From the beginning of the 20th century, synaesthesia has been a research domain both in literature and in structural linguistics. However, the exploration of the cognitive mechanisms motivating synaesthesia has made it an important topic in 21st-century cognitive linguistics and literary studies. Synaesthetic metaphors are linguistic representations of those mental mechanisms, the study of which reveals invaluable facts about perception, cognition and conceptualization. The main objectives of the present research are to answer the following questions: What types of sense transfers are attested in Persian synaesthetic metaphors? How are these types of sense transfers explained cognitively? What are the results of a cross-linguistic comparative study of synaesthetic metaphors based on the existing observations? Methodology: The present research employs a cognitive, corpus-based method, and the theoretical framework adopted to analyze linguistic synaesthesia is the contemporary theory of metaphor, in which conceptual metaphor is the result of systematic mappings across cognitive domains. The Persian Language Database (PLDB) at the Institute for Humanities and Cultural Studies, which consists mainly of Persian modern prose, is searched for synaesthetic metaphors. Then, for each metaphorical structure, the source and target domains are determined, sense transfers are identified, and the types of synaesthetic metaphors are recognized. Findings: Persian synaesthetic metaphors conform to the hierarchical distribution principle, according to which transfers tend to go from touch to taste to smell to sound and to sight, not vice versa. In other words, mapping from more accessible or basic concepts onto less accessible or less basic ones seems more natural. Furthermore, the most frequent target domain in Persian synaesthetic metaphors is sound. Certain characteristics of Persian synaesthetic metaphors are comparable with existing related research carried out on English, French, Hungarian and Chinese synaesthetic metaphors. Conclusion: Cognitive, corpus-based approaches to linguistic synaesthesia are applicable to stylistics and literary criticism, and this recent research domain is an efficient approach to studying cross-linguistic variation in order to find out which of the five senses is dominant cross-linguistically and cross-culturally as the target domain in metaphorical mappings.

Keywords: cognitive semantics, conceptual metaphor, synaesthesia, corpus based approach

Procedia PDF Downloads 539
94 Becoming Vegan: The Theory of Planned Behavior and the Moderating Effect of Gender

Authors: Estela Díaz

Abstract:

This article aims to make three contributions. First, it builds on the ethical decision-making literature by exploring factors that influence the intention to adopt veganism. Second, it studies the superiority of extended models of the Theory of Planned Behavior (TPB) for understanding the process involved in forming the intention to adopt veganism. Third, it analyzes the moderating effect of gender on the TPB, given that attitudes and behavior towards animals are gender-sensitive. No study, to our knowledge, has examined these questions. Veganism is not a diet but a political and moral stand that excludes, for moral reasons, the use of animals. Although there is growing interest in studying veganism, it continues to be overlooked in empirical research, especially within the domain of social psychology. The TPB has been widely used to study a broad range of human behaviors, including moral issues. Nonetheless, the TPB has rarely been applied to examine ethical decisions about animals and, even less, to veganism. Hence, the validity of the TPB in predicting the intention to adopt veganism remains unanswered. A total of 476 non-vegan Spanish university students (55.6% female; mean age 23.26 years, SD = 6.1) responded to an online or pencil-and-paper self-report questionnaire based on previous studies. The extended TPB models incorporated two background factors: 'general attitudes towards humanlike attributes ascribed to animals' (AHA) (capacity for reason/emotions/suffering, moral consideration, and affect towards animals); and 'general attitudes towards 11 uses of animals' (AUA). SPSS 22 and SmartPLS 3.0 were used for the statistical analyses. This study constructed a second-order reflective-formative model and took the multi-group analysis (MGA) approach to study gender effects. Six models of the TPB (the standard model and five competing ones) were tested. No a priori hypotheses were formulated. The results gave partial support to the TPB. Attitudes (ATTV) (β = .207, p < .001), subjective norms (SNV) (β = .323, p < .001), and perceived behavioral control (PCB) (β = .149, p < .001) had significant direct effects on intentions (INTV). This model accounted for 27.9% of the variance in intention (R²Adj = .275) and had a small predictive relevance (Q² = .261). However, findings from this study reveal that, contrary to what the TPB generally proposes, the effect of the background factors on intentions was not fully mediated by the proximal constructs of intentions. For instance, in the final model (Model #6), both factors had significant multiple indirect effects on INTV (β = .074, 95% CI = .030, .126 [AHA:INTV]; β = .101, 95% CI = .055, .155 [AUA:INTV]) and significant direct effects on INTV (β = .175, p < .001 [AHA:INTV]; β = .100, p = .003 [AUA:INTV]). Furthermore, the addition of direct paths from background factors to intentions improved the explained variance in intention (R² = .324; R²Adj = .317) and the predictive relevance (Q² = .300) over the base model. This supports the existing literature on the superiority of enhanced TPB models in predicting ethical issues, which suggests that moral behavior may add complexity to decision-making. Regarding the gender effect, the MGA showed that gender only moderated the influence of AHA on ATTV (e.g., βWomen − βMen = .296, p < .001 [Model #6]). However, other observed gender differences (e.g., the explained variance of the model for intentions was always higher for men than for women; for instance, R²Women = .298, R²Men = .394 [Model #6]) deserve further consideration, especially for developing more effective communication strategies.

Keywords: veganism, Theory of Planned Behavior, background factors, gender moderation

Procedia PDF Downloads 318
93 Analysing the Influence of COVID-19 on Major Agricultural Commodity Prices in South Africa

Authors: D. Mokatsanyane, J. Jansen Van Rensburg

Abstract:

This paper analyses the influence and impact of COVID-19 on major agricultural commodity prices in South Africa. According to a World Bank report, the agricultural sector in South Africa has been unable to reduce the domestic food crisis that has been developing over the past years, hence the increased rate of poverty, which stood at 55.5 percent as of April 2020. Despite the significance of this sector, empirical findings conclude that the agricultural sector now accounts for 1.88 percent of South Africa's gross domestic product (GDP), suggesting that the agricultural sector's contribution to the economy has diminished. Despite its low contribution to GDP, this primary sector continues to play an essential role in the economy. Over the past years, multiple factors have contributed to soaring commodity prices, namely climate shocks, biofuel demand, demand and supply shocks, the exchange rate, speculation in commodity derivative markets, trade restrictions, and economic growth. The COVID-19 outbreak has disturbed the supply of and demand for staple crops. To address the disruption, the government exempted the agricultural sector from closure and restrictions on movement. The spread of COVID-19 has caused turmoil all around the world, but mostly in developing countries. According to Statistics South Africa, South Africa's economy contracted by seven percent in 2020. Consequently, this has arguably made the agricultural sector the most affected sector, since slumped economic growth negatively impacts food security, trade, farm livelihoods, and greenhouse gas emissions. South Africa is sensitive to the performance of global food chains. Restrictions on trade, reinforced sanitary control systems, and border controls have influenced food availability and prices internationally. The main objective of this study is to evaluate the behavior of agricultural commodity prices pre- and during COVID-19 in order to determine the impact of volatility drivers on these crops. Historical secondary data on spot prices for the top five major commodities, namely white maize, yellow maize, wheat, soybeans, and sunflower seeds, are analysed from 1 January 2017 to 1 September 2021. The timeframe was chosen to capture price fluctuations between pre-COVID-19 (1 January 2017 to 23 March 2020) and during-COVID-19 (24 March 2020 to 1 September 2021). The Generalised Autoregressive Conditional Heteroscedasticity (GARCH) statistical model is used to measure the influence of price fluctuations. The results reveal that the commodity market experienced volatility at different points. Extremely high volatility is observed during the first quarter of 2020; during this period, there was high uncertainty and grain prices were very volatile. Despite the influence of COVID-19 on agricultural prices, demand for these commodities remained decent. The analysis indicates that prices were low and less volatile during the pandemic. The prices and returns of these commodities were low during COVID-19 because of the government's actions in response to the virus's spread, which collapsed market demand for food commodities.
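
A minimal sketch of this kind of volatility modelling step is shown below: daily percentage log returns are computed from a price series and a GARCH(1,1) model is fitted separately to the pre-COVID-19 and during-COVID-19 windows defined above. The price series here is a random placeholder standing in for a SAFEX spot-price series, and the model specification is a generic illustration, not the paper's estimated models.

```python
# Sketch: GARCH(1,1) volatility estimation on daily log returns, split into the
# pre-COVID-19 and during-COVID-19 windows used above. Prices are synthetic placeholders.
import numpy as np
import pandas as pd
from arch import arch_model

dates = pd.bdate_range("2017-01-01", "2021-09-01")
rng = np.random.default_rng(5)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.015, len(dates)))),
                   index=dates, name="white_maize")      # placeholder for one commodity

returns = 100 * np.log(prices).diff().dropna()           # daily percentage log returns

periods = {
    "pre-COVID-19":    returns.loc["2017-01-01":"2020-03-23"],
    "during-COVID-19": returns.loc["2020-03-24":"2021-09-01"],
}

for label, window in periods.items():
    model = arch_model(window, mean="Constant", vol="GARCH", p=1, q=1)
    res = model.fit(disp="off")
    # omega, alpha[1] and beta[1] summarise the conditional-variance (volatility) dynamics.
    print(label, res.params[["omega", "alpha[1]", "beta[1]"]].round(4).to_dict())
```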

Keywords: commodities market, commodity prices, generalised autoregressive conditional heteroscedasticity (GARCH), price volatility, SAFEX

Procedia PDF Downloads 142
92 Measuring Digital Literacy in the Chilean Workforce

Authors: Carolina Busco, Daniela Osses

Abstract:

The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. There are no methodological instruments available in Chile to measure the workforce's digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To achieve the research objective of developing a digital literacy model of indicators and a research instrument for this purpose, along with an exploratory analysis of the data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied with six experts, who provided observations on the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a Likert scale of four values ranked according to the level of agreement; iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 items with the Likert scale. The KMO measure showed a value of 0.626, indicating a medium level of correlation, while Bartlett's test yielded a significance value of less than 0.05; Cronbach's alpha was 0.618. Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups using heterogeneous selection criteria related to the most relevant variables identified in the statistical analysis: upper-class school students, middle-class university students, Ph.D. professors, low-income working women, elderly individuals, and a group of rural workers. The digital divide and its social and economic correlates are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately puts the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills is radically different when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
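
The exploratory factor analysis pipeline described above (KMO, Bartlett's sphericity test, Cronbach's alpha, then extraction of four factors from the 12 Likert items) can be sketched as follows. The response DataFrame is a random placeholder rather than the survey data, and the factor_analyzer package is assumed to be available.

```python
# Sketch of the exploratory factor analysis pipeline: KMO, Bartlett's sphericity test,
# Cronbach's alpha and a 4-factor extraction on 12 Likert-scale items.
# The response DataFrame is a random placeholder, not the survey data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

rng = np.random.default_rng(4)
items = pd.DataFrame(rng.integers(1, 5, size=(200, 12)),        # 200 workers, 12 items, 4-point Likert
                     columns=[f"item_{i + 1}" for i in range(12)])

kmo_per_item, kmo_total = calculate_kmo(items)
chi2, p_bartlett = calculate_bartlett_sphericity(items)

def cronbach_alpha(df):
    """Classic Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(axis=0, ddof=1).sum() / df.sum(axis=1).var(ddof=1))

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(items)
explained = fa.get_factor_variance()[2][-1]     # cumulative proportion of variance explained

print(f"KMO = {kmo_total:.3f}, Bartlett p = {p_bartlett:.4f}, alpha = {cronbach_alpha(items):.3f}")
print(f"Cumulative variance explained by 4 factors = {explained:.2%}")
```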

Keywords: digital literacy, digital society, workforce digitalization, digital skills

Procedia PDF Downloads 50
91 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand

Authors: Wen Liu, Errol Haarhoff, Lee Beattie

Abstract:

In 2010, New Zealand’s central government re-organised the local government arrangements in Auckland by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years. This is expressed in the first ever spatial plan in the region – the Auckland Plan (2012). The Auckland Plan supports a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention of Auckland becoming a globally competitive city and achieving ‘the most liveable city in the world’. Turning that vision into reality is operationalized through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out: intensification. The ‘gap’ between strategic planning and what has been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound influence, yet has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up the strategies to deliver intensification into diversified arenas. Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; delivering a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery depends on the Unitary Plan. However, questions have been asked as to whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher-density development. This article explores the process of plan development, plan making and implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and consider whether this will facilitate decision making processes to realize the anticipated intensive urban development.

Keywords: urban intensification, sustainable development, plan making, governance and implementation

Procedia PDF Downloads 527
90 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz-resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC-Industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the Multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a .25 m x .25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to grasshopper for further implementation in an interdisciplinary study.
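As a minimal illustration of the quantity being tuned when the resonator’s bottleneck and cavity dimensions are parameterized, the lumped-element approximation below estimates the resonance frequency of a single Helmholtz resonator. The dimensions, the speed of sound, and the ~1.7·r end-correction are assumptions for illustration; this is not the coupled COMSOL model or the optimization loop described in the abstract.

```python
# Illustrative sketch only: a lumped-element estimate of the resonance
# frequency of a Helmholtz resonator, the quantity being tuned when the
# bottleneck and cavity dimensions are parameterized for optimization.
# Placeholder dimensions and the ~1.7*r end correction are assumptions,
# not values from the paper's COMSOL/impedance-tube workflow.
import math

def helmholtz_resonance(neck_radius_m: float, neck_length_m: float,
                        cavity_volume_m3: float, c: float = 343.0) -> float:
    """Return the approximate resonance frequency in Hz."""
    neck_area = math.pi * neck_radius_m ** 2
    effective_length = neck_length_m + 1.7 * neck_radius_m  # end correction
    return (c / (2.0 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume_m3 * effective_length))

# Simple parameter sweep over the bottleneck radius to see how f0 shifts,
# the kind of relationship a numerical optimizer would exploit.
for r_mm in (5, 10, 15, 20):
    f0 = helmholtz_resonance(r_mm / 1000.0, 0.05, 0.002)
    print(f"neck radius {r_mm:2d} mm -> f0 = {f0:6.1f} Hz")
```

In the workflow described above, this role is played by the parametric Grasshopper3D model and the COMSOL pressure-field simulation, which additionally capture the bandwidth and coupling effects that the lumped formula cannot.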

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 135
89 Escaping Domestic Violence in Time of Conflict: The Ways Female Refugees Decide to Flee

Authors: Zofia Wlodarczyk

Abstract:

I study the experiences of domestic violence survivors who flee their countries of origin in times of political conflict using insight and evidence from forty-five biographical interviews with female Chechen refugees and twelve refugee resettlement professionals in Poland. Both refugees and women are often described as having less agency—that is, they lack the power to decide to migrate – refugees less than economic migrants and women less than men. In this paper, I focus on how female refugees who have been victims of domestic violence make decisions about leaving their countries of origin during times of political conflict. I use several existing migration theories to trace how the migration experience of these women is shaped by dynamics at different levels of society: the macro level, the meso level and the micro level. At the macro level of analysis, I find that political conflict can be both a source of and an escape from domestic violence. Ongoing conflict can strengthen the patriarchal cultural norms, increase violence and constrain women’s choices when it comes to marriage. However, political conflict can also destabilize families and make pathways for women to escape. At the meso level I demonstrate that other political migrants and institutions that emerge due to politically triggered migration can guide those fleeing domestic violence. Finally, at the micro level, I show that family dynamics often force domestic abuse survivors to make their decision to escape alone or with the support of only the most trusted female relatives. Taken together, my analyses show that we cannot look solely at one level of society when describing decision-making processes of women fleeing domestic violence. Conflict-related micro, meso and macro forces interact with and influence each other: on the one hand, strengthening an abusive trap, and on the other hand, opening a door to escape. This study builds upon several theoretical and empirical debates. First, it expands theories of migration by incorporating both refugee and gender perspectives. Few social scientists have used the migration theory framework to discuss the unique circumstances of refugee flows. Those who have mainly focus on “political” migrants, a designation that frequently fails to account for gender, does not incorporate individuals fleeing gender-based violence, including domestic-violence victims. The study also enriches migration scholarship, typically focused on the US and Western-European context, with research from Eastern Europe and Caucasus. Moreover, it contributes to the literature on the changing roles of gender in the context of migration. I argue that understanding how gender roles and hierarchies influence the pre-migration stage of female refugees is crucial, as it may have implications for policy-making efforts in host countries that recognize the asylum claims of those fleeing domestic violence. This study also engages in debates about asylum and refugee law. Domestic violence is normatively and often legally considered an individual-level problem whereas political persecution is recognized as a structural or societal level issue. My study challenges these notions by showing how the migration triggered by domestic violence is closely intertwined with politically motivated refuge.

Keywords: agency, domestic violence, female refugees, political refuge, social networks

Procedia PDF Downloads 144
88 Owning (up to) the 'Art of the Insane': Re-Claiming Personhood through Copyright Law

Authors: Mathilde Pavis

Abstract:

From Schumann to Van Gogh, Frida Kahlo, and Ray Charles, the stories narrating the careers of artists with physical or mental disabilities are becoming increasingly popular. From the emergence of ‘pathography’ at the end of 18th century to cinematographic portrayals, the work and lives of differently-abled creative individuals continue to fascinate readers, spectators and researchers. The achievements of those artists form the tip of the iceberg composed of complex politico-cultural movements which continue to advocate for wider recognition of disabled artists’ contribution to western culture. This paper envisages copyright law as a potential tool to such end. It investigates the array of rights available to artists with intellectual disabilities to assert their position as authors of their artwork in the twenty-first-century looking at international and national copyright laws (UK and US). Put simply, this paper questions whether an artist’s intellectual disability could be a barrier to assert their intellectual property rights over their creation. From a legal perspective, basic principles of non-discrimination would contradict the representation of artists’ disability as an obstacle to authorship as granted by intellectual property laws. Yet empirical studies reveal that artists with intellectual disabilities are often denied the opportunity to exercise their intellectual property rights or any form of agency over their work. In practice, it appears that, unlike other non-disabled artists, the prospect for differently-abled creators to make use of their right is contingent to the context in which the creative process takes place. Often will the management of such rights rest with the institution, art therapist or mediator involved in the artists’ work as the latter will have necessitated greater support than their non-disabled peers for a variety of reasons, either medical or practical. Moreover, the financial setbacks suffered by medical institutions and private therapy practices have renewed administrators’ and physicians’ interest in monetising the artworks produced under their supervision. Adding to those economic incentives, the rise of criminal and civil litigation in psychiatric cases has also encouraged the retention of patients’ work by therapists who feel compelled to keep comprehensive medical records to shield themselves from liability in the event of a lawsuit. Unspoken transactions, contracts, implied agreements and consent forms have thus progressively made their way into the relationship between those artists and their therapists or assistants, disregarding any notions of copyright. The question of artists’ authorship finds itself caught in an unusually multi-faceted web of issues formed by tightening purse strings, ethical concerns and the fear of civil or criminal liability. Whilst those issues are playing out behind closed doors, the popularity of what was once called the ‘Art of the Insane’ continues to grow and open new commercial avenues. This socio-economic context exacerbates the need to devise a legal framework able to help practitioners, artists and their advocates navigate through those issues in such a way that neither this minority nor our cultural heritage suffers from the fragmentation of the legal protection available to them.

Keywords: authorship, copyright law, intellectual disabilities, art therapy and mediation

Procedia PDF Downloads 124
87 The Potential of Rhizospheric Bacteria for Mycotoxigenic Fungi Suppression

Authors: Vanja Vlajkov, Ivana Pajčin, Mila Grahovac, Marta Loc, Dragana Budakov, Jovana Grahovac

Abstract:

Rhizosphere soil refers to the dynamic environment of plant roots, characterized by the high biological activity of its inhabitants. Rhizospheric bacteria are recognized as effective biocontrol agents and considered central to alternative strategies for securing ecological plant disease management. Suppressing fungal pathogens is an urgent task, not only because of the direct economic losses caused by infection but also due to their ability to produce mycotoxins with harmful effects on human health. Aspergillus and Fusarium species are well-known producers of toxigenic metabolites with a high capacity to colonize crops and enter the food chain. Bacteria belonging to the Bacillus genus have been recognized as plant-beneficial species in agricultural practice and identified as plant growth-promoting rhizobacteria (PGPR). Despite their incontestable potential, the full commercialization of microbial biopesticides is still in a preliminary phase. Thus, there is a constant need for estimating the suitability of novel strains to serve as the central point of a viable bioprocess leading to market-ready product development. In the present study, 76 potential producing strains were isolated from the rhizosphere soil, sampled from different localities in the Autonomous Province of Vojvodina, Republic of Serbia. The selective isolation process started by resuspending 1 g of each soil sample in 9 ml of saline and incubating at 28 °C for 15 minutes at 150 rpm. After homogenization, thermal treatment at 100 °C for 7 minutes was performed. Dilution series (10⁻¹–10⁻³) were prepared, and 500 µl of each was inoculated on nutrient agar plates and incubated at 28 °C for 48 h. Pure cultures of morphologically different strains presumptively belonging to the Bacillus genus were obtained by the spread-plate technique. The cultivation of the isolated strains was carried out in Erlenmeyer flasks for 96 h at 28 °C and 170 rpm. The antagonistic activity screening included two phytopathogenic fungi as test microorganisms: Aspergillus sp. and Fusarium sp. The mycelial growth inhibition was estimated based on the antimicrobial activity testing of the cultivation broth by the diffusion method. For Aspergillus sp., the highest antifungal activity was recorded for the isolates Kro-4a and Mah-1a. In contrast, for Fusarium sp., the following 15 isolates exhibited the highest antagonistic effect: Par-1, Par-2, Par-3, Par-4, Kup-4, Paš-1b, Pap-3, Kro-2, Kro-3a, Kro-3b, Kra-1a, Kra-1b, Šar-1, Šar-2b and Šar-4. One-way ANOVA was performed to determine the statistical significance of the antagonists' effect on inhibition zone diameter. Duncan's multiple range test was conducted to define homogeneous groups of antagonists with the same level of statistical significance regarding their effect on the antimicrobial activity of the tested cultivation broth against the tested pathogens. The study results have pointed out the significant in vitro potential of the isolated strains to be used as biocontrol agents for the suppression of the tested mycotoxigenic fungi. Further research should include the identification and detailed characterization of the most promising isolates and of the mode of action of the selected strains as biocontrol agents. The following research should also involve bioprocess optimization steps to fully reach the selected strains' potential as microbial biopesticides and to design cost-effective biotechnological production.
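For illustration only, the sketch below runs the same kind of one-way ANOVA on inhibition-zone diameters followed by a post-hoc grouping step. Duncan’s multiple range test (used in the study) is not available in SciPy/statsmodels, so Tukey’s HSD is shown as a stand-in, and the isolate names and measurements are invented rather than taken from the paper.

```python
# Illustrative sketch only: one-way ANOVA on inhibition zone diameters,
# followed by a post-hoc comparison. Duncan's multiple range test (used in
# the study) is not available in SciPy/statsmodels, so Tukey's HSD is shown
# here as a stand-in; the isolate names and measurements are invented.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
isolates = {"Kro-4a": 24.0, "Mah-1a": 22.0, "Par-1": 15.0, "control": 0.0}

# Three replicate inhibition-zone measurements (mm) per isolate (placeholder data).
records = [(name, centre + rng.normal(0, 1.5))
           for name, centre in isolates.items() for _ in range(3)]
df = pd.DataFrame(records, columns=["isolate", "zone_mm"])

groups = [g["zone_mm"].values for _, g in df.groupby("isolate")]
f_stat, p_val = f_oneway(*groups)
print(f"One-way ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

# Post-hoc pairwise comparison (Tukey HSD stand-in for Duncan's test).
print(pairwise_tukeyhsd(endog=df["zone_mm"], groups=df["isolate"], alpha=0.05))
```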

Keywords: Bacillus, biocontrol, bioprocess, mycotoxigenic fungi

Procedia PDF Downloads 172
86 Numerical Prediction of Width Crack of Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been utilized to predict the cracking of concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of dapped-end design; it has been observed that cracks exceeding the allowable widths are unacceptable in environments that are aggressive for reinforcing steel. For simulating the crack width, the discrete crack approach was adopted by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Structures and Materials Laboratory of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as on vertical and horizontal rings; in the second case, 50% of the vertical stirrups connecting the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading to reach the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements SOLID65 capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy, i.e., the energy required to break apart the interface surfaces. This technique is the CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The mode-I dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the opening crack was taken into consideration according to the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner crack. To validate the proposed approach, the results obtained with the previous procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed the load-crack width curves to be derived. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding crack width can be considered good from a practical point of view, and favorable results were also obtained for the load-displacement curve and the location of the crack.
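As a minimal sketch of the mode-I bilinear traction-separation law referenced above, the function below returns the normal traction for a given crack-opening gap. The numerical values are placeholders chosen for illustration; this is not the calibrated ANSYS CONTA173 implementation used in the paper.

```python
# Illustrative sketch only: a mode-I bilinear cohesive (traction-separation)
# law. Traction rises linearly to the maximum normal contact stress and then
# softens linearly to zero at the gap where debonding completes; the area
# under the curve is the critical fracture energy. Parameter values are
# placeholders, not the values calibrated for the ANSYS CONTA173 elements.
def bilinear_czm_traction(gap: float, sigma_max: float = 3.0e6,
                          gap_at_sigma_max: float = 1.0e-5,
                          gap_at_debond: float = 1.0e-4) -> float:
    """Normal traction (Pa) for a given crack opening gap (m)."""
    if gap <= 0.0:
        return 0.0
    if gap <= gap_at_sigma_max:                      # linear-elastic branch
        return sigma_max * gap / gap_at_sigma_max
    if gap < gap_at_debond:                          # linear softening branch
        return sigma_max * (gap_at_debond - gap) / (gap_at_debond - gap_at_sigma_max)
    return 0.0                                       # fully debonded

# Critical fracture energy = area under the bilinear curve.
G_c = 0.5 * 3.0e6 * 1.0e-4
print(f"G_c = {G_c:.1f} J/m^2")
for gap in (0.0, 5e-6, 1e-5, 5e-5, 1e-4):
    print(f"gap={gap:.1e} m -> traction={bilinear_czm_traction(gap)/1e6:.2f} MPa")
```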

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 139
85 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria

Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List

Abstract:

Over the past two decades, the automotive, aerospace and army industries have been paying increasing attention to Finite Element (FE) numerical simulations of the fracture process of their structures. Thanks to numerical simulations, it is nowadays possible to analyze safely and at reduced cost several problems involving costly and dangerous extreme loadings, such as blast or ballistic impact. The present paper is concerned with ballistic impact and perforation problems involving ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties as well as the impact conditions (friction, oblique/normal impact...). In this work, the investigations are concerned with the normal impact of a conical head-shaped projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in the fracture process simulations of structures subjected to extreme loadings such as ballistic impact and perforation. Damage initiation usually results from a complex physical process that occurs at the micromechanical scale. On a macro scale and according to the following fracture models, the variables on which fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and possibly the Lode angle parameter θ. The four failure criteria are: the critical strain to failure model, the Johnson-Cook model, the Wierzbicki model and the Modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture of the specimens is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model have been identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled to numerical simulations. Two loading paths were investigated in this study under a wide range of strain rates: quasi-static and intermediate uniaxial tension as well as quasi-static and dynamic double shear testing, covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on specimens of three different armor steels at quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ 1/s and at three different temperatures ranging from 297 K to 500 K, allowing the influence of temperature on the fracture process to be drawn. Intermediate tension testing was coupled to dynamic double shear experiments conducted on the Hopkinson tube device, allowing the effect of high strain rate on damage evolution and crack propagation to be identified. The aforementioned fracture criteria are implemented in the FE code ABAQUS via a VUMAT subroutine and coupled to suitable constitutive relations to obtain reliable simulations of ballistic impact problems. The calibration of the four damage criteria as well as a concise evaluation of the applicability of each criterion are detailed in this work.
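For orientation, the sketch below evaluates the standard Johnson-Cook failure-strain form, one of the four criteria compared above: the equivalent strain at failure depends on the stress triaxiality, the normalized strain rate and the homologous temperature. The D1–D5 coefficients and reference values are placeholders, not the parameters calibrated for the three armor steels in the paper.

```python
# Illustrative sketch only: the standard Johnson-Cook fracture criterion,
# one of the four damage models compared in the paper. The equivalent strain
# at failure depends on stress triaxiality, normalized strain rate and
# homologous temperature; the D1..D5 coefficients below are placeholders,
# not the values calibrated for the three armor steels.
import math

def johnson_cook_failure_strain(triaxiality: float, strain_rate: float,
                                temperature: float,
                                d=(0.05, 3.44, -2.12, 0.002, 0.61),
                                ref_strain_rate: float = 1e-4,
                                t_ref: float = 297.0,
                                t_melt: float = 1800.0) -> float:
    d1, d2, d3, d4, d5 = d
    t_star = (temperature - t_ref) / (t_melt - t_ref)        # homologous temperature
    rate_term = 1.0 + d4 * math.log(strain_rate / ref_strain_rate)
    return (d1 + d2 * math.exp(d3 * triaxiality)) * rate_term * (1.0 + d5 * t_star)

# Damage accumulates as D = sum(delta_eps_p / eps_f); failure is reached at D = 1.
eps_f = johnson_cook_failure_strain(triaxiality=0.33, strain_rate=1e3, temperature=400.0)
print(f"Failure strain at tension-like triaxiality: {eps_f:.3f}")
```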

Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM

Procedia PDF Downloads 288
84 Achieving Flow at Work: An Experience Sampling Study to Comprehend How Cognitive Task Characteristics and Work Environments Predict Flow Experiences

Authors: Jonas De Kerf, Rein De Cooman, Sara De Gieter

Abstract:

For many decades, scholars have aimed to understand how work can become more meaningful by maximizing potential and enhancing feelings of satisfaction. One of the largest contributions towards such positive psychology was made with the introduction of the concept of ‘flow,’ which refers to a condition in which people feel intense engagement and effortless action. Since then, valuable research on work-related flow has indicated that this state of mind is related to positive outcomes for both organizations (e.g., social, supportive climates) and workers (e.g., job satisfaction). Yet, scholars still do not fully comprehend how such deep involvement at work is obtained, given the notion that flow is considered a short-term, complex, and dynamic experience. Most research neglects that people who experience flow ought to be optimally challenged so that intense concentration is required. Because attention is at the core of this enjoyable state of mind, this study aims to comprehend how elements that affect workers’ cognitive functioning impact flow at work. Research on cognitive performance suggests that working on mentally demanding tasks (e.g., information processing tasks) requires workers to concentrate deeply, in turn leading to flow experiences. Based on social facilitation theory, working on such tasks in an isolated environment eases concentration. Prior research has indicated that working at home (instead of working at the office) or in a closed office (rather than in an open-plan office) impacts employees’ overall functioning in terms of concentration and productivity. Consequently, we advance such knowledge and propose an interaction by combining cognitive task characteristics and work environments among part-time teleworkers. Hence, we not only aim to shed light on the relation between cognitive tasks and flow but also provide empirical evidence that workers performing such tasks achieve the highest states of flow while working either at home or in closed offices. In July 2022, an experience-sampling study will be conducted that uses a semi-random signal schedule to understand how task and environment predictors together impact part-time teleworkers’ flow. More precisely, about 150 knowledge workers will fill in multiple surveys a day for two consecutive workweeks to report their flow experiences, cognitive tasks, and work environments. Preliminary results from a pilot study indicate that at the between-person level, tasks high in information processing go along with high self-reported fluent productivity (i.e., making progress). As expected, evidence was found for higher fluency in productivity for workers performing information processing tasks both at home and in a closed office, compared to those performing the same tasks at the office or in open-plan offices. This study expands the current knowledge on work-related flow by looking at task and environmental predictors that enable workers to obtain such a peak state. While doing so, our findings suggest that practitioners should strive for ideal alignments between tasks and work locations to work with both deep involvement and gratification.
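Purely as an illustration of what a semi-random signal schedule can look like in an experience-sampling design, the sketch below splits the working day into equal blocks and draws one prompt time at random within each block, with a minimum gap between prompts. The block count, day length and gap are assumptions, not the study’s actual protocol.

```python
# Illustrative sketch only: generating a semi-random signal schedule for an
# experience-sampling study. The working day is split into equal blocks and
# one prompt time is drawn at random within each block, with a minimum gap
# between consecutive prompts. Block count, day length and gap are assumptions.
import random
from datetime import datetime, timedelta

def semi_random_schedule(day_start: datetime, n_prompts: int = 5,
                         day_hours: float = 9.0, min_gap_min: int = 45,
                         seed: int | None = None) -> list[datetime]:
    rng = random.Random(seed)
    block = timedelta(hours=day_hours / n_prompts)
    prompts: list[datetime] = []
    for i in range(n_prompts):
        block_start = day_start + i * block
        earliest = block_start
        if prompts:  # enforce minimum spacing with the previous prompt
            earliest = max(earliest, prompts[-1] + timedelta(minutes=min_gap_min))
        latest = block_start + block
        if earliest >= latest:
            earliest = latest - timedelta(minutes=1)
        offset = rng.uniform(0, (latest - earliest).total_seconds())
        prompts.append(earliest + timedelta(seconds=offset))
    return prompts

for t in semi_random_schedule(datetime(2022, 7, 4, 9, 0), seed=1):
    print(t.strftime("%H:%M"))
```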

Keywords: cognitive work, office lay-out, work location, work-related flow

Procedia PDF Downloads 66
83 Developing an Integrated Clinical Risk Management Model

Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei

Abstract:

Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool with the advantages of the others. Methods: The procedure was organized in two main stages: development of an initial model during meetings with the professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a quantitative-qualitative study. For the qualitative dimension, focus groups with an inductive approach were used. To evaluate the results of the qualitative study, a quantitative assessment of two parts of the fourth phase and of the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was typically applied to patients admitted for surgery in a day-clinic ward of a public hospital from October 2012 to June. Statistical Analysis Used: Qualitative data were analyzed through content analysis, and quantitative analysis was done through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patients' admission process for surgery was developed by focus discussion group (FDG) members in five main phases. Then, with the adopted FMEA methodology, 85 failure modes along with their causes, effects, and preventive capabilities were set out in the tables. The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the determined significant risk limit (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. Each RPN criterion of the misidentification events was compared, and different RPN values for the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. A wrong-side surgery event was selected by the focus discussion group to propose improvement actions. The most important causes were lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system in the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Therefore, applying only retrospective or only prospective tools for risk management does not work, and each organization must provide conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
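As a simplified sketch of how an RPN-style index with a 250 threshold might be computed, the snippet below averages severity, probability and preventability sub-scores before multiplying them. Both the aggregation rule and the sample failure modes are assumptions for illustration, not the study’s edited RPN tables or its 85 actual failure modes.

```python
# Illustrative sketch only: computing a risk priority number (RPN) for FMEA
# failure modes and flagging those above the 250 threshold mentioned in the
# abstract. The study uses edited RPN tables with three severity, two
# probability and two preventability sub-criteria; here those sub-scores are
# simply averaged before multiplying, which is an assumption for illustration.
from statistics import mean

failure_modes = {  # hypothetical examples, not the study's 85 failure modes
    "patient misidentification": {"severity": [9, 8, 9], "probability": [6, 7], "preventability": [5, 6]},
    "wrong-side surgery":        {"severity": [10, 10, 9], "probability": [3, 4], "preventability": [8, 7]},
    "incomplete consent form":   {"severity": [4, 5, 4], "probability": [5, 5], "preventability": [3, 4]},
}

def rpn(scores: dict) -> float:
    """RPN = mean severity x mean probability x mean preventability (1-10 scales)."""
    return (mean(scores["severity"]) * mean(scores["probability"])
            * mean(scores["preventability"]))

for name, scores in failure_modes.items():
    value = rpn(scores)
    flag = "ACTION REQUIRED" if value > 250 else "acceptable"
    print(f"{name:28s} RPN={value:6.1f}  -> {flag}")
```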

Keywords: failure mode and effects analysis, risk management, root cause analysis, model

Procedia PDF Downloads 223
82 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains

Authors: Jing Jin

Abstract:

The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying the high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information, contributing to stakeholder goals, can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?" the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method -a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of ‘information distortion-by-information quality’ underscores the interconnected nature of these factors. 
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
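As a small sketch of the tier-wise Pearson correlation and network-density computations described above, the snippet below uses pandas/SciPy and networkx on invented survey scores; the column names, tier labels and significance threshold are assumptions, not the study’s questionnaire or results.

```python
# Illustrative sketch only: tier-wise Pearson correlations between an
# information-distortion score and information-quality dimensions, and the
# density of the resulting 'distortion-by-quality-dimension' network.
# The survey scores are invented; column and tier names are assumptions.
import numpy as np
import pandas as pd
import networkx as nx
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
dimensions = ["accuracy", "timeliness", "access_security", "completeness"]
tiers = ["OEM", "Tier 0.5", "Tier 1", "Tier 2"]

# Placeholder Likert-style responses: one distortion score + four quality dimensions.
df = pd.DataFrame({"tier": rng.choice(tiers, 120),
                   "distortion": rng.integers(1, 8, 120)})
for dim in dimensions:
    df[dim] = rng.integers(1, 8, 120)

G = nx.Graph()
for tier, grp in df.groupby("tier"):
    for dim in dimensions:
        r, p = pearsonr(grp["distortion"], grp[dim])
        print(f"{tier:8s} distortion vs {dim:16s} r={r:+.2f} (p={p:.3f})")
        if p < 0.05:  # connect distortion to a quality dimension if correlated
            G.add_edge(f"distortion[{tier}]", dim, weight=r)

print(f"Network density: {nx.density(G):.2f}")
```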

Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry

Procedia PDF Downloads 26
81 The Measurement of City Brand Effectiveness as Methodological and Strategic Challenge: Insights from Individual Interviews with International Experts

Authors: A. Augustyn, M. Florek, M. Herezniak

Abstract:

Since the public authorities are constantly pressured by the public opinion to showcase the tangible and measurable results of their efforts, the evaluation of place brand-related activities becomes a necessity. Given the political and social character of place branding process, the legitimization of the branding efforts requires the compliance of the objectives set out in the city brand strategy with the actual needs, expectations, and aspirations of various internal stakeholders. To deliver on the diverse promises, city authorities and brand managers need to translate them into the measurable indicators against which the brand strategy effectiveness will be evaluated. In concert with these observations are the findings from branding and marketing literature with a widespread consensus that places should adopt a more systematic and holistic approach in order to ensure the performance of their brands. However, the measurement of the effectiveness of place branding remains insufficiently explored in theory, even though it is considered a significant step in the process of place brand management. Therefore, the aim of the research presented in the current paper was to collect insights on the nature of effectiveness measurement of city brand strategies and to juxtapose these findings with the theoretical assumptions formed on the basis of the state-of-the-art literature review. To this end, 15 international academic experts (out of 18 initially selected) with affiliation from ten countries (five continents), were individually interviewed. The standardized set of 19 open-ended questions was used for all the interviewees, who had been selected based on their expertise and reputation in the fields of place branding/marketing. Findings were categorized into four modules: (i) conceptualizations of city brand effectiveness, (ii) methodological issues of city brand effectiveness measurement, (iii) the nature of measurement process, (iv) articulation of key performance indicators (KPIs). Within each module, the interviewees offered diverse insights into the subject based on their academic expertise and professional activity as consultants. They proposed that there should be a twofold understanding of effectiveness. The narrow one when it is conceived as the aptitude to achieve specific goals, and the broad one in which city brand effectiveness is seen as an increase in social and economic reality of a place, which in turn poses diverse challenges for the measurement concepts and processes. Moreover, the respondents offered a variety of insights into the methodological issues, particularly about the need for customization and flexibility of the measurement systems, for the employment of interdisciplinary approach to measurement and implications resulting therefrom. Considerable emphasis was put on the inward approach to measurement, namely the necessity to monitor the resident’s evaluation of brand related activities instead of benchmarking cities against the competitive set. Other findings encompass the issues of developing appropriate KPIs for the city brand, managing the measurement process and the inclusion of diverse stakeholders to produce a sound measurement system. Furthermore, the interviewees enumerated the most frequently made mistakes in measurement mainly resulting from the misunderstanding of the nature of city brands. This research was financed by the National Science Centre, Poland, research project no. 
2015/19/B/HS4/00380 Towards the categorization of place brand strategy effectiveness indicators – findings from strategic documents of Polish district cities – theoretical and empirical approach.

Keywords: city branding, effectiveness, experts’ insights, measurement

Procedia PDF Downloads 119
80 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach

Authors: John D. Sanderson

Abstract:

The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for the reception in countries where local audiovisual fiction consumption is far lower than that of imported American productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science-fiction, western or sword-and-sandal films, for instance, generates linguistic expectations in international audiences who will accept more easily the sociolects assimilated by the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether it is translation for dubbing or subtitling, has diachronically evolved in many cases into a status of canonized sociolect, not only accepted but also required by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include in national productions lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in order to target audiences of the former. However, this commercial strategy had already taken place decades earlier when Spain became a favored location for the shooting of foreign films in the early 1960s. The international popularity of the then newly developed sub-genre known as the Spaghetti Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound in film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings, made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglified to that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, which chronologically ranges from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of Spaghetti Westerns, from Una tumba para el sheriff (Mario Caiano; in English Lone and Angry Man, William Hawkins) to Tu fosa será la exacta, amigo (Juan Bosch, 1972; in English My Horse, My Gun, Your Widow, John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.

Keywords: dubbing, film genre, screen translation, sociolect

Procedia PDF Downloads 138
79 Songwriting in the Postdigital Age: Using TikTok and Instagram as Online Informal Learning Technologies

Authors: Matthias Haenisch, Marc Godau, Julia Barreiro, Dominik Maxelon

Abstract:

In times of ubiquitous digitalization and the increasing entanglement of humans and technologies in musical practices in the 21st century, the question arises of how popular musicians learn in the (post)digital age. Against the backdrop of the increasing interest in transferring informal learning practices into formal settings of music education, the interdisciplinary research association »MusCoDA – Musical Communities in the (Post)Digital Age« (University of Erfurt/University of Applied Sciences Clara Hoffbauer Potsdam), funded by the German Ministry of Education and Research, pursues the goal of deriving an empirical model of collective songwriting practices from the study of the informal learning of songwriters and bands, a model that can be translated into pedagogical concepts for music education in schools. Drawing on concepts from Community of Musical Practice and Actor-Network Theory, learning is considered not only as social practice and as participation in online and offline communities, but also as an effect of heterogeneous networks composed of human and non-human actors. Learning is not seen as an individual, cognitive process, but as the formation and transformation of actor networks, i.e., as a practice of assembling and mediating humans and technologies. Based on video-stimulated recall interviews and videography of online and offline activities, songwriting practices are followed from the initial idea to different forms of performance and distribution. The data evaluation combines coding and mapping methods of Grounded Theory Methodology and Situational Analysis. This results in network maps in which both the temporality of creative practices and the material and spatial relations of human and technological actors are reconstructed. In addition, positional analyses document the power relations between the participants that structure the learning process of the field. In the area of online informal learning, initial key research findings reveal a transformation of the learning subject through the specific technological affordances of TikTok and Instagram and the accompanying changes in the learning practices of the corresponding online communities. Learning is explicitly shaped by the material agency of online tools and features and the social practices entangled with these technologies. Thus, any human online community member can be invited to directly intervene in creative decisions that contribute to the further compositional and structural development of songs. At the same time, participants can provide each other with intimate insights into songwriting processes in progress and have the opportunity to perform together with strangers and idols. Online learning is characterized by an increase in social proximity, the distribution of creative agency, and informational exchange between participants. While it seems obvious that traditional notions not only of learning but also of the learning subject cannot be maintained, the question arises as to how exactly the observed informal learning practices, and the subject that emerges from the use of social media as online learning technologies, can be transferred into contexts of formal learning.

Keywords: informal learning, postdigitality, songwriting, actor-network theory, community of musical practice, social media, TikTok, Instagram, apps

Procedia PDF Downloads 101
78 Promoting Resilience in Adolescents: Integrating Adolescent Medicine and Child Psychology Perspectives

Authors: Xu Qian

Abstract:

This abstract examines the concept of resilience in adolescents from both adolescent medicine and child psychology perspectives. It discusses the role of healthcare providers in fostering resilience among adolescents, encompassing physical, psychological, and social aspects. The paper highlights evidence-based interventions and practical strategies for promoting resilience in this population. Introduction: Resilience plays a crucial role in the healthy development of adolescents, enabling them to navigate through the challenges of this transitional period. This abstract explores the concept of resilience from the perspectives of adolescent medicine and child psychology, shedding light on the collective efforts of healthcare providers in fostering resilience. By integrating the principles and practices of these two disciplines, this abstract emphasizes the multidimensional nature of resilience and its significance in the overall well-being of adolescents. Methods: A comprehensive literature review was conducted, encompassing research articles, empirical studies, and expert opinions from both adolescent medicine and child psychology fields. The search included databases such as PubMed, PsycINFO, and Google Scholar, focusing on publications from the past decade. The review aimed to identify evidence-based interventions and practical strategies employed by healthcare providers to promote resilience among adolescents. Results: The review revealed several key findings regarding the promotion of resilience in adolescents. Firstly, resilience is a dynamic process influenced by individual characteristics, environmental factors, and the interaction between the two. Secondly, healthcare providers play a critical role in fostering resilience by addressing the physical, psychological, and social needs of adolescents. This entails comprehensive healthcare services that integrate medical care, mental health support, and social interventions. Thirdly, evidence-based interventions such as cognitive-behavioral therapy, social skills training, and positive youth development programs have shown promising outcomes in enhancing resilience. Discussion: The integration of adolescent medicine and child psychology perspectives provides a comprehensive framework for promoting resilience in adolescents. By acknowledging the interplay between physical health, psychological well-being, and social functioning, healthcare providers can tailor interventions to address the specific needs and challenges faced by adolescents. Collaborative efforts between medical professionals, psychologists, educators, and families are vital in creating a supportive environment that fosters resilience. Additionally, the findings highlight the importance of early identification and intervention, emphasizing the need for routine screening and assessment to identify adolescents at risk and provide timely support. Conclusion: Promoting resilience in adolescents requires a holistic approach that integrates adolescent medicine and child psychology perspectives. By recognizing the multifaceted nature of resilience, healthcare providers can implement evidence-based interventions and practical strategies to enhance the well-being of adolescents. The collaboration between healthcare professionals from different disciplines, alongside the involvement of families and communities, is crucial for creating a resilient support system. 
By investing in the promotion of resilience during adolescence, we can empower young individuals to overcome adversity and thrive in their journey toward adulthood.

Keywords: psychology, clinical psychology, child psychology, adolescent psychology, adolescent

Procedia PDF Downloads 50