Search results for: common bile duct exploration
307 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate
Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori
Abstract:
Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, along with focusing on improving the quality of their products, are also focused on producing products with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert H₂S gas coming from the upstream units to elemental sulfur and minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, including a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually consist of a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to an optimum recovery of sulfur during the flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feed-back control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for different reasons, like the low importance of environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of the malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation. The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This increase in temperature caused the failure of TGT and increased the concentration of SO₂ from 750 ppm to 35,000 ppm. In addition, the lack of a control system for the adjustment of the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly due to difficulty in measurement or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is also controlled along with the secondary variable. This strategy for controlling a process system is referred to as 'inferential control' and is considered in this paper. Therefore, a sensitivity analysis was performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the output temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation led to maximizing the sulfur recovery in the Claus section. Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission
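For readers unfamiliar with the control strategy described above, the following minimal Python sketch illustrates the general idea of inferential air-demand control: a feed-forward term proportional to the acid gas flow plus a feed-back trim computed from the first Claus reactor outlet temperature. It is an illustrative sketch only, not the authors' implementation; the class names, gains, setpoint, stoichiometric ratio, and the sign of the trim are assumed placeholders.

```python
# Illustrative sketch of inferential air-demand control (not the paper's code).
# The stoichiometric ratio, controller gains, and temperature setpoint are
# hypothetical; the trim direction must be set from plant calibration.

class PIController:
    def __init__(self, kp: float, ki: float, setpoint: float):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measurement: float, dt: float) -> float:
        error = self.setpoint - measurement
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral


def combustion_air_demand(acid_gas_flow: float, reactor_outlet_temp: float,
                          trim: PIController, dt: float,
                          stoich_ratio: float = 2.38) -> float:
    """Feed-forward air demand from the acid gas flow (ratio is a placeholder),
    corrected by a feed-back trim inferred from the first Claus reactor
    outlet temperature instead of the tail gas analyzer signal."""
    feed_forward = stoich_ratio * acid_gas_flow
    return feed_forward + trim.update(reactor_outlet_temp, dt)


# Example: trim the air flow once per minute around an assumed 310 degC setpoint
controller = PIController(kp=0.05, ki=0.001, setpoint=310.0)
print(combustion_air_demand(acid_gas_flow=100.0, reactor_outlet_temp=318.0,
                            trim=controller, dt=60.0))
```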
Procedia PDF Downloads 76
306 The Derivation of a Four-Strain Optimized Mohr's Circle for Use in Experimental Reinforced Concrete Research
Authors: Edvard P. G. Bruun
Abstract:
One of the best ways of improving our understanding of reinforced concrete is through large-scale experimental testing. The gathered information is critical in making inferences about structural mechanics and deriving the mathematical models that are the basis for finite element analysis programs and design codes. An effective way of measuring the strains across a region of a specimen is by using a system of surface-mounted Linear Variable Differential Transformers (LVDTs). While a single LVDT can only measure the linear strain in one direction, by combining several measurements at known angles a Mohr’s circle of strain can be derived for the whole region under investigation. This paper presents a method that can be used by researchers, which improves the accuracy and removes experimental bias in the calculation of the Mohr’s circle, using four rather than three independent strain measurements. Obtaining high quality strain data is essential, since the angular deviation (shear strain) and the angle of principal strain in the region are important properties in characterizing the governing structural mechanics. For example, the Modified Compression Field Theory (MCFT), developed at the University of Toronto, is a rotating crack model that requires knowing the direction of the principal stress and strain, and then calculates the average secant stiffness in this direction. But since LVDTs can only measure average strains across a plane (i.e., between discrete points), localized cracking and spalling that typically occur in reinforced concrete can lead to unrealistic results. To build in redundancy and improve the quality of the data gathered, the typical experimental setup for a large-scale shell specimen has four independent directions (X, Y, H, and V) that are instrumented. The question now becomes: which three should be used? The most common approach is to simply discard one of the measurements. The problem is that this can produce drastically different answers, depending on the three strain values that are chosen. To overcome this experimental bias, and to avoid simply discarding valuable data, a more rigorous approach would be to somehow make use of all four measurements. This paper presents the derivation of a method to draw what is effectively a Mohr’s circle of 'best-fit', which optimizes the circle by using all four independent strain values. The four-strain optimized Mohr’s circle approach has been utilized to process data from recent large-scale shell tests at the University of Toronto (Ruggiero, Proestos, and Bruun), where analysis of the test data has shown that the traditional three-strain method can lead to widely different results. This paper presents the derivation of the method and shows its application in the context of two reinforced concrete shells tested in pure torsion. In general, the constitutive models and relationships that characterize reinforced concrete are only as good as the experimental data that is gathered; ensuring that a rigorous and unbiased approach exists for calculating the Mohr’s circle of strain during an experiment is of utmost importance to the structural research community. Keywords: reinforced concrete, shell tests, Mohr’s circle, experimental research
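As an illustration of the idea of a best-fit Mohr's circle, the following Python sketch fits the plane-strain state (εx, εy, γxy) to four LVDT readings by ordinary least squares, using the standard strain-transformation relation ε(θ) = εx·cos²θ + εy·sin²θ + γxy·sinθ·cosθ, and then recovers the circle's center, radius, and principal direction. This is one way such a best-fit could be posed; it is not necessarily the derivation presented in the paper, and the instrument angles and strain values shown are hypothetical.

```python
import numpy as np

def best_fit_mohr_circle(angles_deg, strains):
    """Least-squares Mohr's circle of strain from several linear strain readings.
    Each reading obeys eps(theta) = ex*cos^2(t) + ey*sin^2(t) + gxy*sin(t)*cos(t);
    with four readings the system is overdetermined, so every measurement
    contributes to the fitted state and none has to be discarded."""
    t = np.radians(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.cos(t) ** 2, np.sin(t) ** 2, np.sin(t) * np.cos(t)])
    (ex, ey, gxy), *_ = np.linalg.lstsq(A, np.asarray(strains, dtype=float), rcond=None)
    center = (ex + ey) / 2.0
    radius = np.hypot((ex - ey) / 2.0, gxy / 2.0)
    theta_p = 0.5 * np.degrees(np.arctan2(gxy, ex - ey))  # principal strain direction
    return {"eps_1": center + radius, "eps_2": center - radius,
            "theta_p_deg": theta_p, "center": center, "radius": radius}

# Hypothetical LVDT directions: X = 0 deg, Y = 90 deg, H = 45 deg, V = 135 deg
print(best_fit_mohr_circle([0, 90, 45, 135], [1.2e-3, -0.4e-3, 0.9e-3, -0.1e-3]))
```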
Procedia PDF Downloads 236
305 Modern Architecture and the Scientific World Conception
Authors: Sean Griffiths
Abstract:
Introduction: This paper examines the expression of ‘objectivity’ in architecture in the context of the post-war rejection of this concept. It aims to re-examine the question in light of the assault on truth characterizing contemporary culture and of the unassailable truth of the climate emergency. The paper analyses the search for objective truth as it was prosecuted in the Modern Movement in the early 20th century, looking at the extent to which this quest was successful in contributing to the development of a radically new, politically-informed architecture and the extent to which its particular interpretation of objectivity, limited that development. The paper studies the influence of the Vienna Circle philosophers Rudolph Carnap and Otto Neurath on the pedagogy of the Bauhaus and the architecture of the Neue Sachlichkeit in Germany. Their logical positivism sought to determine objective truths through empirical analysis, expressed in an austere formal language as part of a ‘scientific world conception’ which would overcome metaphysics and unverifiable mystification. These ideas, and the concurrent prioritizing of measurement as the determinant of environmental quality, became key influences in the socially-driven architecture constructed in the 1920s and 30s by Bauhaus architects in numerous German Cities. Methodology: The paper reviews the history of the early Modern Movement and summarizes accounts of the relationship between the Vienna Circle and the Bauhaus. It looks at key differences in the approaches Neurath and Carnap took to the achievement of their shared philosophical and political aims. It analyses how the adoption of Carnap’s foundationalism influenced the architectural language of modern architecture and compares, through a close reading of the structure of Neurath’s ‘protocol sentences,’ the latter’s alternative approach, speculating on the possibility that its adoption offered a different direction of travel for Modern Architecture. Findings: The paper finds that the adoption of Carnap’s foundationalism, while helping Modern Architecture forge a new visual language, ultimately limited its development and is implicated in its failure to escape the very metaphysics against which it had set itself. It speculates that Neurath’s relational language-based approach to the issue of establishing objectivity has its architectural corollary in the process of revision and renovation that offers new ways an ‘objective’ language of architecture might be developed in a manner that is more responsive to our present-day crisis. Conclusion: The philosophical principles of the Vienna Circle and the architects of the Modern Movement had much in common. Both contributed to radical historical departures which sought to instantiate a world scientific conception in their respective fields, which would attempt to banish mystification and metaphysics and would align itself with socialism. However, in adopting Carnap’s foundationalism as the theoretical basis for the new architecture, Modern Architecture not only failed to escape metaphysics but arguably closed off new avenues of development to itself. The adoption of Neurath’s more open-ended and interactive approach to objectivity offers possibilities for new conceptions of the expression of objectivity in architecture that might be more tailored to the multiple crises we face today.Keywords: Bauhaus, logical positivism, Neue Sachlichkeit, rationalism, Vienna Circle
Procedia PDF Downloads 87
304 Trends of Conservation and Development in Mexican Biosphere Reserves: Spatial Analysis and Linear Mixed Model
Authors: Cecilia Sosa, Fernanda Figueroa, Leonardo Calzada
Abstract:
Biosphere reserves (BR) are considered the main strategy for biodiversity and ecosystems conservation. Mexican BR are mainly inhabited by rural communities who strongly depend on forests and their resources. Even though the dual objective of conservation and development has been sought in BR, land cover change is a common process in these areas, while most rural communities are highly marginalized, partly as a result of restrictions imposed by conservation on the access and use of resources. Achieving ecosystems conservation and social development faces serious challenges. Factors such as financial support for development projects (public/private), environmental conditions, infrastructure and regional economic conditions might influence both land use change and wellbeing. Examining the temporal trends of conservation and development in BR is central for the evaluation of outcomes for these conservation strategies. In this study, we analyzed changes in primary vegetation cover (as a proxy for conservation) and the index of marginalization (as a proxy for development) in Mexican BR (2000-2015); we also explore the influence of various factors affecting these trends, such as financial support for conservation-development projects (public or private), geographical distribution in ecoregions (as a proxy for shared environmental conditions) and in economic zones (as a proxy for regional economic conditions). We developed a spatial analysis at the municipal scale (2,458 municipalities nationwide) in ArcGIS to obtain road densities, geographical distribution in ecoregions and economic zones, the financial support received, and the percent of municipality area under protection by protected areas and, particularly, by BR. Those municipalities with less than 25% of area under protection were regarded as part of the protected area. We obtained marginalization indexes for all municipalities and, using MODIS in Google Earth Engine, the number of pixels covered by primary vegetation. We used a linear mixed model in RStudio for the analysis. We found a positive correlation between the marginalization index and the percent of primary vegetation cover per year (r=0.49-0.5); i.e., municipalities with higher marginalization also show a higher percent of primary vegetation cover. Also, those municipalities with a higher area under protection have more development projects (r=0.46), and some environmental conditions were relevant for the percent of vegetation cover. Time, economic zones and the marginalization index were all important; time was particularly influential in 2005, when both marginalization and deforestation decreased. Road densities and financial support for conservation-development projects were irrelevant as factors in the general correlation. Marginalization is still being affected by the conservation strategies applied in BR, even though this management category considers both conservation and development of local communities as its objectives. Our results suggest that road densities and support for conservation-development projects have not been a factor of poverty alleviation. As better conservation is being attained in the most impoverished areas, we face the dilemma of how to improve wellbeing in rural communities under conservation, since current strategies have not been able to leave behind the conservation-development contraposition. Keywords: deforestation, local development, marginalization, protected areas
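As a rough illustration of how a municipality-level linear mixed model of this kind could be specified, the sketch below uses Python's statsmodels; the study fit its model in RStudio, so this is only an analogue, and all column and file names are assumptions rather than the authors' variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical municipality-year panel; column names are placeholders.
df = pd.read_csv("municipality_panel.csv")

# Random intercept per municipality; fixed effects for marginalization,
# economic zone, and year, loosely mirroring the factors described above.
model = smf.mixedlm(
    "pct_primary_vegetation ~ marginalization_index + C(economic_zone) + C(year)",
    data=df,
    groups=df["municipality_id"],
)
result = model.fit()
print(result.summary())

# Simple association check analogous to the reported r = 0.49-0.5
print(df[["pct_primary_vegetation", "marginalization_index"]].corr())
```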
Procedia PDF Downloads 137
303 Influence Study of the Molar Ratio between Solvent and Initiator on the Reaction Rate of Polyether Polyols Synthesis
Authors: María José Carrero, Ana M. Borreguero, Juan F. Rodríguez, María M. Velencoso, Ángel Serrano, María Jesús Ramos
Abstract:
Flame retardants are incorporated in different materials in order to reduce the risk of fire, either by providing increased resistance to ignition, or by acting to slow down combustion and thereby delay the spread of flames. In this work, polyether polyols with fire retardant properties were synthesized due to their wide application in polyurethane formulation. The combustion of polyurethanes is primarily dependent on the thermal properties of the polymer, the presence of impurities and formulation residue in the polymer as well as the supply of oxygen. There are many types of flame retardants; most of them are phosphorus compounds of different nature and functionality. The addition of these compounds is the most common method for the incorporation of flame retardant properties. The employment of glycerol phosphate sodium salt as initiator for the polyol synthesis allows obtaining polyols with phosphate groups in their structure. However, some of the critical points of the use of glycerol phosphate salt are: the lower reactivity of the salt and the necessity of a solvent (dimethyl sulfoxide, DMSO). Thus, the main aim in the present work was to determine the amount of the solvent needed to get a good solubility of the initiator salt. Although the anionic polymerization mechanism of polyether formation is well known, it seems convenient to clarify the role that DMSO plays at the starting point of the polymerization process. The catalyst deprotonates the hydroxyl groups of the initiator and, as a result, two water molecules and the glycerol phosphate alkoxide are formed. This alkoxide, together with DMSO, has to form a homogeneous mixture where the initiator (solid) and the propylene oxide (PO) are soluble enough to mutually interact. The addition rate of PO increased when the studied solvent/initiator ratios were increased, which also made the initiation step shorter. Furthermore, the molecular weight of the polyol decreased when higher solvent/initiator ratios were used, which revealed that a larger amount of salt was activated, initiating more chains of lower length but allowing more phosphate molecules to react and increasing the percentage of phosphorus in the final polyol. However, the final phosphorus content was lower than the theoretical one because only a percentage of the salt was activated. On the other hand, glycerol phosphate disodium salt was still partially insoluble at the studied DMSO proportions; thus, the recovery and reuse of this part of the salt for the synthesis of new flame retardant polyols was evaluated. In the recovered salt case, the rate of addition of PO remained the same as with the commercial salt, but a shorter induction period was observed because the recovered salt presents a higher amount of deprotonated hydroxyl groups. Besides, according to molecular weight, polydispersity index, FT-IR spectrum and thermal stability, there were no differences between both synthesized polyols. Thus, it is possible to use the recovered glycerol phosphate disodium salt in the same way as the commercial one. Keywords: DMSO, fire retardants, glycerol phosphate disodium salt, recovered initiator, solvent
Procedia PDF Downloads 279
302 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Differential Time-Domain Simulation
Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang
Abstract:
Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue’s displacement in response to mechanical waves. The estimated metrics on tissue elasticity or stiffness have been shown to be valuable for monitoring physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches feature characteristics of training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by Finite Differential Time Domain (FDTD) simulation are used for network training, and U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aimed to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error or RMSE divided by averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the estimated shear modulus by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than 78.20%±1.11% by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method’s performance on phantom and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method showed much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation
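The relative-error metric used above to compare the methods (RMSE divided by the averaged shear modulus) can be written compactly; the sketch below is a plain restatement of that definition with hypothetical array names and an illustrative phantom map, not code from the study.

```python
import numpy as np

def relative_error(true_map, estimated_map):
    """Relative error as described in the abstract: RMSE between the true and
    estimated shear modulus maps, divided by the average true shear modulus."""
    true_map = np.asarray(true_map, dtype=float)
    estimated_map = np.asarray(estimated_map, dtype=float)
    rmse = np.sqrt(np.mean((estimated_map - true_map) ** 2))
    return rmse / true_map.mean()

# Hypothetical example: a uniform 5 kPa map with small Gaussian estimation noise
true = np.full((64, 64), 5.0)                       # kPa
est = true + np.random.normal(0.0, 0.25, true.shape)
print(f"relative error: {relative_error(true, est):.2%}")
```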
Procedia PDF Downloads 68
301 Biological Monitoring: Vegetation Cover, Bird Assemblages, Rodents, Terrestrial and Aquatic Invertebrates from a Closed Landfill
Authors: A. Cittadino, P. Gantes, C. Coviella, M. Casset, A. Sanchez Caro
Abstract:
Three currently active landfills receive the waste from Buenos Aires city and the Greater Buenos Aires suburbs. One of the first landfills to receive solid waste from this area was located in Villa Dominico, some 7 km south of Buenos Aires City. With an area of some 750 ha, including riparian habitats, divided into 14 cells, it received solid wastes from June 1979 through February 2004. In December 2010, a biological monitoring program, still operational to date, was set up by CEAMSE and Universidad Nacional de Lujan. The aim of the monitoring program is to assess the state of several biological groups within the landfill and to follow their dynamics over time in order to identify any early signs of damage that the landfill activities might have on the biota present. Bird and rodent populations, aquatic and terrestrial invertebrate populations, cell vegetation coverage, and the vegetation coverage and main composition of surrounding areas are followed by quarterly samplings. Bird species richness and abundance were estimated by observation over walking transects in each environment. A total of 74 different species of birds were identified. Species richness and diversity were high for both riparian surrounding areas and within the landfill. Several grassland bird species, typical of the 'Pampa', were found within the landfill, as well as some migratory and endangered bird species. Sherman and Tomahawk traps are set overnight for small mammal sampling. Rodent populations are just above detection limits, and the few specimens captured belong mainly to species common to rural areas, instead of city-dwelling species. The two marsupial species present in the region were captured on occasion. Aquatic macroinvertebrates were sampled on a watercourse upstream and downstream of the outlet of the landfill’s wastewater treatment plant and are used to follow water quality using biological indices. Water quality ranged between weak and severe pollution; benthic invertebrates sampled before and after the landfill show no significant differences in water quality using the IBMWP index. Insect biota from yellow sticky cards and pitfall traps showed over 90 different morphospecies, with the Shannon diversity index ranging from 1.9 to 3.9, strongly affected by the season. An easy-to-perform method that does not require expert knowledge was used to assess vegetation coverage. Two scales of determination are utilized: field observation (1 m resolution), and Google Earth images (which allow for better than 5 m resolution). Over the eight-year period of the study, vegetation coverage of the landfill cells ranged from a low of 83% to 100% on different cells, with an average between 95 and 99% for the entire landfill depending on seasonality. Surrounding area vegetation showed almost 100% coverage during the entire period, with an average density of 2 to 6 species per square meter and no signs of leachate-damaged vegetation. Keywords: biological indicators, biota monitoring, landfill species diversity, waste management
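For reference, the Shannon diversity index reported for the insect morphospecies (values between 1.9 and 3.9) is computed as H' = -Σ pᵢ ln pᵢ over the relative abundances pᵢ. A minimal sketch with hypothetical counts follows; the count values are illustrative only, not data from the monitoring program.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over morphospecies counts."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]            # ignore morphospecies absent from the sample
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

# Hypothetical sticky-card sample: individuals counted per morphospecies
print(shannon_index([12, 7, 5, 3, 3, 2, 1, 1]))
```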
Procedia PDF Downloads 140
300 Criticality of Socio-Cultural Factors in Public Policy: A Study of Reproductive Health Care in Rural West Bengal
Authors: Arindam Roy
Abstract:
Public policy is an intriguing terrain, which involves a complex interplay of administrative, social, political and economic components. There is hardly any fit-for-all formulation of public policy, which Lindblom aptly characterized as a science of 'muddling through'. In fact, policies are both temporally and contextually determined, as one of the proponents of the policy sciences, Harold D. Lasswell, underscored in his 'contextual-configurative analysis' as early as the 1950s. Though a lot of theoretical effort has been made to make sense of the intricate dynamics of policy making, at the end of the day the applied area of public policy negates any such uniform, planned and systematic formulation. However, our policy makers seem to have learnt very little of that. Until recently, policy making was deemed an absolutely specialized exercise to be conducted by a cadre of professionally trained, seasoned mandarins. Attributes like homogeneity, impartiality, efficiency, and neutrality were considered the watchwords of delivering common goods. The citizen or clientele was conceptualized as a universal political or economic construct, to be taken care of uniformly. Moreover, policy makers usually have the proclivity to put everything into a straitjacket and to ignore the nuances therein. Hence, the least attention has been given to the ground-level reality, especially the socio-cultural milieu where the policy is supposed to be applied. Consequently, a substantial amount of public money goes in vain as the intended beneficiaries remain indifferent to the delivery of public policies. The present paper, in the light of reproductive health care policy in rural West Bengal, tries to underscore the criticality of socio-cultural factors in public health delivery. The Indian health sector has traversed a long way. From a near non-existent system at the time of independence, the Indian state has gradually built a country-wide network of health infrastructure. Yet it has to make a major breakthrough in terms of coverage and penetration of the health services in the rural areas. Several factors are held responsible for this state of affairs. These include lack of proper infrastructure, medicine, communication, ambulatory services, doctors, nursing services and trained birth attendants. Policy makers have underlined the importance of the supply side in policy formulation and implementation. The successive policy documents concerning health delivery bear testimony to it. The present paper seeks to interrogate the supply-side oriented explanations for the failure of the delivery of health services. Instead, it looks to the demand side to find the answer. The state-led and bureaucratically engineered public health measures fail to engender demand, as these measures mostly ignore the socio-cultural nuances of health and well-being. Hence, the hiatus between the supply side and the demand side leads to huge wastage of revenue, as health infrastructure, medicine and instruments remain unutilized in most cases. Therefore, taking proper cognizance of these factors could have streamlined the delivery of public health. Keywords: context, policy, socio-cultural factor, uniformity
Procedia PDF Downloads 317
299 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing
Authors: Yohann R. J. Thomas, Sébastien Solan
Abstract:
Conventional battery technology includes the assembly of electrode/separator/electrode by standard techniques such as stacking or winding, depending on the format size. In that type of batteries, coating or pasting techniques are only used for the electrode process. The processes are suited for large scale production of batteries and perfectly adapted to plenty of application requirements. Nevertheless, as the demand for both easier and cost-efficient production modes, flexible, custom-shaped and efficient small sized batteries is rising. Thin-film, printable batteries are one of the key areas for printed electronics. In the frame of European BASMATI project, we are investigating the feasibility of a new design of lithium-ion battery: interdigitated planar core design. Polymer substrate is used to produce bendable and flexible rechargeable accumulators. Direct fully printed batteries lead to interconnect the accumulator with other electronic functions for example organic solar cells (harvesting function), printed sensors (autonomous sensors) or RFID (communication function) on a common substrate to produce fully integrated, thin and flexible new devices. To fulfill those specifications, a high resolution printing process have been selected: Aerosol jet printing. In order to fit with this process parameters, we worked on nanomaterials formulation for current collectors and electrodes. In addition, an advanced printed polymer-electrolyte is developed to be implemented directly in the printing process in order to avoid the liquid electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors has been studied and printed successfully. An ink of commercial copper nanoparticles has been formulated and printed, then a flash sintering was applied to the interdigitated design. A gold ink was also printed, the resulting material was partially self-sintered and did not require any high temperature post treatment. Finally, carbon nanotubes were also printed with a high resolution and well defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were efficaciously printed. For anodes, LTO and graphite have shown to be good candidates for the fully printed battery. The electrochemical performances of those materials have been evaluated in a standard coin cell with lithium-metal counter electrode and the results are similar with those of a traditional ink formulation and process. A jellified plastic crystal solid state electrolyte has been developed and showed comparable performances to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks. In a first place, we will synthesize and formulate new specific nano-materials based on metal-oxyde. Then a fully printed device will be produced and its electrochemical performance will be evaluated.Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes
Procedia PDF Downloads 251
298 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics
Authors: Michael Wendland
Abstract:
This paper presents a framework to teach a methodical design approach by the help of using a mixture of game-based learning, design challenges and competitions as forms of direct assessment. In today’s world, developing products is more complex than ever. Conflicting goals of product cost and quality with limited time as well as post-pandemic part shortages increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by helping the users with their methodical framework. Due to the inherent complexity of these products, the number of involved resources and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in one semester course. But, for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. Therefore, it is necessary to teach these design approaches in a real-world setting and keep the motivation high as well as learning to manage upcoming problems. This is achieved by using a game-based approach and a set of design challenges that are given to the students. In order to mimic industrial collaboration, they work in teams of up to six participants and are given the main development target to design a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, a constricted time-frame and limited maximal cost, the students are subjected to similar boundary conditions as in the real world. They must follow the methodical approach steps by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing and building a prototype using rapid prototyping. At the end of the course, the prototypes will be entered into a contest against the other teams. The complete design process is accompanied by theoretical input via lectures which is immediately transferred by the students to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen, i.e. designing the first concepts is supported by using lego construction kits. After each challenge, mandatory online quizzes help to deepen the acquired knowledge of the students and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in presence and involves all teams with their functional prototypes that now need to contest against each other. Prices for the best mechanical design, the most innovative approach and for the winner of the robotic contest are awarded. Each robot design gets evaluated with regards to the specified requirements and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the reality of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students and the effectiveness of their collaboration as well as a discussion of the potential transfer to other educational areas.Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment
Procedia PDF Downloads 67
297 Human Facial Emotion: A Comparative and Evolutionary Perspective Using a Canine Model
Authors: Catia Correia Caeiro, Kun Guo, Daniel Mills
Abstract:
Despite its growing interest, emotions are still an understudied cognitive process and their origins are currently the focus of much debate among the scientific community. The use of facial expressions as traditional hallmarks of discrete and holistic emotions created a circular reasoning due to a priori assumptions of meaning and its associated appearance-biases. Ekman and colleagues solved this problem and laid the foundations for the quantitative and systematic study of facial expressions in humans by developing an anatomically-based system (independent from meaning) to measure facial behaviour, the Facial Action Coding System (FACS). One way of investigating emotion cognition processes is by applying comparative psychology methodologies and looking at either closely-related species (e.g. chimpanzees) or phylogenetically distant species sharing similar present adaptation problems (analogy). In this study, the domestic dog was used as a comparative animal model to look at facial expressions in social interactions in parallel with human facial expressions. The orofacial musculature seems to be relatively well conserved across mammal species and the same holds true for the domestic dog. Furthermore, the dog is unique in having shared the same social environment as humans for more than 10,000 years, facing similar challenges and acquiring a unique set of socio-cognitive skills in the process. In this study, the spontaneous facial movements of humans and dogs were compared when interacting with hetero- and conspecifics as well as in solitary contexts. In total, 200 participants were examined with FACS and DogFACS (The Dog Facial Action Coding System): coding tools across four different emotionally-driven contexts: a) Happiness (play and reunion), b) anticipation (of positive reward), c) fear (object or situation triggered), and d) frustration (negation of a resource). A neutral control was added for both species. All four contexts are commonly encountered by humans and dogs, are comparable between species and seem to give rise to emotions from homologous brain systems. The videos used in the study were extracted from public databases (e.g. Youtube) or published scientific databases (e.g. AM-FED). The results obtained allowed us to delineate clear similarities and differences on the flexibility of the facial musculature in the two species. More importantly, they shed light on what common facial movements are a product of the emotion linked contexts (the ones appearing in both species) and which are characteristic of the species, revealing an important clue for the debate on the origin of emotions. Additionally, we were able to examine movements that might have emerged for interspecific communication. Finally, our results are discussed from an evolutionary perspective adding to the recent line of work that supports an ancient shared origin of emotions in a mammal ancestor and defining emotions as mechanisms with a clear adaptive purpose essential on numerous situations, ranging from maintenance of social bonds to fitness and survival modulators.Keywords: comparative and evolutionary psychology, emotion, facial expressions, FACS
Procedia PDF Downloads 434
296 Functionalizing Gold Nanostars with Ninhydrin as Vehicle Molecule for Biomedical Applications
Authors: Swati Mishra
Abstract:
In recent years, there has been an explosion in Gold NanoParticle (GNP) research, with a rapid increase in publications in diverse fields, including imaging, bioengineering, and molecular biology. GNPs exhibit unique physicochemical properties, including surface plasmon resonance (SPR), and bind amine and thiol groups, allowing surface modification and use in biomedical applications. Nanoparticle functionalization is the subject of intense research at present, with rapid progress being made towards developing biocompatible, multi-functional particles. In the present study, a photochemical method has been used to functionalize variously shaped GNPs, such as nanostars, with molecules like ninhydrin. Ninhydrin is bactericidal, virucidal, fungicidal, antigen-antibody reactive, and used in fingerprint technology in forensics. GNPs efficiently functionalized with ninhydrin will bind to the amino acids on the target protein, which is of eminent importance during the pandemic, especially where long-term treatments of COVID-19 bring many drug side effects. The photochemical method is adopted as it provides low thermal load, selective reactivity, selective activation, and controlled radiation in time, space, and energy. The GNPs exhibit their characteristic spectrum, but a distinct blue or red shift in the peak will be observed after UV irradiation, confirming efficient ninhydrin binding. Now, the bound ninhydrin in the GNP carrier, upon chemically reacting with any amino acid, will lead to the formation of Ruhemann's purple. A common method of GNP production includes citrate reduction of Au[III] derivatives such as chloroauric acid (HAuCl₄) in water to Au[0] through a one-step synthesis of size-tunable GNPs. The following reagents are prepared to validate the approach: Reagent A: solution 1, i.e. 0.0175 grams of ninhydrin in 5 ml Millipore water; Reagent B: 30 µl of HAuCl₄·3H₂O in 3 ml of solution 1; Reagent C: 1 µl of gold nanostars in 3 ml of solution 1; Reagent D: 6 µl of cetrimonium bromide (CTAB) in 3 ml of solution 1; Reagent E: 1 µl of gold nanostars in 3 ml of ethanol; Reagent F: 30 µl of HAuCl₄·3H₂O in 3 ml of ethanol; Reagent G: 30 µl of HAuCl₄·3H₂O in 3 ml of solution 2; Reagent H: solution 2, i.e. 0.0087 grams of ninhydrin in 5 ml Millipore water; Reagent I: 30 µl of HAuCl₄·3H₂O in 3 ml of water. The reagents were irradiated at 254 nm for 15 minutes, followed by UV-visible spectroscopy. The wavelength was selected based on the one reported for excitation of a similar molecule, phthalimide. It was observed that solutions B and G deviate around 600 nm, while C peaks distinctly at 567.25 nm and 983.9 nm. Though it is difficult to specify the chemical reaction taking place, ATR-FTIR of the reagents will ensure that ninhydrin is not forming Ruhemann's purple in the absence of amino acids. Therefore, in these experiments, we achieved the functionalization of gold nanostars with ninhydrin, corroborated by the deviation in the spectrum obtained from a mixture of GNPs and ninhydrin irradiated with UV light. This prepares them as carrier molecules to take up amino acids for targeted delivery or germicidal action. Keywords: gold nanostars, ninhydrin, photochemical method, UV-visible spectroscopy
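The ninhydrin concentrations implied by the two stock solutions can be checked with a short calculation; the masses and volumes come from the recipes above, while the molar mass of ninhydrin (C₉H₆O₄, about 178.14 g/mol) is standard chemistry data rather than a value from the paper.

```python
# Back-of-the-envelope molarity check for the two ninhydrin stock solutions.
# Masses and volumes are taken from the reagent recipes above; the molar mass
# of ninhydrin (C9H6O4) is standard chemistry data, not from the paper.
NINHYDRIN_MOLAR_MASS = 178.14  # g/mol

def molarity(mass_g: float, volume_ml: float) -> float:
    """Concentration in mol/L for a given mass dissolved in a given volume."""
    return (mass_g / NINHYDRIN_MOLAR_MASS) / (volume_ml / 1000.0)

print(f"solution 1: {molarity(0.0175, 5.0) * 1000:.1f} mM")  # about 19.6 mM
print(f"solution 2: {molarity(0.0087, 5.0) * 1000:.1f} mM")  # about 9.8 mM
```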
Procedia PDF Downloads 149
295 Mapping the State of the Art of European Companies Doing Social Business at the Base of the Economic Pyramid as an Advanced Form of Strategic Corporate Social Responsibility
Authors: Claudio Di Benedetto, Irene Bengo
Abstract:
The objective of the paper is to study how large European companies develop social business (SB) at the base of the economic pyramid (BoP). BoP markets are defined as the four billion people living with an annual income below $3,260 in local purchasing power. Although they are heterogeneous in terms of geographic range, they present some common characteristics: the presence of significant unmet (social) needs, a high level of informal economy and the so-called ‘poverty penalty’. As a result, most people living at the BoP are excluded from the value created by the global market economy. But it is worth noting that the BoP population, with an aggregate purchasing power of around $5 trillion a year, represents a huge opportunity for companies that want to enhance their long-term profitability perspective. We suggest that in this context, the development of SB is, for companies, an innovative and promising way to satisfy unmet social needs and to experience new forms of value creation. Indeed, SB can be considered a strategic model to develop CSR programs that fully integrate the social dimension into the business to create economic and social value simultaneously. Although many studies on social business have been conducted in the literature, only a few have explicitly analyzed the phenomenon from a company perspective, and the companies’ role in the development of such initiatives remains understudied, with fragmented results. To fill this gap the paper analyzes the key characteristics of the social business initiatives developed by European companies at the BoP. The study was performed analyzing 1475 European companies participating in the United Nations Global Compact, the world’s leading corporate social responsibility program. Through the analysis of the corporate websites the study identifies companies that actually do SB at the BoP. For the SB initiatives identified, information was collected according to a framework adapted from an established SB model. Preliminary results show that more than one hundred European companies have already implemented social businesses at the BoP, accounting for 6.5% of the total. This percentage increases to 15% if the focus is on companies with more than 10,440 employees. In terms of geographic distribution, 80% of companies doing SB at the BoP are located in western and southern Europe. The companies more active in promoting SB belong to the financial sector (20%), the energy sector (17%) and the food and beverage sector (12%). In terms of social needs addressed, almost 30% of the companies develop SB to provide access to energy and WASH, 25% of companies develop SB to reduce local unemployment or to promote local entrepreneurship, and 21% of companies develop SB to promote financial inclusion of the poor. In developing SB, companies implement different social business configurations, ranging from forms of outsourcing to internal development models. The study identifies seven main configurations through which companies develop social business, and each configuration presents distinguishing characteristics with respect to the involvement of the company in the management, the resources provided and the benefits achieved. By performing different analyses on the data collected, the paper provides detailed insights into how European companies develop SB at the BoP. Keywords: base of the economic pyramid, corporate social responsibility, social business, social enterprise
Procedia PDF Downloads 227
294 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction
Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz
Abstract:
Building construction in the US involves numerous wooden structures. Wood is routinely used in walls, framing floors, framing stairs, and making landings in building construction. Cross-laminated timbers are currently being used as construction materials for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of the most common occupational exposures for them. Wood dust is a complex substance composed of cellulose, polyoses and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and nonallergic respiratory effects, and cancers. The amount and size of particles released as wood dust differ according to the operations performed on wood. For example, shattering of wood during sanding operations produces finer particles than does chipping in sawing and milling industries. To our knowledge, how the shattering, cutting and sanding of wood and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding tasks consists mostly of large particles. Consequently, little attention has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study by a novel, newly developed nanoparticle monitor and conventional particle counters. This study was conducted in a large new building construction site in southern Georgia primarily during the framing of wooden side walls, inner partition walls, and landings. Exposure levels of nanoparticles (n = 10) were measured by a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four different distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m³) including PM2.5 and PM10 were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0 and 10 µm) particle counter at 15 m, 30 m, and 75 m distances in both upwind and downwind directions. Mass concentrations of PM2.5 and PM10 (µg/m³) were measured using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded. Wind velocity was measured by a hot wire anemometer. Concentration ranges of nanoparticles of 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:
293 Technological Challenges for First Responders in Civil Protection; the RESPOND-A Solution
Authors: Georgios Boustras, Cleo Varianou Mikellidou, Christos Argyropoulos
Abstract:
Summer 2021 was marked by a number of severe fires in the EU (Greece, Cyprus, France) as well as outside the EU (USA, Turkey, Israel). This series of dramatic events has stretched national civil protection systems and first responders in particular. Despite the introduction of National, Regional and International frameworks (e.g. rescEU), a number of challenges have arisen, not only related to climate change. RESPOND-A (funded by the European Commission under Horizon 2020, Contract Number 883371) introduces a unique five-tier project architecture for associating modern telecommunications technology with novel practices that help First Responders save lives, while safeguarding themselves, more effectively and efficiently. The introduced architecture includes Perception, Network, Processing, Comprehension, and User Interface layers, which can be flexibly elaborated to support multiple levels and types of customization, so the intended technologies and practices can adapt to any European Environment Agency (EEA)-type disaster scenario. During the preparation of the RESPOND-A proposal, some of our First Responder Partners expressed the need for an information management system that could boost existing emergency response tools, while some others envisioned a complete end-to-end network management system that would offer high Situational Awareness, Early Warning and Risk Mitigation capabilities. The intuition behind these needs and visions rests on the long-term experience of these Responders, as well as their smoldering worry that the evolving threat of climate change and the consequences of industrial accidents will become more frequent and severe. Three large-scale pilot studies are planned in order to illustrate the capabilities of the RESPOND-A system. The first pilot study will focus on the deployment and operation of all available technologies for continuous communications, enhanced Situational Awareness and improved health and safety conditions for First Responders, according to a big fire scenario in a Wildland Urban Interface (WUI) zone. An important issue will be examined during the second pilot study: the flow of information is severely affected during a crisis, both among the wider public and from the first responders to the public and vice versa. Call centers are flooded with requests, and communication is compromised or breaks down on many occasions, which in turn affects the effort to build a common operations picture for all first responders. At the same time, the information that reaches the operational centers from the public is scarce, especially in the aftermath of an incident. Understandably, if traffic is disrupted, there is no other way to observe but via aerial means in order to perform rapid area surveys. Results and work in progress will be presented in detail, and challenges in relation to civil protection will be discussed. Keywords: first responders, safety, civil protection, new technologies
Procedia PDF Downloads 143
292 Alternate Optical Coherence Tomography Technologies in Use for Corneal Diseases Diagnosis in Dogs and Cats
Authors: U. E. Mochalova, A. V. Demeneva, Shilkin A. G., J. Yu. Artiushina
Abstract:
Objective. In medical ophthalmology, OCT has been actively used in the last decade. It is a modern non-invasive method of high-precision hardware examination, which gives a detailed cross-sectional image of eye tissue structure with a high level of resolution and provides in vivo morphological information at the microscopic level about corneal tissue, structures of the anterior segment, the retina and the optic nerve. The purpose of this study was to explore the possibility of using OCT technology in complex ophthalmological examinations of dogs and cats, and to characterize the revealed pathological structural changes in corneal tissue in cats and dogs with some of the most common corneal diseases. Procedures. Optical coherence tomography of the cornea was performed in 112 animals: 68 dogs and 44 cats. In total, 224 eyes were examined. Pathologies of the organ of vision included: dystrophy and degeneration of the cornea, endothelial corneal dystrophy, dry eye syndrome, chronic superficial vascular keratitis, pigmented keratitis, corneal erosion, ulcerative stromal keratitis, corneal sequestration, chronic glaucoma, and also the postoperative period after keratoplasty. When performing OCT, we used certified medical devices: "Huvitz HOCT-1/1F", "Optovue iVue 80" and "SOCT Copernicus Revo (60)". Results. The results are presented of a clinical study on the use of optical coherence tomography (OCT) of the cornea in cats and dogs, performed by the authors of the article in the complex diagnosis of keratopathies of various origins: endothelial corneal dystrophy, pigmented keratitis, chronic keratoconjunctivitis, chronic herpetic keratitis, ulcerative keratitis, traumatic corneal damage, corneal sequestration in cats, and chronic keratitis complicating the course of glaucoma. The characteristics of the OCT scans are given for corneas of cats and dogs that do not have corneal pathologies. OCT scans of various corneal pathologies in dogs and cats with a description of the revealed pathological changes are presented. Of great clinical interest are the data obtained during OCT of the cornea of animals undergoing keratoplasty operations using various forms of grafts. Conclusions. OCT makes it possible to assess the thickness and pathological structural changes of the corneal surface epithelium, corneal stroma and Descemet's membrane. We can measure them, determine the exact localization, and record pathological changes. Clinical observation of the dynamics of the pathological process in the cornea using OCT makes it possible to evaluate the effectiveness of drug treatment. In case of negative dynamics of corneal disease, it is necessary to determine the indications for surgical treatment (to assess the thickness of the cornea, the localization of its thinning zones, to characterize the depth and area of pathological changes). According to the OCT of the cornea, it is possible to choose the optimal surgical treatment for the patient, the technique and depth of optically constructive surgery (penetrating or anterior lamellar keratoplasty), and to determine the depth and diameter of the planned microsurgical trepanation of corneal tissue, which will ensure good adaptation of the edges of the donor material. Keywords: optical coherence tomography, corneal sequestration, optical coherence tomography of the cornea, corneal transplantation, cat, dog
Procedia PDF Downloads 70
291 The Impact of Professional Development on Teachers’ Instructional Practice
Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier
Abstract:
Although studies of teacher professional development (PD) are prevalent, surprisingly most have only produced incremental shifts in teachers’ learning and their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD and why we often do not see greater changes in learning and practice. This paper is based on a mixed methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers’ mathematics knowledge, classroom practices, and their students’ knowledge in the domain of geometry through a group-randomized experimental design are considered. In this study, we examine a small group of teachers to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 from two states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered to teachers, beginning with the one-week institute (Summer 2016) and four days of PD throughout the academic year. The same facilitator-led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD embedded assessments, pre-post-post videotaped classroom observations, and student assessments. Additional data was collected from the case study teachers including additional videotaped classroom observations and interviews. Repeated measures ANOVA analyses were used to detect patterns of change in the treatment teachers’ content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found across the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, the LTG PD intervention and instruction that engaged students in mathematical practices both positively and significantly predicted greater student knowledge gains. Teacher knowledge was not a significant predictor. Twelve treatment teachers were self-selected to serve as case study teachers to provide additional videotapes in which they felt they were using something from the PD they learned and experienced. Project staff analyzed the videos, compared them to previous videos and interviewed the teachers regarding their uptake of the PD related to content knowledge, pedagogical knowledge and resources used.Keywords: teacher learning, professional development, pedagogical content knowledge, geometry
Procedia PDF Downloads 170
290 Evaluation of Herbal Extracts for Their Potential Application as Skin Prebiotics
Authors: Anja I. Petrov, Milica B. Veljković, Marija M. Ćorović, Ana D. Milivojević, Milica B. Simović, Katarina M. Banjanac, Dejan I. Bezbradica
Abstract:
One of the fundamental requirements for overall human well-being is a stable and balanced microbiome. Aside from the microorganisms that reside within the body, a large number of microorganisms, especially bacteria, inhabit the human skin in homeostasis with the host and constitute the skin microbiota. Even though the immune system of the skin is capable of distinguishing between commensal and potentially harmful transient bacteria, the cutaneous microbial balance can be disrupted under certain circumstances. In that case, a reduction in skin microbiota diversity, as well as changes in metabolic activity, results in dermal infections and inflammation. Probiotics and prebiotics have the potential to play a significant role in the treatment of these skin disorders. The most common resident bacterium found on the skin, Staphylococcus epidermidis, can act as a potential skin probiotic, contributing to the protection of healthy skin from colonization by pathogens such as Staphylococcus aureus, which is related to atopic dermatitis exacerbation. However, as it is difficult to meet regulations for probiotics in cosmetic products, another therapeutic approach could be topical prebiotic supplementation of the skin microbiota. In recent research, polyphenols have attracted scientists' interest as biomolecules with possible prebiotic effects on the skin microbiota. This research aimed to determine how herbal extracts rich in different polyphenolic compounds (lemon balm, St. John's wort, coltsfoot, pine needle, and yarrow) affected the growth of S. epidermidis and S. aureus. The first part of the study involved screening the plants to determine whether they could be regarded as probable skin prebiotic candidates. The effect of each plant on bacterial growth was examined by supplementing the nutrient medium with its extract and comparing it with control samples (without extract). The results obtained after 24 h of incubation showed that all tested extracts influenced the growth of the examined bacteria to some extent. Since lemon balm and St. John's wort extracts displayed bactericidal activity against S. epidermidis, whereas coltsfoot inhibited both bacteria equally, they were not explored further. On the other hand, pine needle and yarrow extracts led to an increase in the S. epidermidis/S. aureus ratio, making them prospective candidates for use as skin prebiotics. By examining the prebiotic effect of the two extracts at different concentrations, it was revealed that, in the case of yarrow, 0.1% of extract dry matter in the fermentation medium was optimal, while for the pine needle extract a concentration of 0.05% was preferred, since it selectively stimulated S. epidermidis growth and inhibited S. aureus proliferation. Additionally, the total polyphenol and flavonoid contents of the two extracts were determined, revealing different concentrations and polyphenol profiles. Since yarrow and pine extracts affected the growth of skin bacteria in a dose-dependent manner, by carefully selecting the quantities of these extracts, and thus the polyphenol content, it is possible to achieve desirable alterations of skin microbiota composition, which may be suitable for the treatment of atopic dermatitis.
Keywords: herbal extracts, polyphenols, skin microbiota, skin prebiotics
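As a rough illustration of the S. epidermidis/S. aureus ratio used above to flag prospective prebiotics, the following sketch computes that ratio for several extract conditions relative to an extract-free control. All CFU counts, condition labels, and concentrations are hypothetical placeholders, not measurements from this study.

```python
# Sketch: computing the S. epidermidis / S. aureus growth ratio after 24 h
# incubation, relative to an extract-free control. CFU counts are illustrative
# placeholders, not measurements from the study.
import pandas as pd

data = pd.DataFrame({
    "condition":     ["control", "yarrow 0.1%", "yarrow 0.05%", "pine 0.1%", "pine 0.05%"],
    "s_epidermidis": [2.0e7, 5.5e7, 3.1e7, 2.4e7, 4.8e7],   # CFU/mL, hypothetical
    "s_aureus":      [1.8e7, 1.1e7, 1.5e7, 1.6e7, 0.7e7],   # CFU/mL, hypothetical
})

data["ratio"] = data["s_epidermidis"] / data["s_aureus"]
control_ratio = data.loc[data["condition"] == "control", "ratio"].iloc[0]

# Values above 1 indicate a shift toward the resident species compared with
# the unsupplemented medium.
data["ratio_vs_control"] = data["ratio"] / control_ratio
print(data[["condition", "ratio", "ratio_vs_control"]])
```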
Procedia PDF Downloads 175
289 Biocompatible Hydrogel Materials Containing Cytostatics for Cancer Treatment
Authors: S. Kudlacik-Kramarczyk, M. Kedzierska, B. Tyliszczak
Abstract:
Recently, the continuous development of medicine and related sciences has been observed. Particular emphasis is directed at the development of biomaterials, i.e., non-toxic, biocompatible and biodegradable materials that may improve the effectiveness of treatment as well as the comfort of patients. This is particularly important in the case of cancer treatment. Currently, there are many methods of cancer treatment based primarily on chemotherapy and the surgical removal of the tumor, but it is worth noting that these therapies also cause many side effects. Among women, the most common cancer is breast cancer. It may be completely cured, but the consequence of treatment is partial or complete mastectomy and radiation therapy, which results in severe skin burns. The skin of the patient after radiation therapy is badly burned and therefore requires intensive care and a high frequency of dressing changes. A traditional dressing adheres to the burn wounds and does not absorb an adequate amount of exudate, so the patient is forced to change the dressing every 2 hours. Therefore, the main purpose was to develop an innovative combination of a dressing material with drug carriers that may be used in anti-cancer therapy. The innovation of this solution is the combination of these two products into one system, i.e., a transdermal system with the possibility of controlled release of the drug (a cytostatic). Besides, the possibility of modifying the hydrogel matrix with aloe vera juice provides this material with new features favorable from the point of view of the healing of burn wounds resulting from radiation therapy. In this study, hydrogel materials containing protein spheres with the active substance have been obtained as a result of a photopolymerization process. The reaction mixture, consisting of protein (albumin) spheres incorporated with the cytostatic, chitosan, an adequate crosslinking agent and a photoinitiator, was subjected to UV radiation for 2 minutes. The prepared materials were subjected to numerous studies, including an analysis of cytotoxicity using murine fibroblasts L929. The analysis was conducted based on the mitochondrial activity test (MTT reduction assay), which involves determining the number of cells characterized by proper metabolism. Hydrogel materials obtained using different amounts of crosslinking agent were subjected to the cytotoxicity analysis. According to the standards, a tested material is defined as cytotoxic when the viability of cells after 24 h of incubation with this material is lower than 70%. In this research, hydrogel polymer materials containing protein spheres incorporated with the active substance, i.e., a cytostatic, have been developed. Such a dressing may support the treatment of cancer due to the content of the anti-cancer drug (cytostatic), and may also provide a soothing effect on the healing of the burn wounds resulting from radiation therapy due to the content of aloe vera juice in the hydrogel matrix. Based on the conducted cytotoxicity studies, it may be concluded that the obtained materials do not adversely affect the tested cell lines; therefore, they can be subjected to more advanced analyses.
Keywords: hydrogel polymers, cytostatics, drug carriers, cytotoxicity
Procedia PDF Downloads 133
288 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico
Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos
Abstract:
Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses. The non-linear dynamic approaches use step-by-step numerical solutions for assessing the capacity, with the inconvenience of long computing times. In this study, a non-linear static analysis (‘pushover analysis’) was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: two end spans of 22 m and a central span of 32 m. The deck is 14 m wide and the concrete slab is 18 cm thick. The substructure is built with frames of five piers with hollow box-shaped sections; each pier is 7.05 m high and 1.20 m in diameter. The numerical model was created using commercial software considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin-shell (plate bending and stretching) finite elements. Moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato, from which the displacements produced in the bridge were determined. Finally, pushover analysis was applied through displacement control of the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformation of the piers under a critical earthquake in this zone is almost imperceptible and that, compared with the displacement capacity, the geometry and reinforcement demanded by current design standards are excessive. According to the analysis, the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence, it is proposed to reduce these frames from five piers to three, maintaining the same geometrical characteristics, the same reinforcement in each pier, and the same mechanical properties of the materials (concrete and reinforcing steel). Once a pushover analysis was performed considering this configuration, it was concluded that the bridge would continue to have a ‘correct’ seismic behavior, at least for the 19 accelerograms considered in this study. In this way, material, construction, time and labor costs would be reduced for this case study.
Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis
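A minimal sketch of the capacity-curve idea behind a pushover analysis is given below for a single cantilever pier idealized with an elastic-perfectly-plastic hinge at its base. The pier height is taken from the abstract, but the flexural stiffness EI and yield moment My are assumed placeholder values, not the properties obtained from the project's moment-curvature analysis.

```python
# Sketch: capacity curve (base shear vs. top displacement) for one cantilever
# pier idealized with an elastic-perfectly-plastic hinge at its base.
# EI and My are assumed placeholders, not the bridge's actual design values.
import numpy as np

H = 7.05          # pier height, m (from the abstract)
EI = 2.5e6        # flexural stiffness, kN*m^2 (assumed)
My = 4.0e3        # yield moment of the base hinge, kN*m (assumed)

k_elastic = 3.0 * EI / H**3      # lateral stiffness of a cantilever, kN/m
V_yield = My / H                 # base shear at hinge yielding, kN
d_yield = V_yield / k_elastic    # top displacement at yield, m

displacements = np.linspace(0.0, 4.0 * d_yield, 50)   # displacement control
base_shear = np.minimum(k_elastic * displacements, V_yield)

for d, v in zip(displacements[::10], base_shear[::10]):
    print(f"top displacement = {d:6.4f} m -> base shear = {v:8.1f} kN")
print(f"yield point: d = {d_yield:.4f} m, V = {V_yield:.1f} kN")
```

In the actual study, the hinge behaviour comes from moment-curvature analysis of the confined, reinforced sections and the model includes the full frames and deck, so this sketch only conveys the displacement-controlled logic of building a capacity curve.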
Procedia PDF Downloads 153
287 Is Brain Death Reversal Possible in Near Future: Intrathecal Sodium Nitroprusside (SNP) Superfusion in Brain Death Patients=The 10,000 Fold Effect
Authors: Vinod Kumar Tewari, Mazhar Husain, Hari Kishan Das Gupta
Abstract:
Background: Primary or secondary brain death is accompanied not only by tissue disruption but also by vasospasm of the perforators, which further exaggerates the anoxic damage in the form of neuropraxia. In normal conditions, the excitatory impulse propagates as anterograde neurotransmission (ANT), and at the level of the synapse, glutamate activates NMDA receptors on the postsynaptic membrane. Nitric oxide (NO) is produced by nitric oxide synthetase (NOS) in the postsynaptic dendrite or cell body and travels backwards across the chemical synapse to bind to the axon terminal of the presynaptic neuron, regulating ANT; this process is called retrograde neurotransmission (RNT). Thus the primary function of NO is RNT, and the purpose of RNT is regulation of chemical neurotransmission at the synapse. For this reason, RNT allows neural circuits to create feedback loops. Haem is the ligand-binding site of the NO receptor (sGC) at the presynaptic membrane, and haem exhibits a more than 10,000-fold greater affinity for NO than for oxygen (the 10,000-fold effect). In pathological conditions, ANT and normal synaptic activity, including RNT, are absent. NO donors such as sodium nitroprusside (SNP) release NO by activating NOS at the level of the postsynaptic area. NO then travels backwards across the chemical synapse to bind to the haem of the NO receptor at the axon terminal of the presynaptic neuron, as in the normal condition. NO thus acts as an impulse generator at the presynaptic membrane, bypassing the normal ANT. In addition, the arteriolar perforators carry nitric oxide synthetase (NOS) on the adventitial side (outer border), on which sodium nitroprusside acts, causing release of nitric oxide, which vasodilates the perforators, producing a gush of blood into the brain tissue and reversal of brain death. Objective: In brain death cases we usually think only of transplantation, but this pilot study attempts to reverse some criteria of brain death by vasodilating the arteriolar perforators. The aim was to study the effect of intrathecal sodium nitroprusside (IT SNP) in cases of brain death with respect to: 1. retrograde transmission, assessed by the hyperacute timing of reversal, and 2. the arteriolar perforator vasodilatation caused by NO and the maintenance of the reversal of brain death. Methods: A 35-year-old male, who became brain dead after a head injury and had shown no signs of improvement after every maneuver for 6 hours, received a single SNP superfusion via the transoptic canal route to the quadrigeminal cistern and, by cisternal puncture, to the fourth ventricle. Results: He showed spontaneous respiration (7 bouts), with TCD studies showing the start of pulsations in various branches of the common carotid arteries. Conclusions: In the future, SNP could be given via the transoptic canal route and into the fourth ventricle before the body is declared dead or released for transplantation; in a broader sense, it may become possible in the near future to revert brain death, or the current criteria may have to be modified.
Keywords: brain death, intrathecal sodium nitroprusside, TCD studies, perforators, vasodilatations, retrograde transmission, 10,000 fold effect
Procedia PDF Downloads 405
286 Impact of Informal Institutions on Development: Analyzing the Socio-Legal Equilibrium of Relational Contracts in India
Authors: Shubhangi Roy
Abstract:
Relational contracts (informal understandings not enforceable by law) are a common feature of most economies. However, their dominance is higher in developing countries, and such informality of economic sectors is often correlated with lower economic growth. The aim of this paper is to investigate whether informal arrangements, i.e., relational contracts, are a cause or a symptom of lower levels of economic and/or institutional development. The methodology followed involves an initial survey of 150 test subjects in Northern India. The subjects are all members of occupations in which they transact frequently, ensuring uniformity in transaction volume. However, the subjects are from varied socio-economic backgrounds to ensure sufficient variance in transaction values, allowing us to understand the relationship between the amount of money involved and the method of transaction used, if any. The questions asked are quantitative and qualitative, with an aim to observe both behavior and the motivation behind such behavior. An overarching similarity observed across all subjects' responses is that, in an economy like India with pervasive corruption and delayed litigation, economic participants have created alternative social sanctions to deal with non-performers. In a society that functions predominantly on caste, class and gender classifications, these sanctions could, in fact, be more cumbersome for a potential rule-breaker than the legal ramifications. Informality is therefore a symptom of weak formal regulatory enforcement and dispute settlement mechanisms. Additionally, the study bifurcates such informal arrangements into two separate systems: a) those that exist in addition to and augment a legal framework, creating an efficient socio-legal equilibrium, and b) those in conflict with the legal system in place. This categorization is an important step in regulating informal arrangements. Instead of considering the entire gamut of such arrangements as counter-development, it helps decision-makers understand when to dismantle (the latter) and when to pivot around (the former) existing informal systems. The paper hypothesizes that social arrangements supporting formal legal frameworks allow for cheaper enforcement of regulations, with a lower enforcement-cost burden on the state mechanism. On the other hand, norms that contradict legal rules will undermine the formal framework. Law infringement, in the presence of these norms, will have no impact on the reputation of the business or individual beyond the punishment imposed under the law. This is especially exacerbated in the Indian legal system, where enforcement of penalties for non-performance of contracts is low. In such a situation, the social norm will be adhered to more strictly by individuals than the legal norms, which greatly undermines the role of regulations. The paper concludes with recommendations that allow policy-makers and legal systems to encourage the former category of informal arrangements while discouraging norms that undermine legitimate policy objectives. Through this investigation, we will be able to expand our understanding of tools of market development beyond regulations, allowing academics and policymakers to harness social norms for less disruptive and more lasting growth.
Keywords: distribution of income, emerging economies, relational contracts, sample survey, social norms
Procedia PDF Downloads 166
285 Innocent Victims and Immoral Women: Sex Workers in the Philippines through the Lens of Mainstream Media
Authors: Sharmila Parmanand
Abstract:
This paper examines dominant media representations of prostitution in the Philippines and interrogates sex workers’ interactions with the media establishment. This analysis of how sex workers are constituted in media, often as both innocent victims and immoral actors, contributes to an understanding of public discourse on sex work in the Philippines, where decriminalisation has recently been proposed and sex workers are currently classified as potential victims under anti-trafficking laws but also as criminals under the penal code. The first part is an analysis of media coverage of two prominent themes on prostitution: first, raid and rescue operations conducted by law enforcement; and second, prostitution on military bases and tourism hotspots. As a result of pressure from activists and international donors, these two themes often define the policy conversations on sex work in the Philippines. The discourses in written and televised news reports and documentaries from established local and international media sources that address these themes are explored through content analysis. Conclusions are drawn based on specific terms commonly used to refer to sex workers, how sex workers are seen as performing their cultural roles as mothers and wives, how sex work is depicted, associations made between sex work and public health, representations of clients and managers and ‘rescuers’ such as the police, anti-trafficking organisations, and faith-based groups, and which actors are presumed to be issue experts. Images of how prostitution is used as a metaphor for relations between the Philippines and foreign nations are also deconstructed, along with common tropes about developing world female subjects. In general, sex workers are simultaneously portrayed as bad mothers who endanger their family’s morality but also as long-suffering victims who endure exploitation for the sake of their children. They are also depicted as unclean, drug-addicted threats to public health. Their managers and clients are portrayed as cold, abusive, and sometimes violent, and their rescuers as moral and altruistic agents who are essential for sex workers’ rehabilitation and restoration as virtuous citizens. The second part explores sex workers’ own perceptions of their interactions with media, through interviews with members of the Philippine Sex Workers Collective, a loose organisation of sex workers around the Philippines. They reveal that they are often excluded by media practitioners and that they do not feel that they have space for meaningful self-revelation about their work when they do engage with journalists, who seem to have an overt agenda of depicting them as either victims or women of loose morals. In their assessment, media narratives do not necessarily reflect their lived experiences, and in some cases, coverage of rescues and raid operations endangers their privacy and instrumentalises their suffering. Media representations of sex workers may produce subject positions such as ‘victims’ or ‘criminals’ and legitimize specific interventions while foreclosing other ways of thinking. Further, in light of media’s power to reflect and shape public consciousness, it is a valuable academic and political project to examine whether sex workers are able to assert agency in determining how they are represented.
Keywords: discourse analysis, news media, sex work, trafficking
Procedia PDF Downloads 396
284 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization
Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo
Abstract:
Population growth and industrial development imply an increase in energy demands and in the problems caused by emissions of greenhouse gases, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical water gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through their treatment at conditions above those that define the critical point of water (a temperature of 374°C and a pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The variety of gases and energy forms that can be produced, depending on the kind of material gasified and the type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as with power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for obtaining streams rich in H₂ from oily wastes, which represent a major problem for both the environment and human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed, with particular reference to the different feedstocks that can be used as raw material, the different reactors, and energy recovery systems. For this purpose, a review of the current status of SCWG technologies has been carried out by means of different classifications based on key features such as the feed treated or the type of reactor and other apparatus. This analysis makes it possible to improve technology efficiency through the study of model calculations and their comparison with experimental data, the establishment of kinetics for the chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and process cycle, which can be conveniently adopted according to waste characteristics. Regarding the future of the technology, the design of SCWG plants is still to be optimized to include energy recovery systems, in order to reduce the equipment and operating costs derived from the high temperature and pressure conditions necessary to bring water to the supercritical state, as well as to find solutions to corrosion and clogging of reactor components.
Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy
Procedia PDF Downloads 148
283 Evaluation of Sustained Improvement in Trauma Education Approaches for the College of Emergency Nursing Australasia Trauma Nursing Program
Authors: Pauline Calleja, Brooke Alexander
Abstract:
In 2010 the College of Emergency Nursing Australasia (CENA) undertook sole administration of the Trauma Nursing Program (TNP) across Australia. The original TNP was developed from recommendations by the Review of Trauma and Emergency Services-Victoria. While participant and faculty feedback about the program was positive, issues were identified that are common to industry training programs in Australia. These issues included didactic approaches, with many lectures and little interaction or activity for participants. Participants were not necessarily encouraged to undertake deep learning, due to the teaching and learning principles underpinning the course, and thus described having to learn by rote and gaining only a surface understanding of principles that were not always applied to their working context. In Australia, a trauma or emergency nurse may work in variable contexts that impact on practice, especially where resources influence the scope and capacity of hospitals to provide trauma care. In 2011, a program review was undertaken, resulting in major changes to the curriculum, teaching, learning and assessment approaches. The aim was to improve learning, with a greater emphasis on pre-program preparation for participants, the learning environment, and clinically applicable, contextualized outcomes for participants. Previously, if participants wished to undertake assessment, they were given a take-home examination. That assessment had poor uptake and return, and provided no rigor since it was not invigilated. A new assessment structure was enacted, with an invigilated examination during course hours. These changes were implemented in early 2012 with great improvement in both faculty and participant satisfaction. This presentation reports on a comparison of participant evaluations collected from courses post-implementation in 2012 and in 2015, to evaluate whether the positive changes were sustained. Methods: Descriptive statistics were applied in analyzing the evaluations. Since all questions had more than 20% of cells with a count of <5, Fisher’s Exact Test was used to identify significance (p = <0.05) between groups. Results: A total of fourteen group evaluations were included in this analysis, seven CENA TNP groups from 2012 and seven from 2015 (randomly chosen). A total of 173 participant evaluations were collated (n = 81 from 2012 and 92 from 2015). All course evaluations were anonymous, and nine of the original 14 questions were applicable for this evaluation. All questions were rated by participants on a five-point Likert scale. While all items showed improvement from 2012 to 2015, significant improvement was noted in two items: the content being delivered in a way that met participant learning needs, and satisfaction with the length and pace of the program. Evaluation of written comments supports these results. Discussion: The aim of redeveloping the CENA TNP was to improve learning and satisfaction for participants. These results demonstrate that the initial improvements of 2012 were maintained and, in two essential areas, significantly improved upon. Changes that increased participant engagement, support and contextualization of course materials were essential for the CENA TNP’s evolution.
Keywords: emergency nursing education, industry training programs, teaching and learning, trauma education
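A short sketch of the significance testing described in the Methods is given below. SciPy's fisher_exact handles 2×2 tables, so the five-point Likert responses for one item are collapsed into "agree or strongly agree" versus "other"; the counts shown are hypothetical, not the CENA TNP evaluation data.

```python
# Sketch: Fisher's exact test on one evaluation item, comparing 2012 vs. 2015
# cohorts. The 5-point Likert responses are collapsed to a 2x2 table because
# scipy.stats.fisher_exact works on 2x2 tables.
# The counts below are illustrative, not the study's data.
from scipy.stats import fisher_exact

#        agree/strongly agree   other
table = [[60, 21],    # 2012 cohort (n = 81)
         [82, 10]]    # 2015 cohort (n = 92)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("significant difference between cohorts on this item")
```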
Procedia PDF Downloads 272
282 Neonatology Clinical Routine in Cats and Dogs: Cases, Main Conditions and Mortality
Authors: Maria L. G. Lourenço, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, João C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado
Abstract:
The neonatal care of cats and dogs represents a challenge to veterinarians due to the small size of the newborns and their physiological particularities. In addition, many veterinary medicine colleges around the world do not include neonatology in the curriculum, which makes it less likely for veterinarians to have basic knowledge regarding neonatal care and worsens the clinical care these patients receive. Therefore, lack of assistance and negligence have become frequent in the field, which contributes to the high mortality rates. This study aims at describing the cases and main conditions of the neonatology clinical routine in cats and dogs, highlighting the importance of specialized care in this field of veterinary medicine. The study included 808 neonates admitted to the São Paulo State University (UNESP) Veterinary Hospital, Botucatu, São Paulo, Brazil, between January 2018 and November 2019. Of these, 87.3% (705/808) were dogs and 12.7% (103/808) were cats. Among the neonates admitted, 57.3% (463/808) came from emergency c-sections due to dystocia, 8.7% (71/808) came from vaginal deliveries with obstetric maneuvers due to dystocia, and 34% (274/808) were admitted for clinical care due to neonatal conditions. Among the neonates that came from emergency c-sections and vaginal deliveries, 47.3% (253/534) were born in respiratory distress due to severe hypoxia or persistent apnea and required resuscitation procedures, such as the Jen Chung acupuncture point (VG26), oxygen therapy with a mask, pulmonary expansion with a resuscitator, cardiac massage and administration of emergency medication, such as epinephrine. In the neonatal clinical care cases, the main conditions and alterations observed in the newborns were omphalophlebitis, toxic milk syndrome, neonatal conjunctivitis, swimmer puppy syndrome, neonatal hemorrhagic syndrome, pneumonia, trauma, low weight at birth, prematurity, congenital malformations (cleft palate, cleft lip, hydrocephaly, anasarca, vascular anomalies of the heart, anal atresia, gastroschisis, omphalocele, among others), neonatal sepsis and other local and systemic bacterial infections, viral infections (feline respiratory complex, parvovirus, canine distemper, canine infectious tracheobronchitis), parasitic infections (Toxocara spp., Ancylostoma spp., Strongyloides spp., Cystoisospora spp., Babesia spp. and Giardia spp.) and fungal infections (dermatophytosis by Microsporum canis). The most common clinical presentation observed was the neonatal triad (hypothermia, hypoglycemia and dehydration), affecting 74.6% (603/808) of the patients. The mortality rate among the neonates was 10.5% (85/808). Being knowledgeable about neonatology is essential for veterinarians to provide adequate care for these patients in the clinical routine. Adding neonatology to college curricula, improving the dissemination of information on the subject, and providing annual training in neonatology for veterinarians and staff are important steps to improve immediate care and reduce mortality rates.
Keywords: neonatal care, puppies, neonatal conditions
Procedia PDF Downloads 228
281 Population Diversity Studies in Dendrocalamus strictus Roxb. (Nees.) Through Morphological Parameters
Authors: Anugrah Tripathi, H. S. Ginwal, Charul Kainthola
Abstract:
Bamboos are considered valuable resources with the potential to meet current economic, environmental and social needs, and bamboo has played a key role in human livelihoods since ancient times. Distributed across diverse areas of the globe, bamboo is an important natural resource for hundreds of millions of people; in some Asian countries and in the northeastern part of India, it forms the basis of life in many respects. India possesses the largest bamboo-bearing area in the world and great species richness, but this rich genetic resource and its diversity have dwindled in natural forests due to forest fire, overexploitation, lack of proper management policies, and gregarious flowering behavior. Bamboos, well known for their peculiar, extraordinary morphology, show variation at many scales. Among the various bamboo species, Dendrocalamus strictus, a deciduous, solid-culmed and densely tufted bamboo, is the most abundant bamboo resource in India. The species can thrive across wide geographical and climatic gradients and therefore exhibits significant variation in numerous morphological features among populations of different origins. Morphological parameters are the front-line criteria for the selection and improvement of any forestry species. A study of the diversity in eight important morphological characters of D. strictus was carried out, covering 16 populations from widely separated geographical locations in India, following INBAR standards. Among the 16 populations studied, three populations, viz. DS06 (Gaya, Bihar), DS15 (Mirzapur, Uttar Pradesh) and DS16 (Bhogpur, Pinjore, Haryana), were found to be superior, with higher mean values for the parametric characters (clump height, number of culms per clump, clump circumference, internode diameter and internode length) and higher sums of ranks for the non-parametric characters (straightness, disease and pest incidence, and branching pattern). All of these parameters showed ample variation and significant differences among the studied populations. Variation in morphological characters is very common in a species with a wide distribution and is usually evident at various levels, viz. between and within populations; such characters are of paramount importance for growth, biomass, and quick production gains. The present study also provides a basis for selecting populations on these morphological parameters and offers an overview of the best-performing populations for growth and biomass accumulation. Some of the studied parameters also suggest ways to standardize the selection and sustainable harvesting of clumps through simpler silvicultural systems, so that they can be properly managed in homestead gardens for community utilization as well as by commercial growers to meet the requirements of industries and other stakeholders.
Keywords: Dendrocalamus strictus, homestead garden, gregarious flowering, stakeholders, INBAR
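A hedged sketch of the rank-based shortlisting described above: trait means are ranked per character and the ranks summed per population. The trait values below are illustrative placeholders, not the field measurements, and the ranking direction is simplified so that a smaller rank sum indicates better overall performance.

```python
# Sketch: shortlisting superior D. strictus populations by ranking parametric
# trait means and summing the ranks per population.
# Trait values are illustrative placeholders, not the study's measurements.
import pandas as pd

traits = pd.DataFrame({
    "population":       ["DS05", "DS06", "DS10", "DS15", "DS16"],
    "clump_height_m":   [9.2, 11.8, 8.7, 11.1, 11.5],
    "culms_per_clump":  [34, 52, 29, 47, 50],
    "internode_len_cm": [30.1, 36.4, 28.5, 35.2, 34.8],
}).set_index("population")

# Rank each trait (1 = largest mean) and sum ranks per population;
# smaller rank sums indicate better overall performance in this convention.
ranks = traits.rank(ascending=False)
ranks["rank_sum"] = ranks.sum(axis=1)
print(ranks.sort_values("rank_sum"))
```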
Procedia PDF Downloads 76
280 Manufacturing the Authenticity of Dokkaebi’s Visual Representation in Tourist Marketing
Authors: Mikyung Bak
Abstract:
The dokkaebi, a beloved icon of Korean culture, is represented as an elf, goblin, monster, dwarf, or similar creature in different media, such as animated shows, comics, soap operas, and movies. It is often described as a mythical creature with a horn or horns and long teeth, wearing tiger-skin pants or a grass skirt, and carrying a magic stick. Many Korean researchers agree on the similarity of the image of the Korean dokkaebi to that of the Japanese oni, a view that is regarded as negative from an anti-colonial or nationalistic standpoint; they cite such similarity between the two mythical creatures as evidence that Japanese colonialism persists in Korea. The debate on the originality of dokkaebi’s visual representation is therefore an issue that must be addressed urgently. This research demonstrates through a diagram the plurality of interpretations of dokkaebi’s visual representations in what are considered ‘authentic’ images of dokkaebi in Korean art and culture. The diagram presents the opinions of four major groups in the debate, namely, scholars of Korean literature and folklore, art historians, authors, and artists. It also shows the creation of new dokkaebi visual representations in popular media, including those influenced by the debate. The diagram further shows that dokkaebi representations have varied widely, including the typical persons or invisible characters found in Korean literature, original Korean folk characters in traditional art, and even universal spirit characters. They are also visually represented by completely new creatures as well as oni-based mythical beings and the actual oni itself. The earlier dokkaebi representations were driven by the creation of a national ideology or national cultural paradigm and were thus more uniform and protected. In contrast, the more recent representations are influenced by the Korean industrial strategy of ‘cultural economics,’ which is concerned with the international rather than the domestic market. This recent Korean cultural strategy emphasizes diversity and commonality with global culture rather than originality and locality, and it employs traditional cultural resources to construct a global image. Consequently, dokkaebi’s recent representations have become more common and diverse, incorporating even the oni’s characteristics, which has rendered the grounds of the debate irrelevant. The dokkaebi has recently been used for tourist marketing purposes, particularly in revitalizing interest in regions considered the cradle of various traditional dokkaebi tales. These campaigns include the Jeju-do Dokkaebi Park, Koksung Dokkaebi Land, and the Taebaek and Sokri-san Dokkaebi Festivals, and in tourist marketing almost all dokkaebi characters are identical to the Japanese oni. However, the pursuit of dokkaebi’s authentic visual representation is less interesting and fruitful than an appreciation of the entire spectrum of dokkaebi images that have been created. Thus, scholars and stakeholders must not exclude the variety of possibilities within the visual culture; the same applies to traditional art and craft. This study aims to contribute to a new visualization of the dokkaebi that embraces the possibilities of both folk craft and art, which continue to be uncovered by diverse and careful researchers in a still-developing field.
Keywords: Dokkaebi, post-colonial period, representation, tourist marketing
Procedia PDF Downloads 279
279 Understanding Governance of Biodiversity-Supporting and Edible Landscapes Using Network Analysis in a Fast Urbanising City of South India
Authors: M. Soubadra Devy, Savitha Swamy, Chethana V. Casiker
Abstract:
Sustainable smart cities are emerging as an important concept in response to the exponential rise in the world’s urbanizing population. Whereas earlier only technical, economic and governance-based solutions were considered, more and more layers are being added in recent times. With the prefix of ‘sustainability’, solutions which help in the judicious use of resources without negatively impacting the environment have become critical. We present a case study of Bangalore, a city which has transformed from being a garden city and pensioners’ paradise to being an IT city with a huge, young population from different regions and diverse cultural backgrounds. This has had a big impact on the green spaces in the city and the biodiversity that they support, as well as on farming and gardening practices. Edible landscapes comprising farm lands, home gardens and neighbourhood parks (NPs henceforth) were examined. Land prices in areas with NPs were higher than in those without, indicating an appreciation of their aesthetic value. NPs were part of old and new residential areas, largely managed by the municipality, and comprised manicured gardens which were similar in vegetation structure and composition. Results showed that NPs occurring at higher density supported reasonable levels of biodiversity; where NPs occurred at lower density, the presence of a larger green space such as a heritage park or botanical garden enhanced the biodiversity of these parks. In contrast, farm lands and home gardens, which were common within the city, are being lost at an unprecedented scale to developmental projects. However, there is also the emergence of a ‘neo-culture’ of home gardening that promotes ‘locovory’, or consumption of locally grown food, as a means to sustainable living and a reduced carbon footprint. This movement overcomes the space constraint by using vertical and terrace gardening techniques. Food that is grown within cities comprises vegetables and fruits which are largely pollinator-dependent; this goes hand in hand with our landscape-level study, which has shown that cities support pollinator diversity. Maintaining and improving these man-made ecosystems requires analysing the functioning and characteristics of the existing structures of governance. Social network analysis was applied to NPs to examine the actors involved and the ties between them. The management structures around NPs, the gaps, and the means to strengthen the networks from the current state to a near-ideal state were identified for enhanced services. Learnings from the NPs were used to build a hypothetical structure for the integrated governance of NPs and edible landscapes to enhance ecosystem services such as biodiversity support, food production, and aesthetic value, which also contribute to the sustainability axis of smart cities.
Keywords: biodiversity support, ecosystem services, edible green spaces, neighbourhood parks, sustainable smart city
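A minimal sketch of how social network analysis could be applied to NP governance is shown below using networkx. The actors and ties are hypothetical stand-ins for the surveyed stakeholder relationships; centrality scores of this kind are one way to locate brokers and gaps in a governance network.

```python
# Sketch: a social-network view of neighbourhood-park (NP) governance.
# Actors and ties are hypothetical stand-ins; the study's real network was
# built from the relationships observed among Bangalore stakeholders.
import networkx as nx

G = nx.Graph()
ties = [
    ("municipality", "park maintenance contractor"),
    ("municipality", "residents welfare association"),
    ("residents welfare association", "park users"),
    ("residents welfare association", "gardeners"),
    ("horticulture department", "municipality"),
    ("NGO", "residents welfare association"),
    ("NGO", "horticulture department"),
]
G.add_edges_from(ties)

# Betweenness centrality highlights brokers whose removal would fragment the
# network; low-degree actors point to gaps where new ties could be added.
betweenness = nx.betweenness_centrality(G)
degree = nx.degree_centrality(G)
for actor in G.nodes:
    print(f"{actor:32s} degree={degree[actor]:.2f} betweenness={betweenness[actor]:.2f}")
```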
Procedia PDF Downloads 139
278 A Randomized Active Controlled Clinical Trial to Assess Clinical Efficacy and Safety of Tapentadol Nasal Spray in Moderate to Severe Post-Surgical Pain
Authors: Kamal Tolani, Sandeep Kumar, Rohit Luthra, Ankit Dadhania, Krishnaprasad K., Ram Gupta, Deepa Joshi
Abstract:
Background: Post-operative analgesia remains a clinical challenge, with central and peripheral sensitization playing a pivotal role in treatment-related complications and impaired quality of life. Centrally acting opioids offer a poor risk-benefit profile, with an increased intensity of gastrointestinal or central side effects and a slow onset of clinical analgesia. The objective of this study was to assess the clinical feasibility of induction and maintenance therapy with Tapentadol Nasal Spray (NS) in moderate to severe acute post-operative pain. Methods: This was a Phase III, randomized, active-controlled, non-inferiority clinical trial involving 294 cases who had undergone surgical procedures under general or regional anesthesia. Post-surgery, patients were randomized to receive either Tapentadol NS 45 mg or Tramadol IV, given as a 100 mg bolus with subsequent 50 mg or 100 mg doses administered over 2-3 minutes. The NS was administered every 4-6 hours. At the end of 24 hrs, patients in the tramadol group who had a pain intensity score of ≥4 were switched to an oral tramadol immediate-release 100 mg capsule until the pain intensity score fell to <4. All patients who had achieved a pain intensity of ≤4 were shifted to a lower dose of either Tapentadol NS 22.5 mg or an oral tramadol immediate-release 50 mg capsule. The statistical analysis plan was envisaged as a non-inferiority comparison with tramadol for the pain intensity difference at 60 minutes (PID60min), the sum of pain intensity differences at 60 minutes (SPID60min), and the Physician Global Assessment at 24 hrs (PGA24hrs). Results: The per-protocol analyses involved 255 hospitalized cases undergoing surgical procedures; the median age of patients was 38.0 years. For the primary efficacy variables, Tapentadol NS was non-inferior to IV/oral tramadol in the relief of moderate to severe post-operative pain. On the basis of SPID60min, no clinically significant difference was observed between Tapentadol NS and Tramadol IV (1.73 ± 2.24 vs. 1.64 ± 1.92, -0.09 [95% CI, -0.43, 0.60]). For the co-primary endpoint PGA24hrs, Tapentadol NS was likewise non-inferior to Tramadol IV (2.12 ± 0.707 vs. 2.02 ± 0.704, -0.11 [95% CI, -0.07, 0.28]). However, on further assessment at 48 hrs, 72 hrs, and 120 hrs, clinically superior pain relief was observed with the Tapentadol NS formulation, which was statistically significant (p < 0.05) at each of the time intervals. Secondary efficacy measures, including the onset of clinical analgesia and TOTPAR, also showed non-inferiority to tramadol. The safety profile and the need for rescue medication were similar in both groups during the treatment period. The most common concomitant medications were anti-bacterials (98.3%). Conclusion: Tapentadol NS is a clinically feasible option for improved compliance as induction and maintenance therapy, offering a sustained and persistent patient response that is clinically meaningful in post-surgical settings.
Keywords: tapentadol nasal spray, acute pain, tramadol, post-operative pain
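The non-inferiority reasoning in the Results can be illustrated with a short sketch that reconstructs a two-sample 95% confidence interval for the difference in SPID60min means using Welch's approximation. Only the means and standard deviations are taken from the abstract; the group sizes and the non-inferiority margin are assumed placeholders (the margin is not stated in the abstract). With roughly equal group sizes, the reconstructed interval comes out close in width to the reported one.

```python
# Sketch: two-sample 95% CI for the difference in SPID60min means, checked
# against an assumed non-inferiority margin. Group sizes and the margin are
# placeholders; only the means and SDs come from the abstract.
import math
from scipy import stats

mean_t, sd_t, n_t = 1.73, 2.24, 128   # Tapentadol NS (n assumed ~half of 255)
mean_c, sd_c, n_c = 1.64, 1.92, 127   # Tramadol comparator (n assumed)
margin = -0.75                        # assumed non-inferiority margin

diff = mean_t - mean_c
se = math.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)

# Welch-Satterthwaite degrees of freedom for unequal variances
df = (sd_t**2 / n_t + sd_c**2 / n_c) ** 2 / (
    (sd_t**2 / n_t) ** 2 / (n_t - 1) + (sd_c**2 / n_c) ** 2 / (n_c - 1)
)
t_crit = stats.t.ppf(0.975, df)
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
print("non-inferior" if ci[0] > margin else "non-inferiority not shown")
```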
Procedia PDF Downloads 250