Search results for: peace studies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11680

1990 Oxidovanadium(IV) and Dioxidovanadium(V) Complexes: Efficient Catalyst for Peroxidase Mimetic Activity and Oxidation

Authors: Mannar R. Maurya, Bithika Sarkar, Fernando Avecilla

Abstract:

Peroxidases are successfully used in different industrial processes in medicine, the chemical industry, food processing and agriculture. However, natural peroxidases have intrinsic drawbacks: denaturation by proteases, special storage requirements and high cost. Artificial enzyme mimics have therefore become an active research interest, since they offer ease of preparation, low cost and good stability of activity compared with conventional enzymes, and they overcome the drawbacks of natural enzymes such as serine proteases. A large number of artificial enzymes have been synthesized by assimilating a catalytic centre into Schiff base complexes, anchored ligands, supramolecular complexes, hematin, porphyrins and nanoparticles to mimic natural enzymes. In recent years, a considerable number of vanadium complexes have also been reported, reflecting a continuing increase in interest in bioinorganic chemistry; to the best of our knowledge, however, artificial enzyme mimics based on vanadium complexes remain little explored. Recently, our group reported synthetic vanadium Schiff base complexes capable of mimicking peroxidases. Herein, we have synthesized oxidovanadium(IV) and dioxidovanadium(V) complexes of pyrazolone derivatives (extensively studied on account of their broad range of pharmacological applications). All complexes were characterized by various spectroscopic techniques such as FT-IR, UV-Visible and NMR (1H, 13C and 51V), elemental analysis, thermal studies and single-crystal analysis. The peroxidase mimetic activity was studied through the oxidation of pyrogallol to purpurogallin with hydrogen peroxide at pH 7, followed by measurement of the kinetic parameters. The Michaelis-Menten behavior shows excellent catalytic activity relative to the natural counterparts, e.g. V-HPO and HRP. The obtained kinetic parameters (Vmax, kcat) were also compared with those of peroxidase and haloperoxidase enzymes, making these complexes promising peroxidase mimics. In addition, the catalytic activity was studied for the oxidation of 1-phenylethanol in the presence of H2O2 as an oxidant. Various parameters such as the amounts of catalyst and oxidant, reaction time, reaction temperature and solvent were varied to obtain the maximum yield of oxidation products of 1-phenylethanol.
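For reference, the kinetic parameters quoted above are typically extracted by fitting initial oxidation rates to the standard Michaelis-Menten form (quoted here as the textbook relation, not a result specific to this work):

\[
v_0 = \frac{V_{\max}\,[S]}{K_M + [S]}, \qquad k_{\mathrm{cat}} = \frac{V_{\max}}{[E]_0},
\]

where \([S]\) is the pyrogallol concentration, \([E]_0\) the catalyst concentration, and \(K_M\) the substrate concentration at which the rate is half of \(V_{\max}\).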

Keywords: oxovanadium(IV)/dioxidovanadium(V) complexes, NMR spectroscopy, crystal structure, peroxidase mimic activity towards oxidation of pyrogallol, oxidation of 1-phenylethanol

Procedia PDF Downloads 341
1989 Identification and Management of Septic Arthritis of the Untouched Glenohumeral Joint

Authors: Sumit Kanwar, Manisha Chand, Gregory Gilot

Abstract:

Background: Septic arthritis of the shoulder has infrequently been discussed. Infection of the untouched shoulder has not heretofore been described. We present four patients with glenohumeral septic arthritis. Methods: Case 1: A 59-year-old male with left shoulder pain in the anterior, posterior and superior aspects. Case 2: A 60-year-old male with fever, chills, and generalized muscle aches. Case 3: A 70-year-old male with right shoulder pain about the anterior and posterior aspects. Case 4: A 55-year-old male with global right shoulder pain, swelling, and limited range of motion (ROM). Results: In case 1, the left shoulder was affected. On physical examination, swelling was notable, and there was global tenderness with a painful range of motion. Laboratory values indicated an erythrocyte sedimentation rate (ESR) of 96 and a C-reactive protein (CRP) of 304.30. Imaging studies were performed, and MRI indicated a high suspicion for an abscess with osteomyelitis of the humeral head. In our second case, the left arm was affected. He had swelling, global tenderness and painful ROM. His ESR was 38 and his CRP was 14.9. X-ray showed severe arthritis. Case 3 differed in that the right arm was affected. Again, global tenderness and painful ROM were observed. His ESR was 94 and his CRP was 10.6. X-ray displayed an eroded glenoid space. In our fourth case, the right shoulder was affected. He had global tenderness and painful, limited ROM. ESR was 108 and CRP was 2.4. X-ray was non-significant. Discussion: Monoarticular septic arthritis of the virgin glenohumeral joint is seldom diagnosed in clinical practice. Common denominators include elevated ESR, painful and limited ROM, and involvement of the dominant arm. Males are more frequently affected, with an average age of 57. Septic arthritis is managed with incision and drainage or needle aspiration of synovial fluid, supplemented with 3-6 weeks of intravenous antibiotics. Arthroscopy is preferred because it provides better irrigation and joint visualization. Open surgical drainage may be indicated if the above methods fail. Conclusion: If a middle-aged male presents with vague anterior or posterior shoulder pain, elevated inflammatory markers and a low-grade fever, an x-ray should be performed. If this displays degenerative joint disease, complete the further workup with advanced imaging, such as MRI, CT, or ultrasound. If these imaging modalities display anterior joint-space effusion with soft tissue involvement, septic arthritis of the untouched glenohumeral joint should be suspected and surgery is indicated.

Keywords: glenohumeral joint, identification, infection, septic arthritis, shoulder

Procedia PDF Downloads 422
1988 Concepts of Creation and Destruction as Cognitive Instruments in World View Study

Authors: Perizat Balkhimbekova

Abstract:

Evolutionary changes in the cognitive world view taking place in the last decades are accompanied by changes in the perception of the key concepts related to a certain lingua-cultural sphere. Such concepts also reflect a person's attitude to essential processes in the sphere of concepts, e.g. the opposite operations of creation and destruction. These changes in people's life and thinking are displayed in the language world view. In order to reveal the content of mental structures and concepts, we should use language means as observable results of people's cognitive activity. The semantics of words, free phrases and idioms should be considered an authoritative source of information concerning concepts. The regularized set of concepts in people's consciousness forms the sphere of concepts. Cognitive linguistics widely discusses the sphere of concepts as its crucial category, defining it as the field of knowledge made up of concepts. It is considered that a sphere of concepts comprises various types of association and forms conceptual fields. As material for the given research, data from the Russian National Corpus (RNC) and the British National Corpus (BNC) were used. It is necessary to point out that data provided by computational studies are intrinsic and verifiable, so we have used them in order to obtain reliable results. The procedure of the study was based on such techniques as extracting contexts containing the concepts of creation and destruction from the RNC and BNC, analyzing and interpreting those contexts on the basis of a cognitive approach, and finding correspondences between the given concepts in the Russian and English world views. The key problem of our study is to find the correspondence between elements of the world view represented by the opposite concepts of creation and destruction. Findings: The concept of "destruction" indicates a process which leads to full or partial destruction of an object; in other words, it is a loss of the object's primary essence: its structure, properties, distinctive signs and initial integrity. The concept of "creation", on the contrary, comprises positive characteristics and represents activity aimed at the improvement of a certain object, at the creation of ideal models of the world. On the other hand, destruction is represented much more widely in the RNC than creation (1254 occurrences of the first concept compared to 192 for the second). Our hypothesis consists in the antinomy represented by the aforementioned concepts. Being opposite in terms of semantics, pragmatics and axiology, they are at the same time complementary and interrelated concepts.

Keywords: creation, destruction, concept, world view

Procedia PDF Downloads 346
1987 Efficacy of Knowledge Management Practices in Selected Public Libraries in the Province of Kwazulu-Natal, South Africa

Authors: Petros Dlamini, Bethiweli Malambo, Maggie Masenya

Abstract:

Knowledge management practices are very important in public libraries, especially in the era of the information society. The success of public libraries depends on the recognition and application of knowledge management practices. The study investigates the value and challenges of knowledge management practices in public libraries. Three research objectives informed the study: to identify knowledge management practices in public libraries, to understand the value of knowledge management practices in public libraries, and to determine the factors hampering knowledge management practices in public libraries. The study was informed by the interpretivist research paradigm, which is associated with qualitative studies. In that light, the study collected data from eight librarians and/or library heads, who were purposively selected from public libraries. The study adopted a social anthropological approach, which thoroughly evaluated each participant's response. Data were collected from the respondents through telephonic semi-structured interviews and assessed accordingly. Furthermore, the study used latent content analysis for data interpretation. The chosen data analysis method allowed the study to achieve its main purpose with concrete and valid information. The study's findings showed that all six (100%) selected public libraries apply knowledge management practices. The findings revealed that knowledge sharing is the main knowledge management practice in public libraries. It was noted that public libraries employ many practices, but each library employed the practices of its choice depending on its knowledge management structure. The findings further showed that knowledge management practices in public libraries are implemented through meetings, training, information sessions, and awareness campaigns, to mention a few. The findings revealed that knowledge management practices make the libraries usable. Furthermore, it was asserted that knowledge management practices in public libraries meet users' needs and expectations and equip them with skills. It was discovered that all participating public libraries from Umkhanyakude district municipality valued their knowledge management practices as the pillar and foundation of their services. Noticeably, knowledge management practices improve users' standard of living and build an information society. The findings of the study showed that librarians should be responsible for the value of knowledge management practices, as they are qualified personnel. The results also showed that 83.35% of public libraries had factors hampering knowledge management practices. These factors include, but are not limited to, a shortage of funds, resources and space, and political interference. Several suggestions were made to improve knowledge management practices in public libraries, including improving library budgets, increasing the size of library buildings, and conducting more staff training.

Keywords: knowledge management, knowledge management practices, storage, dissemination

Procedia PDF Downloads 94
1986 An Investigation of the High-Frequency Isolation Performance of Quasi-Zero-Stiffness Vibration Isolators

Authors: Chen Zhang, Yongpeng Gu, Xiaotian Li

Abstract:

Quasi-zero-stiffness (QZS) vibration isolation technology, which enables ultra-low-frequency vibration isolation, has garnered significant attention in both academia and industry. In modern industries such as shipbuilding and aerospace, rotating machinery generates vibrations over a wide frequency range, imposing more stringent requirements on vibration isolation technologies: they must not only achieve ultra-low starting isolation frequencies but also provide effective isolation across mid- to high-frequency ranges. However, existing research on QZS vibration isolators primarily focuses on frequency ranges below 50 Hz. Moreover, studies have shown that in the mid- to high-frequency ranges, QZS isolators tend to generate resonance peaks that adversely affect their isolation performance. This limitation significantly restricts the practical applicability of QZS isolation technology. To address this issue, the present study investigates the high-frequency isolation performance of two typical QZS isolators: the three-spring QZS isolator (a mechanism type) and the bowl-shaped QZS isolator (a structure type). First, the parameter conditions required to achieve quasi-zero-stiffness characteristics for the two isolators are derived from static mechanical analysis. The theoretical transmissibility characteristics are then calculated using the harmonic balance method. Three-dimensional finite element models of the two QZS isolators are developed in the ABAQUS simulation software, and transmissibility curves are computed for the 0-500 Hz frequency range. The results indicate that the three-spring QZS mechanism exhibits multiple higher-order resonance peaks in the mid- to high-frequency ranges due to the higher-order modes of the springs. Springs with fewer coils and larger diameters can shift the higher-order modes to higher frequencies but cannot entirely eliminate them. In contrast, the bowl-shaped QZS isolator, through shape optimization using a spline-based representation, effectively mitigates the generation of higher-order resonance modes, resulting in superior isolation performance in the mid- to high-frequency ranges. This study provides essential theoretical insights for optimizing the vibration isolation performance of QZS technologies in complex, wide-frequency vibration environments, offering significant practical value for their application.
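As background for the transmissibility analysis mentioned above, a QZS isolator is usually described by a restoring force tuned so that its linear stiffness vanishes at the static equilibrium position; a schematic form that is standard in the QZS literature (not specific to the two isolators studied here) and the displacement transmissibility evaluated by the harmonic balance method are:

\[
f(x) \approx k_1 x + k_3 x^3 \quad (k_1 \to 0 \text{ at equilibrium by parameter tuning}), \qquad
T(\omega) = \left| \frac{X_{\mathrm{out}}(\omega)}{X_{\mathrm{in}}(\omega)} \right|,
\]

where \(x\) is the displacement from equilibrium, \(k_1\) and \(k_3\) are the effective linear and cubic stiffness coefficients, and \(T\) is the ratio of transmitted to imposed harmonic displacement amplitudes, often reported in dB as \(20\log_{10} T\).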

Keywords: quasi-zero-stiffness, wide-frequency vibration, vibration isolator, transmissibility

Procedia PDF Downloads 3
1985 The Perceived Impact of Consultancy Organisations and Social Enterprises: Converging and Diverging Discourses

Authors: Seda Muftugil-Yalcin

Abstract:

With the proliferation of social enterprises worldwide, there is now a whole ecosystem of different organisational actors revolving around them. Impact hubs, incubation centers, and organisations (profit or non-profit) that offer consultancy services to social enterprises can be said to constitute one such cluster in the ecosystem. These organisations offer a variety of services to social enterprises that wish to maximize their positive social impact. With regard to impact measurement especially, numerous systems, guides, approaches and tools have been developed that claim to benefit social enterprises. Many organisations choose one of the existing tools and craft programs that help social enterprises measure and manage their social impact. However, empirical evidence on how the services of these consultancy organisations are actually utilized in the field is scarce. This inevitably casts doubt on the impact of these organisations themselves. This research dwells on four case studies from the Netherlands and Turkey. In each country, two university-affiliated impact centers and two independent consultancy agencies that work with social entrepreneurs in the area of social impact measurement are closely examined. The overarching research question has been 'With regards to impact measurement, how do the founders/managers of these organisations perceive and make sense of their contribution to social enterprises and to the social entrepreneurship eco-system at large?' As for methodology, in-depth interviews were carried out with the managers/founders of these organisations, and discourse analysis was used for data analysis together with grounded theory. The comparison between Turkey and the Netherlands elucidates common denominators of the impact measurement hype and the discourses currently circulating worldwide. In addition, it reveals the differing priorities of social enterprises in these different settings, which shape what social enterprises expect from consultancy organisations. The comparison between university-affiliated impact hubs and independent consultancy organisations also yields important data about how different forms of consultancy organisations (in this case, university-based and independent) position themselves in relation to similar organisations with similar aims. The overall aim of the research is to reveal, through a cross-cultural study, the contribution of consultancy organisations that work with social enterprises to the social entrepreneurship field, as perceived by the organisations themselves. The findings indicate that in both settings, the organisations claiming to bring positive social impact to the social entrepreneurship ecosystem through their impact measurement trainings were themselves having a hard time concretizing their own contributions, which indicates that these organisations need a different impact measurement discourse from the ones they champion.

Keywords: consultancy organisations, social entrepreneurship, social impact measurement, social impact discourse

Procedia PDF Downloads 122
1984 Comparison of Home Ranges of Radio Collared Jaguars (Panthera onca L.) in the Dry Chaco and Wet Chaco of Paraguay

Authors: Juan Facetti, Rocky McBride, Karina Loup

Abstract:

The Chaco Region of Paraguay is a key biodiverse area for the conservation of jaguars (Panthera onca), the largest feline of the Americas. It comprises five eco-regions, which hold important but decreasing populations of this species. In recent decades, the expansion of soybean over the Atlantic Forest has forced the relocation of cattle ranches towards the Chaco. Few studies of jaguar population densities in the American hemisphere have been carried out until now. In the region, the species is listed as vulnerable or threatened, and more information is needed to implement any conservation policy. Among the factors that threaten the populations are land-use change, habitat fragmentation, prey depletion and illegal hunting. The two largest eco-regions were studied: the Wet Chaco and the Dry Chaco. From 2002, more than 20 jaguars were captured and fitted with GPS collars. Data collected from 11 GPS collars were processed, transformed numerically and finally converted into maps for analysis. A total of 8,092 locations were determined over 1,867 days for four adult females (AF) and one adult male (AM) in the Wet Chaco, and one AF, one juvenile male (JM) and four AM in the Dry Chaco. GIS and kernel methodology were used to calculate daily distance of movement, home range (HR, 95% isopleth), and core area (considered as the 50% isopleth). In the Wet Chaco, HR were 56 km2 and 238 km2 for females and males respectively, while in the Dry Chaco, HR were 685 km2 and 844.5 km2 for females and males respectively, and 172 km2 for a juvenile. Core areas of individual activity were on average 11.5 km2 and 33.55 km2 for AF and AM respectively in the Wet Chaco, while in the Dry Chaco they were larger: 115 km2 for five AM, 225 km2 for an AF and 32.4 km2 for a JM. In both eco-regions, only one relevant overlap of adult HR was recorded: during the reproduction season, the HR (95% kernel) of one AM overlapped 49.83% with that of one AF. In the Wet Chaco, the maximum daily distance moved was 14.5 km for an AF and 11.6 km for the AM, while the maximum mean daily moved (MMDM) distance was 5.6 km for an AF and 3.1 km for an AM. In the Dry Chaco, the maximum daily distance was 61.7 km for an AF, 50.9 km for the AM and 6.6 km for the JM, while the MMDM distance was 13.2 km for an AM and 8.4 km for an AF. This study confirmed that, as the invasion of jaguar habitat increases, the resulting fragmented landscapes influence the spacing patterns of jaguars. Males used larger home ranges than females and covered larger daily distances. There appeared to be important spatial segregation not only between females but also between males. It is likely that the larger areas used by males are partly caused by the sexual dimorphism in body size, which entails differences in prey requirements; this could also explain the larger distances travelled daily by males.
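For readers unfamiliar with the kernel approach, the following minimal sketch (Python, with hypothetical projected coordinates in km; not the authors' code or data) shows how a 95% or 50% utilization-distribution isopleth area can be derived from GPS fixes with a Gaussian kernel density estimate:

```python
import numpy as np
from scipy.stats import gaussian_kde

def isopleth_area(xy, level=0.95, grid_size=200, pad=0.2):
    """Area (in squared units of xy) of the utilization-distribution
    isopleth containing `level` of the estimated probability mass."""
    kde = gaussian_kde(xy.T)                      # xy: (n, 2) relocations
    xmin, ymin = xy.min(axis=0)
    xmax, ymax = xy.max(axis=0)
    dx, dy = (xmax - xmin) * pad, (ymax - ymin) * pad
    xs = np.linspace(xmin - dx, xmax + dx, grid_size)
    ys = np.linspace(ymin - dy, ymax + dy, grid_size)
    gx, gy = np.meshgrid(xs, ys)
    dens = kde(np.vstack([gx.ravel(), gy.ravel()]))
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])      # grid-cell area
    mass = np.sort(dens)[::-1] * cell             # probability mass per cell, densest first
    n_cells = np.searchsorted(np.cumsum(mass), level) + 1
    return n_cells * cell

# Hypothetical GPS fixes (projected coordinates in km)
rng = np.random.default_rng(0)
fixes = rng.normal(size=(500, 2)) * [4.0, 3.0]
print("95% HR:", round(isopleth_area(fixes, 0.95), 1), "km2")
print("50% core:", round(isopleth_area(fixes, 0.50), 1), "km2")
```

Dedicated home-range tools (for example the R package adehabitatHR) implement the same idea with more refined bandwidth choices.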

Keywords: Chaco ecoregions, Jaguar, home range, Panthera onca, Paraguay

Procedia PDF Downloads 302
1983 Biogas Potential of Deinking Sludge from Wastepaper Recycling Industry: Influence of Dewatering Degree and High Calcium Carbonate Content

Authors: Moses Kolade Ogun, Ina Korner

Abstract:

To improve sustainable resource management in the wastepaper recycling industry, studies into the valorization of the wastes generated by the industry are necessary. The industry produces different residues, among which is the deinking sludge (DS). The DS is generated by the deinking process and constitutes a major fraction of the residues generated by the European pulp and paper industry. The traditional treatment of DS by incineration is capital intensive due to the energy required for dewatering and the need for a complementary fuel source owing to the low calorific value of DS. This could be replaced by a biotechnological approach. This study therefore investigated the biogas potential of different DS streams (different dewatering degrees) and the influence of the high calcium carbonate content of DS on its biogas potential. A dewatered DS (solid fraction) sample from a filter press and the filtrate (liquid fraction) were collected from a partner wastepaper recycling company in Germany. The solid fraction and the liquid fraction were mixed in proportion to realize DS with different water contents (55–91% fresh mass). Spiked samples of DS using deionized water, cellulose and calcium carbonate were prepared to simulate DS with varying calcium carbonate content (0–40% dry matter). Seeding sludge was collected from an existing biogas plant treating sewage sludge in Germany. Biogas potential was studied using a 1-liter batch test system under mesophilic conditions and was run for 21 days. Specific biogas potentials in the range of 133–230 NL/kg organic dry matter were observed for the DS samples investigated. It was found that an increase in the liquid fraction leads to an increase in the specific biogas potential and a reduction in the absolute biogas potential (NL-biogas/fresh mass). By comparing the absolute biogas potential curve and the specific biogas potential curve, an optimal dewatering degree corresponding to a water content of about 70% fresh mass was identified. This degree of dewatering is a compromise when factors such as biogas yield, reactor size, energy required for dewatering and operating cost are considered. No inhibitory influence on the biogas potential of DS was observed due to its reported high calcium carbonate content. This study confirms that DS is a potential bioresource for biogas production. Further optimization, such as nitrogen supplementation due to the high C/N ratio of DS, can increase the biogas yield.
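The mixing step described above is a simple two-component water balance; the sketch below (Python, with assumed water contents for the two fractions, since the filtrate value is not stated in the abstract) illustrates how the blending ratio for a target water content can be computed:

```python
def filtrate_share(w_solid, w_filtrate, w_target):
    """Mass fraction of filtrate (liquid fraction) per kg of blended fresh mass
    needed to hit a target water content; all values as fractions of fresh mass."""
    if not w_solid < w_target < w_filtrate:
        raise ValueError("target must lie between the two fractions' water contents")
    return (w_target - w_solid) / (w_filtrate - w_solid)   # lever rule on water content

# Assumed values: dewatered DS at 55% water, filtrate at 99% water, target 70%
share = filtrate_share(0.55, 0.99, 0.70)
print(f"{share:.2f} kg filtrate per kg of blend")           # about 0.34
```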

Keywords: biogas, calcium carbonate, deinking sludge, dewatering, water content

Procedia PDF Downloads 184
1982 Musically Yours: Impact of Social Media Advertisement Music per the Circadian Rhythm

Authors: Payal Bose

Abstract:

The impact of music on consumers' attention and emotions at different parts of the day has rarely, if ever, been studied. Music has been widely studied along other parameters, such as in-store music and its atmospheric effects, to understand consumer arousal, in-store traffic, perceptions of visual stimuli, and actual time spent in the store. Other parameters, such as tempo, shopper's age, volume, music preference, and the use of music as foreground or background, have also been well researched as mediators impacting consumer behavior. However, no study has examined the influence of music in social media advertisements and its impact on the consumer's mind. Most studies have addressed the influence of music on the consumer's conscious processing. A recent study found that playing pleasant music is more effective in enhancing supermarket sales on weekdays than on weekends. This leads to a more pertinent question about the impact of music during different parts of the day: how it affects attention and emotion in the consumer's mind is an interesting question, given the high day-to-day consumption of social media advertisements in recent years. This study would help brands on social media structure their advertisements and engage more consumers with their products. Prior literature has examined the effects of music on consumers largely in the retail, brick-and-mortar format, so most of the outcomes apply to physical retail environments. However, with the rise of Web 3.0 and social media marketing, it would be interesting to see how consumers' attention and emotion can be studied with music embedded in an advertisement during different parts of the day. A smartphone is considered a personal gadget, and viewing social media advertisements on it is mostly an intimate experience; in a social media advertisement, most of the viewing happens on a one-to-one basis between the consumer and the brand advertisement. To the best of our knowledge, little or no work has explored the influence of music during different parts of the day (per the circadian rhythm) in advertising research. Previous work on social media advertising has explored the timing of social media posts, deploying targeted content advertising, appropriate content, reallocation of time, and advertising expenditure. Hence, I propose studying advertisements embedded with music during different parts of the day and their influence on consumers' attention and emotions. To address the research objectives and knowledge gap, a neuroscientific approach using fMRI and eye-tracking is intended. The influence of music embedded in social media advertisements during different parts of the day would be assessed.

Keywords: music, neuromarketing, circadian rhythm, social media, engagement

Procedia PDF Downloads 65
1981 Attention and Creative Problem-Solving: Cognitive Differences between Adults with and without Attention Deficit Hyperactivity Disorder

Authors: Lindsey Carruthers, Alexandra Willis, Rory MacLean

Abstract:

Introduction: It has been proposed that distractibility, a key diagnostic criterion of Attention Deficit Hyperactivity Disorder (ADHD), may be associated with higher creativity levels in some individuals. Anecdotal and empirical evidence has shown that ADHD is therefore beneficial to creative problem-solving and the generation of new ideas and products. Previous studies have only used one or two measures of attention, which is insufficient given that attention is a complex cognitive process. The current study aimed to determine in which ways performance on creative problem-solving tasks and a range of attention tests may be related, and whether performance differs between adults with and without ADHD. Methods: 150 adults, 47 males and 103 females (mean age=28.81 years, S.D.=12.05 years), were tested at Edinburgh Napier University. Of this set, 50 participants had ADHD, and 100 did not, forming the control group. Each participant completed seven attention tasks, assessing focussed, sustained, selective, and divided attention. Creative problem-solving was measured using divergent thinking tasks, which require multiple original solutions to one given problem. Two types of divergent thinking task were used: verbal (requiring written responses) and figural (requiring drawn responses). Each task is scored for idea originality, with higher scores indicating more creative responses. Correlational analyses were used to explore relationships between attention and creative problem-solving, and t-tests were used to study between-group differences. Results: The control group scored higher on originality for figural divergent thinking (t(148)= 3.187, p< .01), whereas the ADHD group had more original ideas for the verbal divergent thinking task (t(148)= -2.490, p < .05). Within the control group, figural divergent thinking scores were significantly related to both selective (r= -.295 to -.285, p < .01) and divided attention (r= .206 to .290, p < .05). In contrast, within the ADHD group, both selective (r= -.390 to -.356, p < .05) and divided (r= .328 to .347, p < .05) attention were related to verbal divergent thinking. Conclusions: Selective and divided attention are both related to divergent thinking; however, the performance patterns differ between the groups, which may point to cognitive variance in how these problems are processed and managed. The creative differences previously found between those with and without ADHD may depend on task type, which, to the author's knowledge, has not been distinguished previously. It appears that ADHD does not specifically lead to higher creativity, but it may provide an explanation for creative differences when compared to those without the disorder.
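As an illustration of the analysis pipeline described above (an independent-samples t-test for group differences and Pearson correlations within each group), the following minimal Python sketch uses simulated scores, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_ind, pearsonr

rng = np.random.default_rng(1)
# Simulated originality scores for a figural divergent-thinking task
control = rng.normal(loc=10.0, scale=2.0, size=100)
adhd = rng.normal(loc=9.0, scale=2.0, size=50)

t, p = ttest_ind(control, adhd)                  # between-group difference
print(f"t({len(control) + len(adhd) - 2}) = {t:.3f}, p = {p:.4f}")

# Simulated within-group correlation: divided-attention score vs. originality
divided_attention = control * 0.3 + rng.normal(scale=1.0, size=100)
r, p_r = pearsonr(divided_attention, control)
print(f"r = {r:.3f}, p = {p_r:.4f}")
```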

Keywords: ADHD, attention, creativity, problem-solving

Procedia PDF Downloads 456
1980 Lactate Biostimulation for Remediation of Aquifers Affected by Recalcitrant Sources of Chloromethanes

Authors: Diana Puigserver Cuerda, Jofre Herrero Ferran, José M. Carmona Perez

Abstract:

In the transition zone between aquifers and basal aquitards, DNAPL pools of chlorinated solvents are more recalcitrant than at other depths in the aquifer. Although degradation of carbon tetrachloride (CT) and chloroform (CF) occurs in this zone, it is a slow process, which is why an adequate remediation strategy is necessary. The working hypothesis of this study is that biostimulation of the transition zone of an aquifer contaminated by CT and CF can be an effective remediation strategy. This hypothesis has been tested at a site in an unconfined aquifer in which the major contaminants were CT and CF of industrial origin and where the hydrochemical background was rich in other compounds that can hinder the natural attenuation of chloromethanes. Field studies and five laboratory microcosm experiments were carried out on groundwater and sediments to identify: i) the degradation processes of CT and CF; ii) the structure of the microbial communities; and iii) the microorganisms implicated in this degradation. For this, the concentrations of contaminants and co-contaminants (nitrate and sulfate), Compound Specific Isotope Analysis, molecular techniques (Denaturing Gradient Gel Electrophoresis) and clone library analysis were used. The main results were: i) degradation of CT and CF occurred in groundwater and in the less conductive sediments; ii) sulfate-reducing conditions in the transition zone were strong and similar to those in the source of contamination; iii) two microorganisms (Azospira suillum and a bacterium of the Clostridiales order) compatible with the role of carrying out the reductive dechlorination of CT, CF and their degradation products (dichloromethane and chloromethane) were identified in the transition zone in both the field and the lab experiments; iv) these two microorganisms were present at the high starting concentrations of the microcosm experiments (similar to those in the DNAPL source) and remained present until the last day of the lactate biostimulation; and v) the lactate biostimulation gave rise to the fastest and highest degradation rates and promoted the elimination of other electron acceptors (e.g. nitrate and sulfate). All these results are evidence that lactate biostimulation can be effective in remediating the source and the plume, especially in the transition zone, and they highlight the environmental relevance of treating contaminated transition zones in industrial contexts similar to the one studied.

Keywords: Azospira suillum, lactate biostimulation of carbon tetrachloride and chloroform, reductive dechlorination, transition zone between aquifer and aquitard

Procedia PDF Downloads 176
1979 Loss of Control Eating as a Key Factor of the Psychological Symptomatology Related to Childhood Obesity

Authors: L. Beltran, S. Solano, T. Lacruz, M. Blanco, M. Rojo, M. Graell, A. R. Sepulveda

Abstract:

Introduction and Objective: Given the difficulties of assessing Binge Eating Disorder during childhood, episodes of Loss of Control (LOC) eating can be a key symptom. The objective is to determine the prevalence of eating psychopathology depending on the type of evaluation and to find out which psychological characteristics differentiate overweight or obese children who present LOC from those who do not. Material and Methods: 170 children from 8 to 12 years of age with overweight or obesity (P > 85) were evaluated through the Primary Care Centers of Madrid. Sociodemographic data and psychological measures were collected through the Kiddie-Schedule for Affective Disorders & Schizophrenia, Present & Lifetime Version (K-SADS-PL) diagnostic interview and self-applied questionnaires: children's eating attitudes (ChEAT), depressive symptomatology (CDI), anxiety (STAIC), general self-esteem (LAWSEQ), body self-esteem (BES), perceived teasing (POTS) and perfectionism (CAPS). Results: 15.2% of the sample exceeded the ChEAT cut-off point, presenting a risk of pathological eating; 5.88% presented an eating disorder according to the diagnostic interview (2.35% Binge Eating Disorder), and 33.53% had LOC episodes. No relationship was found between the presence of LOC and a clinical diagnosis of an eating disorder according to DSM-V; however, the group with LOC presented a higher risk of eating psychopathology on the ChEAT (p < .02). Significant differences were found in the group with LOC (p < .02): higher z-BMI, lower body self-esteem, greater anxious symptomatology, greater frequency of weight-related teasing, and a greater effect of teasing related both to weight and to competencies, compared with their peers without LOC. Conclusion: In line with previous studies of samples of overweight children, in this Spanish sample of children with obesity we found a moderate prevalence of eating disorders and a high presence of LOC episodes, which are related to both eating and general psychopathology. These findings confirm that excluding LOC episodes as a diagnostic criterion can underestimate the presence of eating psychopathology during this developmental stage. According to these results, it is highly recommended to promote school-based programs that address LOC episodes in order to reduce the associated symptoms. This study is part of a project funded by the Ministry of Innovation and Science (PSI2011-23127).

Keywords: childhood obesity, eating psychopathology, loss-of-control eating, psychological symptomatology

Procedia PDF Downloads 106
1978 Hierarchy and Weight of Influence Factors on Labor Productivity in the Construction Industry of the Nepal

Authors: Shraddha Palikhe, Sunkuk Kim

Abstract:

The construction industry is the most labor-intensive industry in Nepal. It is obvious that construction is a major sector, and any productivity enhancement activity in this sector will have a positive impact on the overall improvement of the national economy. Previous studies have stated that Nepal has poor labor productivity compared with other South Asian countries. Though considerable research has been done on productivity factors in other countries, no study has addressed labor productivity issues in Nepal. Therefore, the main objective of this study is to identify and rank the influence factors for poor labor productivity. In this study, a questionnaire survey of thirty experts involved in the construction industry, such as architects, civil engineers, project engineers and site engineers, was chosen as the survey method. The survey was conducted in Nepal to identify the major factors impacting construction labor productivity. The Analytic Hierarchy Process (AHP) was used to understand the underlying relationships among the factors, which were categorized into five groups, namely (1) labor management group, (2) material management group, (3) human labor group, (4) technological group and (5) external group, and divided into 33 sub-factors. AHP was used to establish the relative importance of the criteria. The AHP makes pairwise comparisons of relative importance between hierarchy elements grouped by labor productivity decision criteria. Respondents were asked to answer based on their experience of construction works. On the basis of the respondents' responses, the weights of all factors were calculated and ranked. The AHP results were tabulated based on the weight and ranking of the influence factors. The AHP model consists of five main criteria and 33 sub-criteria. Among the five main criteria, the highest weight, 26.15%, is assigned to the human labor group, followed by 23.01% for the technological group, 22.97% for the labor management group, 17.61% for the material management group and 10.25% for the external group. Among the 33 sub-criteria, the most influential factors for poor productivity in Nepal are lack of monetary incentives (20.53%) in the human labor group, unsafe working conditions (17.55%) in the technological group, lack of leadership (18.43%) in the labor management group, unavailability of tools at the site (25.03%) in the material management group and strikes (35.01%) in the external group. The results show that the AHP model and its associated criteria help characterize the current situation of labor productivity. It is essential to consider these influence factors to improve labor productivity in the construction industry of Nepal.
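For illustration, the following Python sketch shows how AHP priority weights and a consistency check are obtained from a pairwise comparison matrix via its principal eigenvector; the judgment values in the example matrix are hypothetical and are not taken from the survey:

```python
import numpy as np

# Saaty's random consistency indices by matrix size
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Priority weights and consistency ratio for a pairwise comparison matrix A."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalized priority vector
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    cr = ci / RI[n] if RI.get(n, 0) else 0.0     # consistency ratio (CR < 0.1 acceptable)
    return w, cr

# Hypothetical pairwise judgments for the five main criteria groups
A = [[1,   1,   2,   2,   3],
     [1,   1,   1,   2,   3],
     [1/2, 1,   1,   2,   2],
     [1/2, 1/2, 1/2, 1,   2],
     [1/3, 1/3, 1/2, 1/2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), "CR =", round(cr, 3))
```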

Keywords: construction, hierarchical analysis, influence factors, labor productivity

Procedia PDF Downloads 404
1977 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings

Authors: Chen Wang, Jared Evans, Yan Asmann

Abstract:

With the rapid evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to accurately detect copy number variations (CNVs) as well. To address these research and clinical needs, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reports. The developed methodology includes complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples are identified and removed automatically. Multiple experimental batches are routinely detected and reduced to a clean subset of samples before analysis. Algorithm improvements were also made to improve somatic CNV detection as well as germline CNV detection in trio families. Additionally, a set of utilities was included to help users produce CNV plots for focused genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas cancer samples and in a lymphoma case study with paired tumor and normal samples. We also show the efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from the phase III study of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing CNV calling results with results from orthogonal copy number platforms. Through our case studies, the reuse of exome sequencing data for calling CNVs offers several notable benefits, including better quality control of exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discoveries from under-utilized existing whole-exome and custom exome panel data.
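The coverage-pattern idea can be illustrated with a minimal sketch of the normalization step such pipelines typically start from (Python, synthetic depths; the authors' actual method additionally includes content-bias correction, batch handling and segmentation):

```python
import numpy as np

def log2_copy_ratio(sample_cov, ref_cov, pseudo=0.5):
    """Per-exon log2 copy ratio after median (library-size) normalization.

    sample_cov / ref_cov: mean read depth per exon for the test sample and a
    matched normal (or panel-of-normals) reference."""
    s = np.asarray(sample_cov, float) + pseudo
    r = np.asarray(ref_cov, float) + pseudo
    s /= np.median(s)
    r /= np.median(r)
    return np.log2(s / r)

# Synthetic depths: exons 10-14 carry a single-copy gain in the test sample
ref = np.full(30, 100.0)
test = np.full(30, 100.0)
test[10:15] *= 1.5
print(np.round(log2_copy_ratio(test, ref), 2))   # gained exons stand out near log2(1.5)
```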

Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing

Procedia PDF Downloads 257
1976 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed

Authors: Marion G. Ben-Jacob, David Wang

Abstract:

There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering and Mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote the success of our students. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two methods of pedagogy and report the results of a study comparing them. Math is the foundation for science, technology, and engineering. It is generally used in STEM to find patterns in data. These patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned about, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. As we analyze data, we are using math to find the statistical correlation between a cause and its effect. Professionals who use math in this way include data scientists, biologists and geologists. Without math, most technology would not be possible. Math is the basis of binary, and without programming, you just have the hardware. Addition, subtraction, multiplication, and division are also used in almost every program written. Mathematical algorithms are inherent in software as well. Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment. They also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab; advanced computer software aids their research and production processes by modeling theoretical synthesis techniques and the properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines. Pedagogical research on formative strategies and the necessary topics to be covered is essential.

Keywords: emporium model, mathematics, pedagogy, STEM

Procedia PDF Downloads 75
1975 Multi-Criteria Decision Making Tool for Assessment of Biorefinery Strategies

Authors: Marzouk Benali, Jawad Jeaidi, Behrang Mansoornejad, Olumoye Ajao, Banafsheh Gilani, Nima Ghavidel Mehr

Abstract:

The Canadian forest industry is seeking to identify and implement transformational strategies for enhanced financial performance through the emerging bioeconomy or, more specifically, through the concept of the biorefinery. For example, processing forest residues or surplus biomass available at mill sites for the production of biofuels, biochemicals and/or biomaterials is one attractive strategy alongside traditional wood and paper products and cogenerated energy. There are many possible process-product biorefinery pathways, each associated with specific product portfolios and different levels of risk. Thus, it is not obvious which strategy the forest industry should select and implement. Therefore, there is a need for analytical and design tools that enable evaluating biorefinery strategies against a set of criteria from a perspective of sustainability over the short and long terms, while selecting the existing core products as well as the new product portfolio. In addition, it is critical to assess the manufacturing flexibility needed to internalize the risk from market price volatility of each targeted bio-based product in the product portfolio prior to investing heavily in any biorefinery strategy. The proposed paper will focus on introducing a systematic methodology for designing integrated biorefineries using process systems engineering tools as well as a multi-criteria decision making framework to put forward the most effective biorefinery strategies that fulfill the needs of the forest industry. Topics to be covered include market analysis, techno-economic assessment, cost accounting, energy integration analysis, life cycle assessment and supply chain analysis. This will be followed by a description of the vision as well as the key features and functionalities of the I-BIOREF software platform, developed by CanmetENERGY of Natural Resources Canada. Two industrial case studies will be presented to demonstrate the robustness and flexibility of the I-BIOREF software platform: i) an integrated Canadian Kraft pulp mill with a lignin recovery process (namely, LignoBoost™); ii) a standalone biorefinery based on an ethanol-organosolv process.
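As a simple illustration of the multi-criteria step (not the I-BIOREF methodology itself), the sketch below scores hypothetical biorefinery strategies with a weighted sum after min-max normalization of the criteria; all numbers and weights are assumed for the example:

```python
import numpy as np

# Hypothetical criteria scores per strategy: [NPV, GHG reduction, market risk]
# Higher is better for the first two; risk is a cost-type criterion.
scores = np.array([[120.0, 0.30, 0.6],    # lignin recovery
                   [200.0, 0.45, 0.9],    # ethanol-organosolv
                   [ 80.0, 0.20, 0.3]])   # incremental upgrade of core products
weights = np.array([0.5, 0.3, 0.2])       # assumed stakeholder weights (sum to 1)
benefit = np.array([True, True, False])   # direction of preference per criterion

lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)              # min-max normalization to [0, 1]
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]   # flip cost-type criteria
ranking = norm @ weights                      # weighted-sum score per strategy
print(np.round(ranking, 3))                   # highest value = preferred strategy
```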

Keywords: biorefinery strategies, bioproducts, co-production, multi-criteria decision making, tool

Procedia PDF Downloads 232
1974 Investigation of Several New Ionic Liquids' Behaviour during ²¹⁰Pb/²¹⁰Bi Cherenkov Counting in Waters

Authors: Nataša Todorović, Jovana Nikolov, Ivana Stojković, Milan Vraneš, Jovana Panić, Slobodan Gadžurić

Abstract:

The detection of ²¹⁰Pb levels in aquatic environments is of interest for various scientific studies. Its precise determination is important not only for the radiological assessment of drinking waters; the distributions of ²¹⁰Pb and ²¹⁰Po in the marine environment are also significant for assessing the removal rates of particles from the ocean and particle fluxes during transport along the coast, as well as particulate organic carbon export in the upper ocean. Measurement techniques for ²¹⁰Pb determination (gamma spectrometry, alpha spectrometry, or liquid scintillation counting, LSC) are either time-consuming or demand expensive equipment or complicated chemical pre-treatments. One other possibility is to measure ²¹⁰Pb on an LS counter, provided it is in equilibrium with its progeny ²¹⁰Bi, through the Cherenkov counting method. This method is unaffected by chemical quenching and allows easy sample preparation, but it has the drawback of lower counting efficiencies than standard LSC methods, typically from 10% up to 20%. The aim of the research presented in this paper is to investigate a possible increase in the detection efficiency of Cherenkov counting during ²¹⁰Pb/²¹⁰Bi detection on the LS counter Quantulus 1220. Considering the naturally low levels of ²¹⁰Pb in aqueous samples, the addition of ionic liquids to the counting vials with the analysed samples has the benefit of lowering the detection limit during ²¹⁰Pb quantification. Our results demonstrated that the ionic liquid 1-butyl-3-methylimidazolium salicylate increases the Cherenkov counting efficiency more effectively than the previously explored 2-hydroxypropan-1-amminium salicylate. Consequently, the impact of a few other ionic liquids synthesized with the same cation group (1-butyl-3-methylimidazolium benzoate, 1-butyl-3-methylimidazolium 3-hydroxybenzoate, and 1-butyl-3-methylimidazolium 4-hydroxybenzoate) was explored in order to test their potential influence on the Cherenkov counting efficiency. It was confirmed that, among the explored ones, only the ionic liquids in the form of salicylates exhibit a wavelength-shifting effect. Namely, the addition of small amounts (around 0.8 g) of 1-butyl-3-methylimidazolium salicylate increases the detection efficiency from 16% to >70%, consequently reducing the detection threshold by more than four times. Moreover, the addition of ionic liquids could find application in the quantification of other radionuclides besides ²¹⁰Pb/²¹⁰Bi via the Cherenkov counting method.
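The quoted reduction of the detection threshold follows from the fact that, for a counting measurement, the minimum detectable activity scales inversely with the counting efficiency; in the commonly used Currie formulation (given here as the general relation, not the authors' specific calibration):

\[
\mathrm{MDA} \approx \frac{2.71 + 4.65\sqrt{B}}{\varepsilon\, t},
\]

where \(B\) is the background count, \(t\) the counting time and \(\varepsilon\) the detection efficiency. Raising \(\varepsilon\) from 0.16 to about 0.70 therefore lowers the detection threshold by a factor of roughly 0.70/0.16 ≈ 4.4, consistent with the "more than four times" stated above.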

Keywords: liquid scintillation counting, ionic liquids, Cherenkov counting, ²¹⁰Pb/²¹⁰Bi in water

Procedia PDF Downloads 103
1973 Positive Disruption: Towards a Definition of Artist-in-Residence Impact on Organisational Creativity

Authors: Denise Bianco

Abstract:

Several studies on innovation and creativity in organisations emphasise the need to expand horizons and take on alternative and unexpected views to produce something new. This paper theorises the potential impact artists can have as creative catalysts, working embedded in non-artistic organisations. It begins from an understanding that in today's ever-changing scenario, organisations are increasingly seeking to open up new creative thinking through deviant behaviours to produce innovation and that art residencies need to be critically revised in this specific context in light of their disruptive potential. On the one hand, this paper builds upon recent contributions made on workplace creativity and related concepts of deviance and disruption. Research suggests that creativity is likely to be lower in work contexts where utter conformity is a cardinal value and higher in work contexts that show some tolerance for uncertainty and deviance. On the other hand, this paper draws attention to Artist-in-Residence as a vehicle for epistemic friction between divergent and convergent thinking, which allows the creation of unparalleled ways of knowing in the dailiness of situated and contextualised social processes. In order to do so, this contribution brings together insights from the most relevant theories on organisational creativity and unconventional agile methods such as Art Thinking and direct insights from ethnographic fieldwork in the context of embedded art residencies within work organisations to propose a redefinition of Artist-in-Residence and their potential impact on organisational creativity. The result is a re-definition of embedded Artist-in-Residence in organisational settings from a more comprehensive, multi-disciplinary, and relational perspective that builds on three focal points. First the notion that organisational creativity is a dynamic and synergistic process throughout which an idea is framed by recurrent activities subjected to multiple influences. Second, the definition of embedded Artist-in-Residence as an assemblage of dynamic, productive relations and unexpected possibilities for new networks of relationality that encourage the recombination of knowledge. Third, and most importantly, the acknowledgment that embedded residencies are, at the very essence, bi-cultural knowledge contexts where creativity flourishes as the result of open-to-change processes that are highly relational, constantly negotiated, and contextualised in time and space.

Keywords: artist-in-residence, convergent and divergent thinking, creativity, creative friction, deviance and creativity

Procedia PDF Downloads 97
1972 Using Lean-Six Sigma Philosophy to Enhance Revenues and Improve Customer Satisfaction: Case Studies from Leading Telecommunications Service Providers in India

Authors: Senthil Kumar Anantharaman

Abstract:

Providing telecommunications-based network services in developing countries like India, which has a population of 1.5 billion people, so that these services reach every individual, is one of the greatest challenges the country has been facing in its journey towards economic growth and development. With a growing number of telecommunications service providers in the country, a constant challenge faced by these providers is to deliver not only quality but also a delightful customer experience while simultaneously generating enhanced revenues and profits. The role played by process improvement methodologies like Six Sigma therefore cannot be underestimated, and specifically in telecom service provider operations it has provided substantial benefits, quite comparable to its applications and advantages in other sectors such as manufacturing, financial services, information technology-based services and healthcare services. One of the key reasons this methodology has been able to reap great benefits in the telecommunications sector is that it has been combined with many of its competing process improvement techniques, like Theory of Constraints, Lean and Kaizen, to give the maximum benefit to service providers, creating a winning combination of organized process improvement methods for operational excellence and, in turn, business excellence. This paper discusses some of the key projects and areas in the end-to-end 'Quote to Cash' process at the big three Indian telecommunication companies that have been greatly assisted by applying Six Sigma along with other process improvement techniques. While the telecommunication companies we have considered are primarily in India and are run by both private operators and government-based setups, the methodology can be applied equally well in other developing countries around the world with a similar context. This study also examines the enhanced revenues that can arise from appropriate opportunities in emerging market scenarios, which Six Sigma as a philosophy and methodology can provide if applied with vigour and robustness. Finally, the paper presents a winning framework combining the Six Sigma methodology with Kaizen, Lean and Theory of Constraints that will enhance both the top line and the bottom line while providing customers a delightful experience.

Keywords: emerging markets, lean, process improvement, six sigma, telecommunications, theory of constraints

Procedia PDF Downloads 164
1971 Hydrodynamics and Hydro-acoustics of Fish Schools: Insights from Computational Models

Authors: Ji Zhou, Jung Hee Seo, Rajat Mittal

Abstract:

Fish move in groups for foraging, reproduction, predator protection, and hydrodynamic efficiency. The predator protection offered by schooling involves the "many eyes" theory, whereby the probability of detecting a predator increases in a group. The reduced visual signature in a group scales with school size, offering per-capita protection, and the "confusion effect" makes it hard for predators to target prey in a group. These benefits, however, all focus on vision-based sensing, overlooking sound-based detection. Fish, including predators, possess sophisticated sensory systems for pressure waves and underwater sound: the lateral line system detects acoustic waves, the otolith organs sense infrasound, and sharks use an auditory system for low-frequency sounds. Among the sound generation mechanisms of fish, dipole sound is related to the hydrodynamic pressure forces on the body surface of the fish, and this pressure would be affected by group swimming. Thus, swimming within a group could affect the hydrodynamic noise signature of fish and possibly serve as an additional protection afforded by schooling, but none of the studies to date have explored this effect. In addition, BAUVs (biomimetic autonomous underwater vehicles) with fin-like propulsors could reduce acoustic noise without compromising performance, addressing issues of anthropogenic noise pollution in marine environments. Therefore, in this study, we used our in-house immersed-boundary-method flow and acoustic solver, ViCar3D, to simulate fish schools consisting of four swimmers in the classic "diamond" configuration and discuss the feasibility of achieving higher swimming efficiency and controlling the far-field sound signature of the school. We examine the effects of the relative phase of fin flapping of the swimmers, and the simulation results indicate that the phase of fin flapping is a dominant factor in both thrust enhancement and the total sound radiated into the far field by a group of swimmers. For fish in the "diamond" configuration, a suitable combination of the relative phase differences between the pairs of leading and trailing fish can result in better swimming performance with significantly lower hydroacoustic noise.
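The dipole mechanism mentioned above is commonly described by the compact-body form of Curle's acoustic analogy, in which the far-field pressure is driven by the time rate of change of the unsteady hydrodynamic force on the body (quoted here as the textbook scaling, not the specific formulation used in ViCar3D):

\[
p'(\mathbf{x}, t) \approx \frac{x_i}{4\pi c_0 r^2}\, \frac{\partial F_i}{\partial t}\!\left(t - \frac{r}{c_0}\right),
\]

where \(F_i\) is the net unsteady force exerted on the fluid by the swimmer's surface, \(r = |\mathbf{x}|\) the observer distance and \(c_0\) the speed of sound. Coordinating the flapping phase within the school changes how the individual \(F_i\) contributions add up in the far field.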

Keywords: fish schooling, biopropulsion, hydrodynamics, hydroacoustics

Procedia PDF Downloads 61
1970 Literary Theatre and Embodied Theatre: A Practice-Based Research in Exploring the Authorship of a Performance

Authors: Rahul Bishnoi

Abstract:

Theatre, as Anne Ubersfeld calls it, is a paradox: at once a literary work and a physical representation. As a text, theatre is eternal, reproducible, and identical, while as a performance it is momentary and never identical to previous performances. In this dual existence of theatre, who is the author? Is the author the playwright who writes the dramatic text, the director who orchestrates the performance, or the actor who embodies the text? From the poststructuralist lens of Barthes, the author is dead. Barthes' argument of discrete temporality, i.e. the author is the before and the text is the after, does not hold true for theatre. A published literary work is written, edited, printed, distributed and then consumed by the reader. Theatrical production, on the other hand, is immediate: an actor performs and the audience witnesses it instantaneously. Time, so to speak, no longer separates the author, the text, and the reader. The question of authorship gets further complicated in Augusto Boal's "Theatre of the Oppressed" movement, where the audience participates directly in the performance, like the actors. In this research, the duality of theatre is explored through an experimental performance alongside the authorship discourse, and the conventional definition of authorship is subjected to additional complexity by erasing the distinction between actor and audience. The design/methodology of the experimental performance is as follows: the audience is asked to produce a text under an anonymous virtual alias. The text, as it is being produced, is read and performed by the actor. The audience, who are also collectively "authoring" the text, watch this performance and write further until everyone has contributed one input each. The cycle of writing, reading, performing, witnessing, and writing continues until the end. The intention is to create a dynamic system of writing/reading through the embodiment of the text by the actor. The actor gives up to the audience the power to write the spoken word, stage instructions and direction, while keeping the agency to interpret that input and perform it in the chosen manner. This rapid conversation between actor and audience also effects a conversion of authorship. The main conclusion of this study is a perspective on the nature of the dynamic authorship of theatre, containing a critical enquiry into the collaboratively produced text, the individually performed act, and the collectively witnessed event. Using practice as a methodology, this paper contests the poststructuralist notion of the author as merely a 'scriptor' and breaks it further by involving the audience in the authorship as well.

Keywords: practice based research, performance studies, post-humanism, Avant-garde art, theatre

Procedia PDF Downloads 110
1969 Emerging VC Industry and the Important Role of Marketing Expectations in Project Selection: Evidence on Russian Data

Authors: I. Rodionov, A. Semenov, E. Gosteva, O. Sokolova

Abstract:

Currently, venture capital is becoming an increasingly advanced and effective source of financing for innovation projects, which carry a high level of risk. In developed countries, it plays a key role in transforming innovation projects into successful businesses and in creating the prosperity of the modern economy. In Russia, many of the necessary preconditions for an effective venture investment system are in place: a network of public institutes for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling products with good market potential. However, the current system does not demonstrate the necessary level of efficiency in practice, which can be largely explained by the absence of a clear plan of action for forming a national venture model and by the lack of experience of successful venture deals with profitable exits in the Russian economy. This paper studies the influence of various factors on the development of the venture industry using the example of the IT sector in Russia. The choice of sector is based on the fact that this segment is the main driver of venture capital market growth in Russia and that the necessary data exist. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of the previous (first) round investment is used as a determinant. A dummy variable is also included in the regression to examine whether the participation of an investor with high reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals the prevailing influence of the volume of first-round investment on the volume of second-round venture investment. According to the results, the participation of investors with a first-class reputation has only a small impact on the value of the second-round investment. The expected positive dependence of second-round investment on the market growth rate forecast at the time of the deal is also rejected. The most important determinant of the value of the second-round investment is therefore the value of the first-round investment, meaning that the most competitive teams on the Russian market are the start-ups that can attract more money at the start, while target market growth is not a factor of crucial importance.
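A minimal sketch of a regression of this form is given below, fitted to synthetic data because the underlying Russian IT-sector deal data are not reproduced here; the variable names and the data-generating process are assumptions for illustration only.

# Minimal sketch (synthetic data): second-round size regressed on first-round size,
# a reputable-investor dummy, and the forecast market growth rate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "log_round1": rng.normal(13.0, 1.0, n),       # log of first-round investment size
    "top_investor": rng.integers(0, 2, n),        # reputable-investor dummy
    "market_growth": rng.normal(0.15, 0.05, n),   # forecast market growth at deal time
})
# Assumed data-generating process: round-2 size driven mainly by round-1 size
df["log_round2"] = (0.9 * df["log_round1"] + 0.05 * df["top_investor"]
                    + 0.1 * df["market_growth"] + rng.normal(0, 0.5, n))

model = smf.ols("log_round2 ~ log_round1 + top_investor + market_growth", data=df).fit()
print(model.summary().tables[1])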

Keywords: venture industry, venture investment, determinants of the venture sector development, IT-sector

Procedia PDF Downloads 353
1968 Energy Efficiency Line Guides for School Buildings in Florence in a Postgraduate Master Course

Authors: Lucia Ceccherini Nelli, Alessandra Donato

Abstract:

The ABITA Master course of the University of Florence, offered by the Department of Architecture, covers nearly all the energy-relevant issues that can arise in public and private companies and sectors. The main purpose of the Master course, active since 2003, is to analyse the energy consumption of building technologies, components, and structures at the conceptual design stage, which can be very helpful to designers when deciding among design alternatives and choosing the materials to be used in an energy-efficient building. The training course provides a solid basis for increasing the knowledge and skills of energy managers and is developed with an emphasis on practical experience acquired through case studies, measurements, and verification of energy-efficient solutions in buildings, in industry and in cities. The main objectives are: i) to raise the professional standards of those engaged in energy auditing; ii) to improve the practice of energy auditors by encouraging energy auditing professionals to join a continuing education program of professional development; iii) to train participants in the use of instrumentation for typical measurements; iv) to propose an integrated methodology that links energy analysis tools with green building certification systems. This methodology is applied at the early design stage of a project's life. The final output of the practical training is an elevated professionalism in the study of environmental design and energy management in buildings. The result is the drafting of guidelines for the energy refurbishment of public schools in Florence. The school heritage of the Municipality of Florence requires interventions to control energy performance, as older buildings were often constructed without taking the necessary envelope performance into account. For this reason, every year the Master's course studies groups of public schools to enable the Municipality to carry out energy retrofit interventions on the existing building stock. The future challenges of the education and training program relate to follow-up activities, the development of interactive tools and the customization of the curriculum to meet the constantly growing needs of energy experts from industry.

Keywords: expert in energy, energy auditing, public buildings, thermal analysis

Procedia PDF Downloads 189
1967 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines

Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz

Abstract:

Steel tubular towers serving as support structures for large wind turbines are subject to several hundred million stress cycles arising from the turbulent nature of the wind. This causes high-cycle fatigue, which can govern tower design. The practice of maintaining the support structure after wind turbines reach their typical 20-year design life has become common, but without quantifying the changes in the reliability of the tower. There are several studies on this topic, but most of them are based on the S-N curve approach using Miner's rule damage summation, the de facto standard in the wind industry. However, the qualitative nature of Miner's method makes it desirable to use fracture mechanics to measure the effects of fatigue on the capacity curve of the structure, which is important for evaluating the integrity and reliability of these towers. Temporally and spatially varying wind speed time histories are simulated based on power spectral density and coherence functions. The simulations are then applied to a SAP2000 finite element model, and step-by-step analysis is used to obtain the stress time histories for a range of representative wind speeds expected during service conditions of the wind turbine. The rainflow counting method is then used to obtain cycle and stress-range information from each of these time histories, and a statistical analysis is performed to obtain the distribution parameters of each variable. Monte Carlo simulation is used to evaluate crack growth over time at the tower base using the Paris-Erdogan equation. A nonlinear static pushover analysis is performed to assess the capacity curve of the structure after a number of years. The capacity curves are then used to evaluate the changes in reliability of a steel tower located in Oaxaca, Mexico, where wind energy facilities are expected to grow in the near future. Results show that fatigue at the tower base can have significant effects on the structural capacity of the wind turbine, especially after the 20-year design life, when the crack growth curve starts to behave exponentially.
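The sketch below illustrates, with entirely assumed parameters rather than the paper's values, the kind of Monte Carlo crack-growth calculation described above: stress ranges standing in for a rainflow histogram are sampled and propagated through the Paris-Erdogan law until a critical crack size is reached.

# Minimal sketch (all parameters assumed): Monte Carlo crack growth at the tower base
# using the Paris-Erdogan law  da/dN = C * (dK)^m,  with dK = Y * dS * sqrt(pi * a).
import numpy as np

rng = np.random.default_rng(1)
C, m, Y = 1e-12, 3.0, 1.12          # Paris constants and geometry factor (assumed)
a0, a_crit = 1e-3, 0.05             # initial and critical crack sizes [m] (assumed)
cycles_per_year = 5e6               # representative stress cycles per year (assumed)
block = 1000                        # each sampled stress range stands for 1000 cycles

def crack_size_after(years: int) -> float:
    a = a0
    for _ in range(years):
        # lognormal stress ranges [MPa] standing in for the rainflow histogram
        ds = rng.lognormal(mean=3.5, sigma=0.4, size=int(cycles_per_year / block))
        dK = Y * ds * np.sqrt(np.pi * a)          # stress intensity ranges [MPa*sqrt(m)]
        a += np.sum(C * dK ** m) * block          # crack size updated once per year
        if a >= a_crit:
            break
    return a

sizes = np.array([crack_size_after(25) for _ in range(200)])
print(f"mean crack size after 25 yr: {sizes.mean() * 1e3:.2f} mm, "
      f"P(a >= a_crit) ≈ {np.mean(sizes >= a_crit):.2f}")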

Keywords: crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines

Procedia PDF Downloads 517
1966 Investigating the Indoor Air Quality of the Respiratory Care Wards

Authors: Yu-Wen Lin, Chin-Sheng Tang, Wan-Yi Chen

Abstract:

Various biological specimens, drugs, and chemicals are present in a hospital, and medical staff and hypersensitive inpatients may be exposed to multiple hazards while they work or stay there. The indoor air quality (IAQ) of the hospital should therefore receive greater attention. Respiratory care wards (RCW) care for patients who cannot breathe spontaneously without ventilators, and these patients are particularly susceptible to infection. Compared with other hospital units, RCWs have shown higher bacteria concentrations in other studies. This research monitored the IAQ of an RCW and checked compliance with the indoor air quality standards of Taiwan's Indoor Air Quality Act. The influential factors of IAQ and the impacts of ventilator modules, with humidifier or with filter, were also investigated. The IAQ of two five-bed wards and one nurse station of an RCW in a regional hospital was monitored. Monitoring proceeded for 16 or 24 hours on each sampling day, with a sampling frequency of 20 minutes per hour; it was performed on two consecutive days at a time, and the IAQ of the RCW was measured on eight days in total. The concentrations of carbon dioxide (CO₂), carbon monoxide (CO), particulate matter (PM), nitrogen oxides (NOₓ), total volatile organic compounds (TVOCs), relative humidity (RH) and temperature were measured by direct-reading instruments. Bioaerosol samples were taken hourly. The hourly air change rate (ACH) was calculated by measuring the air ventilation volume, and human activities were recorded during the sampling period. A linear mixed model (LMM) was applied to identify the factors influencing IAQ. The concentrations of CO, CO₂, PM, bacteria and fungi exceeded the Taiwan IAQ standards. The major factors affecting the concentrations of CO, PM₁ and PM₂.₅ were location and the number of inpatients. The significant factors altering the CO₂ and TVOC concentrations were location and the numbers of in-and-out staff and inpatients. The number of in-and-out staff and the level of activity statistically affected the PM₁₀ concentrations, while the level of activity and the numbers of in-and-out staff and inpatients were the significant factors changing the bacteria and fungi concentrations. Different models of the patients' ventilators did not affect the IAQ significantly. The results of the LMM can be used to predict pollutant concentrations under various environmental conditions, and this study provides a valuable reference for the air quality management of RCWs.
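A minimal sketch of such a linear mixed model is shown below on synthetic records, with a random intercept for sampling day and location and occupancy variables as fixed effects; all column names and values are hypothetical and are not the measured RCW data.

# Minimal sketch (synthetic data, hypothetical column names): linear mixed model
# with a random intercept for sampling day and fixed effects for location,
# in-and-out staff, inpatients and activity level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "day": rng.integers(1, 9, n),                                     # 8 monitoring days
    "location": rng.choice(["ward_A", "ward_B", "nurse_station"], n),
    "n_staff": rng.poisson(3, n),                                     # in-and-out staff per interval
    "n_inpatients": rng.integers(3, 6, n),
    "activity": rng.integers(0, 3, n),                                # coded activity level
})
loc_effect = df["location"].map({"ward_A": 60, "ward_B": 90, "nurse_station": 20})
df["co2_ppm"] = (600 + loc_effect + 15 * df["n_staff"] + 25 * df["n_inpatients"]
                 + 10 * df["activity"] + rng.normal(0, 40, n))

mlm = smf.mixedlm("co2_ppm ~ location + n_staff + n_inpatients + activity",
                  data=df, groups="day").fit()
print(mlm.summary())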

Keywords: respiratory care ward, indoor air quality, linear mixed model, bioaerosol

Procedia PDF Downloads 107
1965 Towards a Sustainable High Population Density Urban Intertextuality – Program Re-Configuration Integrated Urban Design Study in Hangzhou, China

Authors: Xuan Li, Lei Xu

Abstract:

By the end of 2014, China had an urban population of 749 million, corresponding to an urbanization rate of 54.77%. A dense and vertical urban structure has become a common choice for China and for most densely populated Asian countries pursuing sustainable development. This paper focuses on the most conspicuous period of urban change in China, from 2000 to 2010, during which China's population shifted fastest from rural regions to cities. On one hand, the 200 million 'new citizens' nationwide, along with 456 million 'old citizens', explored the new-century city in search of new urban lifestyles and a livable built environment; on the other hand, large-scale rapid urban construction remained confined to the methods of traditional two-dimensional architectural thinking. Human-oriented design and systems thinking have been missing from this intricate postmodern urban condition. This phenomenon, especially the gap and spark between the huge, solid urban physical system and the rich, subtle everyday urban life, is studied in depth: how a 20th-century high-rise residential building 'spontaneously' turned into an old but expensive multi-functional high-rise complex in the 21st-century city center, and how new 21st-century and old late-20th-century public buildings with the same function integrated their different architectural forms into the new/old city center. Finally, the paper studies cases in Hangzhou: 1) function evolution – the downtown high-rise residential buildings 'International Garden' and 'Zhongshan Garden' (1999); 2) form comparison – Hangzhou Theater (1998) vs Hangzhou Grand Theatre (2004), and Hangzhou City Railway Station (1999) vs Hangzhou East Railway Station (2013). The research aims to explore the essence of the city through the approach of building form dispel and urban program re-configuration, to gain a better consideration of human behavior through compact urban design for improving urban intertextuality, and to search for a sustainable development path at the crucial time of urban population explosion in China.

Keywords: architecture form dispel, compact urban design, urban intertextuality, urban program re-configuration

Procedia PDF Downloads 498
1964 Synthesis of Multi-Functional Iron Oxide Nanoparticles for Targeted Drug Delivery in Cancer Treatment

Authors: Masome Moeni, Roya Abedizadeh, Elham Aram, Hamid Sadeghi-Abandansari, Davood Sabour, Robert Menzel, Ali Hassanpour

Abstract:

A significant number of studies and preclinical research efforts in the formulation of cancer nano-pharmaceutics have led to improvements in cancer care. Nonetheless, antineoplastic agents have 'failed to live up to their promise', since their clinical performance remains moderately low. For almost ninety years, iron oxide nanoparticles (IONPs) have maintained their reputation in clinical application owing to their low toxicity, versatility and multi-modal capabilities. The Food and Drug Administration has approved the use of IONPs for the diagnosis of cancer as contrast media in magnetic resonance imaging, as heat mediators in magnetic hyperthermia, and for the treatment of iron deficiency. Furthermore, IONPs have a high drug-loading capacity, which makes them good candidates as carriers of therapeutic agents. Challenges remain for successful clinical application of IONPs, including drug stability and poor delivery, which can lead to (i) drug resistance, (ii) shorter blood circulation time, and (iii) rapid elimination from the system and adverse side effects. In this study, highly stable, superparamagnetic IONPs were prepared for efficient and targeted drug delivery in cancer treatment. Briefly, the synthesis procedure involved the production of IONPs via co-precipitation, followed by coating with tetraethyl orthosilicate and 3-aminopropylethoxysilane and grafting with folic acid for stability, targeting and controlled drug release. The physicochemical and morphological properties of the modified IONPs were characterised using different analytical techniques. The resultant IONPs comprised clusters of 10 nm spherical crystals with an overall size below 100 nm, suitable for drug delivery. The functionalized IONPs showed mesoporous features, high stability, dispersibility and crystallinity. Subsequently, the functionalized IONPs were successfully loaded with oxaliplatin, a chemotherapeutic agent, for controlled drug release targeting cancer cells. FT-IR observations confirmed the presence of oxaliplatin functional groups, while ICP-MS results verified a drug loading of ~1.3%.

Keywords: cancer treatment, chemotherapeutic agent, drug delivery, iron oxide, multi-functional nanoparticle

Procedia PDF Downloads 83
1963 Spectrophotometric Detection of Histidine Using Enzyme Reaction and Examination of Reaction Conditions

Authors: Akimitsu Kugimiya, Kouhei Iwato, Toru Saito, Jiro Kohda, Yasuhisa Nakano, Yu Takano

Abstract:

The measurement of amino acid content is reported to be useful for the diagnosis of several types of diseases, including lung cancer, gastric cancer, colorectal cancer, breast cancer, prostate cancer, and diabetes. The conventional detection methods for amino acids are high-performance liquid chromatography (HPLC) and liquid chromatography-mass spectrometry (LC-MS), but these have several drawbacks: the equipment is cumbersome and the techniques are costly in terms of both time and money. In contrast, biosensors and biosensing methods provide more rapid and facile detection strategies using simple equipment. The authors have previously reported a novel approach for the detection of individual amino acids that uses aminoacyl-tRNA synthetase (aaRS) as the molecular recognition element, because each aaRS is expected to bind selectively to its corresponding amino acid. The consecutive enzymatic reactions used in this study are as follows: the aaRS binds its cognate amino acid and releases inorganic pyrophosphate; hydrogen peroxide (H₂O₂) is then produced by the enzymatic reactions of inorganic pyrophosphatase and pyruvate oxidase. Trinder's reagent is added to the reaction mixture, and the absorbance change at 556 nm is measured using a microplate reader. In this study, an amino acid-sensing method using histidyl-tRNA synthetase (HisRS; the histidine-specific aaRS) as the molecular recognition element, in combination with the Trinder's reagent spectrophotometric method, was developed. The quantitative performance and selectivity of the method were evaluated, and the optimal enzyme reaction and detection conditions were determined. The authors thus developed a simple and rapid method for detecting histidine by combining an enzymatic reaction with spectrophotometric detection; the reaction and detection conditions were optimized for the quantitation of histidine in the range of 1–100 µM. The detection limit is sufficient to analyze this amino acid in biological fluids. This work was partly supported by a Hiroshima City University Grant for Special Academic Research (General Studies).
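As a simple numerical illustration of the quantitation step, the sketch below fits a linear calibration of absorbance at 556 nm against histidine concentration and inverts it for an unknown sample; the absorbance values are made up and are not the measured data.

# Minimal sketch (hypothetical readings): linear calibration of A556 vs histidine
# concentration over the 1-100 uM working range, then estimation of an unknown.
import numpy as np

conc_uM = np.array([1, 5, 10, 25, 50, 100], dtype=float)        # standards
a556    = np.array([0.012, 0.055, 0.108, 0.262, 0.515, 1.020])  # hypothetical absorbances

slope, intercept = np.polyfit(conc_uM, a556, 1)
r2 = np.corrcoef(conc_uM, a556)[0, 1] ** 2
print(f"A556 = {slope:.4f}*[His] + {intercept:.4f}  (R^2 = {r2:.4f})")

unknown_abs = 0.340
print(f"estimated [His] = {(unknown_abs - intercept) / slope:.1f} uM")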

Keywords: amino acid, aminoacyl-tRNA synthetase, biosensing, enzyme reaction

Procedia PDF Downloads 285
1962 Machine Translation Analysis of Chinese Dish Names

Authors: Xinyu Zhang, Olga Torres-Hostench

Abstract:

This article presents a comparative study evaluating the quality of machine translation (MT) output for Chinese gastronomic nomenclature. Chinese gastronomic culture is experiencing increasing international recognition. The nomenclature of Chinese gastronomy not only reflects a specific aspect of culture but is also related to other areas of society such as philosophy and traditional medicine. Chinese dish names are composed of several types of cultural references, such as ingredients, colors, flavors, culinary techniques, cooking utensils, toponyms, anthroponyms, metaphors, and historical tales, among others. These cultural references constitute one of the biggest difficulties in translation, usually requiring the use of translation techniques. Given the lack of Chinese food-related translation studies, especially in Chinese-Spanish translation, and the current massive use of MT, the quality of MT output for Chinese dish names is open to question. Fifty Chinese dish names with different types of cultural components were selected for this study. First, all of these dish names were translated by three different MT tools (Google Translate, Baidu Translate and Bing Translator). Second, a questionnaire was designed and completed by 12 Chinese online users (Chinese graduates of a Hispanic Philology major) in order to find out user preferences regarding the collected MT output. Finally, human translation techniques were observed and analyzed to identify which techniques appear more often in the preferred MT proposals. The results reveal that the MT output for Chinese gastronomic nomenclature is not of high quality. It is not recommended to trust MT on occasions such as restaurant menus or TV culinary shows; however, the MT output could serve as an aid for tourists to get a general idea of a dish (the main ingredients, for example). Literal translation turned out to be the most frequently observed technique, followed by borrowing, generalization and adaptation, while amplification, particularization and transposition were infrequently observed. This is possibly because current MT engines are limited to relating equivalent terms and offering literal translations without taking into account the whole contextual meaning of the dish name, which is essential to the application of the less frequently observed techniques. These findings could inform the post-editing of Chinese dish name translations: by observing and analyzing the translation techniques in the machine translators' proposals, post-editors can better decide which techniques to apply in each case so as to correct mistakes and improve translation quality.
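A toy sketch of the aggregation step is shown below: rater preferences across the three engines are tallied per dish, and the translation techniques annotated for the preferred outputs are then counted; all records and labels are invented for illustration and do not reproduce the study data.

# Minimal sketch (toy records): tally preferred engine per dish, then count the
# translation techniques annotated for the preferred MT outputs.
import pandas as pd

votes = pd.DataFrame(
    [(1, "r01", "Google"), (1, "r02", "Baidu"), (1, "r03", "Google"),
     (2, "r01", "Bing"), (2, "r02", "Bing"), (2, "r03", "Baidu")],
    columns=["dish_id", "rater", "preferred_engine"])

techniques = pd.DataFrame(
    [(1, "Google", "literal"), (1, "Baidu", "borrowing"), (1, "Bing", "generalization"),
     (2, "Google", "adaptation"), (2, "Baidu", "literal"), (2, "Bing", "literal")],
    columns=["dish_id", "engine", "technique"])

counts = votes.groupby(["dish_id", "preferred_engine"]).size().reset_index(name="n")
winners = counts.loc[counts.groupby("dish_id")["n"].idxmax(), ["dish_id", "preferred_engine"]]
winners = winners.rename(columns={"preferred_engine": "engine"})
merged = winners.merge(techniques, on=["dish_id", "engine"])
print(merged["technique"].value_counts())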

Keywords: Chinese dish names, cultural references, machine translation, translation techniques

Procedia PDF Downloads 137
1961 Electron Density Discrepancy Analysis of Energy Metabolism Coenzymes

Authors: Alan Luo, Hunter N. B. Moseley

Abstract:

Many macromolecular structure entries in the Protein Data Bank (PDB) have a range of regional (localized) quality issues, whether derived from x-ray crystallography, nuclear magnetic resonance (NMR) spectroscopy, or other experimental approaches. However, most PDB entries are judged by global quality metrics such as R-factor, R-free, and resolution for x-ray crystallography, or backbone phi-psi distribution statistics and average restraint violations for NMR. Regional quality is often ignored when PDB entries are re-used for a variety of structure-based analyses. The binding of ligands, especially ligands involved in energy metabolism, is of particular interest in many structurally focused protein studies. Using a regional quality metric that provides chemically interpretable information from electron density maps, a significant number of outliers in regional structural quality were detected across x-ray crystallographic PDB entries for proteins bound to biochemically critical ligands. In this study, a series of analyses was performed to evaluate both specific and general factors that could promote these outliers, in particular the minimum distance to a metal ion, the minimum distance to a crystal contact, and the isotropic atomic b-factor. To evaluate these potential factors, Fisher's exact tests were performed, crossing a regional quality criterion of outlier (top 1%, 2.5%, 5%, or 10%) versus non-outlier against a potential factor metric above versus below a given cutoff. The results revealed a consistent general effect from region-specific normalized b-factors, but no specific effect from metal ion contact distances and only a very weak effect from crystal contact distance compared with the b-factor results. These findings indicate that no single specific factor explains a majority of the outlier ligand-bound regions, implying that human error is likely as important as these other factors. Thus, all factors, including human error, should be considered when regions of low structural quality are detected. Moreover, the downstream re-use of protein structures for studying ligand-bound conformations should screen the regional quality of the binding sites; doing so prevents misinterpretation due to structural uncertainty or flaws in regions of interest.
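The 2x2 layout of one such test is sketched below with illustrative counts rather than the PDB-derived values, crossing regional-quality outlier status against a potential factor dichotomized at a cutoff.

# Minimal sketch (illustrative counts): Fisher's exact test on a 2x2 table of
# regional-quality outlier status vs a factor (e.g. normalized b-factor) above/below a cutoff.
from scipy.stats import fisher_exact

#                      factor >= cutoff   factor < cutoff
table = [[ 180,               60],    # regional-quality outlier (e.g. top 5%)
         [1400,             2360]]    # non-outlier
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")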

Keywords: biomacromolecular structure, coenzyme, electron density discrepancy analysis, x-ray crystallography

Procedia PDF Downloads 130