Search results for: protein structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9800

860 Cultural Heritage Resources for Tourism, Two Countries – Two Approaches: A Comparative Analysis of Cultural Tourism Products in Turkey and Austria

Authors: Irfan Arikan, George Christian Steckenbauer

Abstract:

Turkey and Austria are examples of highly developed tourism destinations, where providers use cultural heritage and regional natural resources to develop modern tourism products in order to succeed in increasingly competitive international tourism markets. On the one hand, the use and exploitation of these resources follow international standards of tourism marketing (such as ‘sustainability’), so highly comparable internationalized products are found in both destinations (hotel products, museums, spas, etc.). On the other hand, development standards and processes depend strongly on local, regional and national cultures, which shape how people work, cooperate, think and create. Cultural factors thus also influence attitudes towards cultural heritage and natural resources and the way these resources are used in the creation of tourism products. This leads to differences in the development of tourism products on several levels: 1. in the selection of cultural heritage and natural resources for the product development process; 2. in the processes by which tourism products are created; 3. in the way providers and marketing organisations work with tourism products based on cultural heritage or natural resources. The aim of this paper is to identify differences in these dimensions by analysing and comparing examples of tourism products in Turkey and Austria, two countries with highly developed, highly professional tourism industries and stakeholders with rich experience in product development and marketing. The cases are selected from the following fields: cultural/heritage tourism, city tourism, industrial heritage tourism, nature and outdoor tourism, and health tourism. The cases are analysed on the basis of available secondary data (several cases are described in the scientific literature) and expert interviews with local and regional tourism industry stakeholders and tourism experts.
The available primary and secondary data are analysed and displayed in a comparative structure that allows answers to the research question stated above to be derived. The result of the project is therefore a more precise picture of the influence of cultural differences on the use and exploitation of resources in tourism, from which recommendations can be developed for the tourism industry on treating cultural and natural resources in a sustainable and responsible way. The authors will edit these cross-cultural recommendations in the form of a ‘check-list’ that can serve as a guideline for tourism professionals in product development and marketing, thereby connecting theoretical research to practical application and closing the gap between academic research and tourism practice.

Keywords: cultural heritage, natural resources, Austria, Turkey

Procedia PDF Downloads 488
859 Examining the Links between Fish Behaviour and Physiology for Resilience in the Anthropocene

Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts

Abstract:

Changes in behaviour and physiology are among the most important responses of marine life to anthropogenic impacts such as climate change and over-fishing. Behavioural changes (such as a shift in distribution or changes in phenology) can ensure that a species remains in an environment suited to its optimal physiological performance. If species are unable to shift their distributions, however, they must rely on physiological adaptation, either broadening their metabolic curves to tolerate a range of stressors or shifting their metabolic curves to maximize performance at extreme stressors. Since fish physiology and behaviour are linked, changes to either trait may have reciprocal effects. This paper reviews the current knowledge of the links between the behaviour and physiology of fishes, discusses these in the context of exploitation and climate change, and makes recommendations for future research needs. The review revealed that our understanding of the links between fish behaviour and physiology is rudimentary; however, both are hypothesized to be linked to stress responses along the hypothalamic-pituitary axis. The link between physiological capacity and behaviour is particularly important, as both determine the response of an individual to a changing climate and both are under selection by fisheries. While all types of capture fisheries are likely to reduce the adaptive potential of fished populations to climate stressors, angling, which is primarily associated with recreational fishing, may induce fission of natural populations by removing individuals with bold behavioural traits, and potentially the physiological traits required to facilitate behavioural change. Future research should focus on assessing how the links between physiological capacity and behaviour influence catchability, the response to climate change drivers, and post-release recovery.
The plasticity of phenotypic traits should be examined under a range of stressors of differing intensity, in several species and life history stages. Future studies should also assess plasticity (fission or fusion) in the phenotypic structuring of social hierarchies and how this influences habitat selection. Ultimately, to fully understand how physiology is influenced by the selective processes driven by fisheries, long-term monitoring of the physiological and behavioural structure of fished populations, their fitness, and catch rates is required.

Keywords: climate change, metabolic shifts, over-fishing, phenotypic plasticity, stress response

Procedia PDF Downloads 116
858 Technical and Economic Potential of Partial Electrification of Railway Lines

Authors: Rafael Martins Manzano Silva, Jean-Francois Tremong

Abstract:

Electrification of railway lines increases the speed, power, capacity and energy efficiency of rolling stock. However, the electrification process is complex and costly. An electrification project is not just about the design of the catenary: it also includes the installation of the structures around the electrification, such as substations, electrical isolation, signalling, telecommunications and civil engineering structures. France has more than 30,000 km of railways, of which only 53% are electrified. The other 47% of the network is served by diesel locomotives and carries only 10% of the traffic (tonne-km). For this reason, a new type of electrification, less expensive than the usual one, is needed to enable the modernization of these railways. One solution is the use of hybrid trains. This technology opens up opportunities for less expensive infrastructure development, such as the partial electrification of railway lines. On a partially electrified railway, these hybrid trains can be powered either by the catenary or by an on-board energy storage system (ESS): the on-board ESS feeds the train along the non-electrified zones, while in electrified zones the catenary both feeds the train and recharges the on-board ESS. The objective of this paper is to identify the technical and economic potential of the partial electrification of railway lines. The study constructs different electrification scenarios in which the most expensive places to electrify are instead covered by the on-board ESS. The target is to reduce the cost of new electrification projects, i.e. to reduce the cost of electrification infrastructure without increasing the cost of the rolling stock. In this study, scenarios are constructed as a function of the electrification cost of each structure.
The electrification cost varies considerably because installing catenary supports in tunnels, bridges and viaducts is much more expensive than in other zones of the railway. These scenarios are used to describe the power supply system and to choose between the catenary and the on-board energy storage depending on the position of the train on the line. To identify the influence of each partial electrification scenario on the sizing of the on-board ESS, a model of the railway line and of the rolling stock is developed for a real case: a railway line located in the south of France. The energy consumption and the power demanded at each point of the line, for each power supply (catenary or on-board ESS), are provided at the end of the simulation. Finally, the cost of a partial electrification is obtained by adding the civil engineering costs of the zones to be electrified to the cost of the on-board ESS. The study concludes with the identification of the most economically attractive electrification scenario.
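The trade-off the abstract describes, wiring a zone versus covering it with the on-board ESS, can be sketched as a simple cost comparison. All zone names, costs and the ESS price below are invented for illustration; they are not figures from the study.

```python
# Illustrative comparison of partial-electrification scenarios: each zone
# is either wired (catenary + civil works) or covered by the on-board ESS.
# All figures are invented for illustration, not taken from the study.

ZONES = [
    # (name, catenary cost in M EUR, energy drawn from ESS in kWh if unwired)
    ("open track A", 4.0, 900),
    ("tunnel",       9.5, 250),
    ("viaduct",      7.0, 150),
    ("open track B", 3.5, 800),
]
ESS_COST_PER_KWH = 0.002  # M EUR per kWh of on-board storage (assumed)

def scenario_cost(unwired: set[str]) -> float:
    """Total cost: catenary for wired zones plus on-board storage sized
    for the energy demanded in the unwired zones (simplified to a sum)."""
    catenary = sum(cost for name, cost, _ in ZONES if name not in unwired)
    ess_kwh = sum(energy for name, _, energy in ZONES if name in unwired)
    return catenary + ess_kwh * ESS_COST_PER_KWH

# Skipping the expensive tunnel and viaduct saves civil-works cost
# at the price of a larger on-board battery.
full = scenario_cost(set())
partial = scenario_cost({"tunnel", "viaduct"})
print(f"full electrification: {full:.2f} M EUR")
print(f"partial (ESS in tunnel/viaduct): {partial:.2f} M EUR")
```

In the real study the ESS must be sized for the worst contiguous unwired stretch and for power peaks, which a line and rolling-stock simulation provides; the sketch only captures the scenario-comparison logic.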

Keywords: electrification, hybrid, railway, storage

Procedia PDF Downloads 426
857 Synthesis, Physicochemical Characterization and Study of the Antimicrobial Activity of Chlorobutanol

Authors: N. Hadhoum, B. Guerfi, T. M. Sider, Z. Yassa, T. Djerboua, M. Boursouti, M. Mamou, F. Z. Hadjadj Aoul, L. R. Mekacher

Abstract:

Introduction and objectives: Chlorobutanol is a raw material used mainly as an antiseptic and as an antimicrobial preservative in injectable and ophthalmic preparations. The main objective of our study was the synthesis and evaluation of the antimicrobial activity of chlorobutanol hemihydrate. Material and methods: Chlorobutanol was synthesized by the nucleophilic addition of chloroform to acetone and identified by infrared absorption using a Spectrum One FTIR spectrometer, melting point, scanning electron microscopy and colorimetric reactions. The chlorobutanol content was determined by assaying its degradation products in basic solution. The chlorobutanol obtained was subjected to bacteriological tests in order to study its antimicrobial activity. The antibacterial activity was evaluated against strains such as Escherichia coli (ATCC 25922), Staphylococcus aureus (ATCC 25923) and Pseudomonas aeruginosa (ATCC = American Type Culture Collection). The antifungal activity was evaluated against human pathogenic fungal strains, Candida albicans and Aspergillus niger, provided by the parasitology laboratory of the Hospital of Tizi-Ouzou, Algeria. Results and discussion: Chlorobutanol was obtained in an acceptable yield. The characterization tests of the product showed a white, crystalline appearance (confirmed by scanning electron microscopy), solubilities (in water, ethanol and glycerol) and a melting temperature in accordance with the requirements of the European Pharmacopoeia. The colorimetric reactions indicated the presence of a trihalogenated carbon and an alcohol function. The spectral identification (IR) showed the characteristic chlorobutanol peaks and confirmed its structure. The microbiological study revealed an antimicrobial effect on all strains tested (Staphylococcus aureus (MIC = 1250 µg/ml), E. coli (MIC = 1250 µg/ml), Pseudomonas aeruginosa (MIC = 1250 µg/ml), Candida albicans (MIC = 2500 µg/ml), Aspergillus niger (MIC = 2500 µg/ml)), with MIC values close to literature data. Conclusion: Overall, the synthesized chlorobutanol satisfied the requirements of the European Pharmacopoeia and possesses antibacterial and antifungal activity; nevertheless, the purification step should be reinforced in order to eliminate as many impurities as possible.

Keywords: antimicrobial agent, bacterial and fungal strains, chlorobutanol, MIC, minimum inhibitory concentration

Procedia PDF Downloads 167
856 Predicting Photovoltaic Energy Profile of Birzeit University Campus Based on Weather Forecast

Authors: Muhammad Abu-Khaizaran, Ahmad Faza’, Tariq Othman, Yahia Yousef

Abstract:

This paper presents a study to provide sufficient and reliable information for constructing a photovoltaic energy profile of the Birzeit University (BZU) campus based on the weather forecast. The developed photovoltaic energy profile helps to predict the energy yield of photovoltaic systems from the weather forecast and hence supports the planning of energy production and consumption. Two models are developed in this paper: a Clear Sky Irradiance model and a Cloud-Cover Radiation model, to predict the irradiance for a clear-sky day and a cloudy day, respectively. The adopted procedure takes into consideration two levels of abstraction. First, irradiance and weather data were acquired by a sensory (measurement) system installed on the rooftop of the Information Technology College building on the Birzeit University campus. Second, power readings of a fully operational 51 kW commercial photovoltaic system installed on the rooftop of the adjacent College of Pharmacy-Nursing and Health Professions building are used to validate the output of a simulation model and to help refine its structure. Based on a comparison between a mathematical model, which calculates the Clear Sky Irradiance for the University's location, and two sets of accumulated measured data, the simulation system is found to closely match the installed PV power station on clear-sky days. However, these comparisons show a divergence between the expected and actual energy yields in extreme weather conditions, including clouding and soiling effects. Therefore, a more accurate irradiance prediction model was developed that takes into consideration the weather factors affecting irradiance, such as relative humidity and cloudiness: the Cloud-Cover Radiation Model (CRM). Its mathematical formulas implement corrections that provide more accurate inputs to the simulation system.
The results of the CRM show a very good match with the measured irradiance during a cloudy day. The developed photovoltaic profile helps in predicting the output energy yield of the photovoltaic system installed at the University campus based on the predicted weather conditions. The simulation and practical results for both models match very well.
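The abstract does not publish the CRM's formulas, but the kind of cloud-cover correction it describes can be illustrated with the widely used empirical Kasten-Czeplak relation, G = G_clear · (1 − 0.75 · N^3.4), where N is the fractional cloud cover. This is a generic stand-in of the same form, not the authors' model.

```python
def cloud_cover_irradiance(ghi_clear: float, cloud_fraction: float) -> float:
    """Estimate global horizontal irradiance (W/m^2) under clouds.

    Uses the empirical Kasten-Czeplak relation
        G = G_clear * (1 - 0.75 * N**3.4)
    where N is fractional cloud cover in [0, 1]. This is a generic
    stand-in: the paper's own CRM coefficients are not given in the
    abstract.
    """
    if not 0.0 <= cloud_fraction <= 1.0:
        raise ValueError("cloud_fraction must be in [0, 1]")
    return ghi_clear * (1.0 - 0.75 * cloud_fraction ** 3.4)

# A clear sky (N = 0) leaves the clear-sky value unchanged;
# full overcast (N = 1) reduces it by 75%.
print(cloud_cover_irradiance(800.0, 0.0))  # 800.0
print(cloud_cover_irradiance(800.0, 1.0))  # 200.0
```

Feeding such a corrected irradiance into the clear-sky simulation, instead of the uncorrected value, is what lets a CRM-style model track cloudy-day PV output.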

Keywords: clear-sky irradiance model, cloud-cover radiation model, photovoltaic, weather forecast

Procedia PDF Downloads 130
855 Monitoring of Quantitative and Qualitative Changes in Combustible Material in the Białowieża Forest

Authors: Damian Czubak

Abstract:

The Białowieża Forest is a highly valuable natural area, inscribed on the UNESCO World Heritage List, where stands of Norway spruce (Picea abies) have deteriorated as a result of infestation by the bark beetle (Ips typographus). This catastrophic scenario has led to an increase in fire danger, owing to the occurrence of large amounts of dead wood and of grass cover that developed as light penetrated to the bottom of the stands. In a dry state, these materials favour ignition and the rapid spread of fire. One of the objectives of the study was to monitor the quantitative and qualitative changes of combustible material on the permanent decay plots of spruce stands from 2012-2022. In addition, the size of the area covered by highly flammable vegetation was monitored, and the stands of the Białowieża Forest were classified by flammability class. The key factor determining the potential fire hazard of a forest is the combustible material: primarily its type, quantity, moisture content, size and spatial structure. Based on the inventory data for the forest districts in the Białowieża Forest, the average fire load and its changes over the years were calculated. The analysis took into account changes in the health status of the stands and sanitary operations. The quantitative and qualitative assessment of fallen timber and of the fire load of the ground cover used the results of the 2019 and 2021 inventories; approximately 9,000 circular plots were used for the study. The amount of potential fuel, understood as ground-cover vegetation and dead wood debris, was assessed. In addition, areas with vegetation posing a high fire risk were monitored using data from 2019 and 2021. All sub-areas were inventoried in which vegetation posing a specific fire hazard represented at least 10% of the area, with species characteristic of that cover.
Besides the size of the area with fire-prone vegetation, a very important element is the fire load on the indicated plots. On representative plots, the biomass of the ground cover was measured on an area of 10 m², and the amount of biomass of each component was then determined. The final element of the work was a flammability classification of the ground cover in the stands; the classification developed made it possible to track changes in the flammability classes of stands over the period covered by the measurements.

Keywords: classification, combustible material, flammable vegetation, Norway spruce

Procedia PDF Downloads 91
854 Durham Region: How to Achieve Zero Waste in a Municipal Setting

Authors: Mirka Januszkiewicz

Abstract:

The Regional Municipality of Durham is the upper level of a two-tier municipal structure comprising eight lower-tier municipalities. With a population of 655,000 in both urban and rural settings, the Region covers approximately 2,537 square kilometres immediately east of the City of Toronto, Ontario, Canada. The Region has focused on diverting waste from disposal since the development of its Long Term Waste Management Strategy Plan for 2000-2020. With a 54 percent solid waste diversion rate, the focus now is on achieving 70 percent diversion on the path to zero waste, using local waste management options whenever feasible. The Region has an Integrated Waste Management System consisting of weekly curbside collection of recyclable printed paper and packaging and of source-separated organics; seasonal collection of leaf and yard waste; bi-weekly collection of residual garbage; and twice-annual collection of intact, sealed household batteries. The Region also maintains three Waste Management Facilities for residential drop-off of household hazardous waste, polystyrene, construction and demolition debris, and electronics. Special collection events are scheduled in the spring, summer and fall for reusable items, household hazardous waste, and electronics. The Region is in the final commissioning stages of an energy-from-waste facility that will recover energy from non-recyclable residual waste. This state-of-the-art facility is equipped for the future installation of carbon capture technology. Despite all of these diversion programs and efforts, there is still room for improvement: recent residential waste studies revealed that over 50% of the residual waste placed at the curb, destined for incineration, could be recycled. To move towards a zero waste community, the Region is looking to more advanced technologies for extracting the maximum recycling value from residential waste.
Plans are underway to develop a pre-sort facility to remove organics and recyclables from the residual waste stream, including that from the growing multi-residential sector. The organics would then be treated anaerobically to generate biogas and fertilizer products for beneficial use within the Region. This project could increase the Region's diversion rate beyond 70 percent and advance the Region's climate change mitigation goals. Zero waste is an ambitious goal in a changing regulatory and economic environment; decision-makers must be willing to consider new and emerging technologies and embrace change in order to succeed.

Keywords: municipal waste, residential, waste diversion, zero waste

Procedia PDF Downloads 218
853 Modelling Flood Events in Botswana (Palapye) for Protecting Roads Structure against Floods

Authors: Thabo M. Bafitlhile, Adewole Oladele

Abstract:

Botswana has long been affected by floods and still experiences these tragic events. Flooding occurs mostly in the North-West, the North-East, and parts of the Central District, due to the heavy rainfalls experienced in these areas. Torrential rains have destroyed homes, roads, fields, livestock and livelihoods, and have flooded dams. Palapye, an area in the Central District, has experienced floods ever since 1995, when its greatest flood on record occurred. Heavy storms result in floods and inundation, exacerbated by poor, or absent, drainage structures. Since floods are a part of nature, they have always existed and will continue to exist, hence the continuing destruction. Furthermore, floods play a major role in the erosion and destruction of road structures. Already today, many culverts, trenches and other drainage facilities lack the capacity to deal with the current frequency of extreme flows, and future changes in the pattern of hydro-climatic events will have implications for the design and maintenance costs of roads. An increase in rainfall and severe weather events can also increase the demand for emergency responses. Flood forecasting and warning are therefore prerequisites for the successful mitigation of flood damage. In flood-prone areas like Palapye, preventive measures should be taken to reduce the possible adverse effects of floods on the environment, including road structures. This paper therefore attempts to estimate the return periods associated with storms of different magnitudes from recorded historical rainfall depths, using statistical methods. The method of annual maxima was used to select the data sets for the rainfall analysis. The Type 1 extreme value (Gumbel), Log-Normal and Log-Pearson Type III distributions were applied to the annual maximum series for the Palapye area to produce IDF curves.
The Kolmogorov-Smirnov and Chi-Squared tests were used to confirm that the fitted distributions are appropriate for the location and that the data fit the distributions used to predict the expected frequencies. This will be a beneficial tool for flood forecasting and water resource administration: drainage can be designed on the basis of the estimated flood events, helping to protect road structures from the adverse impacts of floods.
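The frequency analysis described above can be sketched in a few lines: fit a Gumbel (Type 1 extreme value) distribution to an annual-maximum rainfall series, check the fit with a Kolmogorov-Smirnov test, and read design depths off the inverse CDF. The rainfall values here are illustrative, not Palapye data.

```python
from scipy import stats

# Illustrative annual-maximum daily rainfall depths (mm), not Palapye data
annual_maxima = [62.0, 85.5, 48.2, 110.3, 74.1, 95.0, 58.6, 130.8, 67.4, 82.9]

# Fit a Gumbel (Type 1 extreme value) distribution by maximum likelihood
loc, scale = stats.gumbel_r.fit(annual_maxima)

# Kolmogorov-Smirnov test: a large p-value gives no evidence against the fit
ks_stat, p_value = stats.kstest(annual_maxima, "gumbel_r", args=(loc, scale))

# Design rainfall depth for return period T (years): the exceedance
# probability is 1/T, so the depth is the inverse CDF at (1 - 1/T).
for T in (10, 50, 100):
    depth = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T:>3}-year event: {depth:.1f} mm")

print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")
```

The same pattern applies to the Log-Normal and Log-Pearson Type III fits (e.g. `stats.lognorm`, `stats.pearson3`), and depths for a set of durations yield the IDF curves.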

Keywords: drainage, estimate, evaluation, floods, flood forecasting

Procedia PDF Downloads 370
852 Factors Controlling Marine Shale Porosity: A Case Study between Lower Cambrian and Lower Silurian of Upper Yangtze Area, South China

Authors: Xin Li, Zhenxue Jiang, Zhuo Li

Abstract:

Generally, shale gas is trapped within shale systems of low porosity and ultralow permeability, in free and adsorbed states. Its production is controlled by properties such as occurrence phase, gas content and percolation characteristics, all of which are influenced by pore features. In this paper, the porosity differences between the Lower Cambrian and Lower Silurian marine shales of the Sichuan Basin, South China, are explored. Both are marine shales with abundant oil-prone kerogen and rich siliceous minerals, but the Lower Cambrian shale (3.56% Ro) has reached a higher thermal degree than the Lower Silurian shale (2.31% Ro). Samples were characterized by a combination of organic-geochemical measurements, organic matter (OM) isolation, X-ray diffraction (XRD), N2 adsorption, and focused ion beam milling with scanning electron microscopy (FIB-SEM). The Lower Cambrian shale presented relatively low pore properties, averaging 0.008 ml/g pore volume (PV), 7.99 m²/g pore surface area (PSA) and 5.94 nm average pore diameter (APD). The Lower Silurian shale showed relatively high pore properties, averaging 0.015 ml/g PV, 10.53 m²/g PSA and 18.60 nm APD. Additionally, fractal analysis indicated that the two shales present different pore morphologies, caused mainly by differences in their combinations of pore types. More specifically, OM-hosted pores with a pin-hole shape and dissolved pores with dead-end openings are the main types in the Lower Cambrian shale, while OM-hosted pores with a cellular structure are the main type in the Lower Silurian shale. Moreover, the pore characteristics of the isolated OM suggest that the OM of the Lower Silurian shale contributes more pore space than that of the Lower Cambrian shale.
The PV of the isolated OM in the Lower Silurian shale is almost 6.6 times that in the Lower Cambrian shale, and its PSA is almost 4.3 times as large. However, no apparent differences existed among samples with various matrix compositions: at the late diagenetic or metamorphic stage, extensive diagenesis overprints the effects of minerals on pore properties, and OM plays the dominant role in pore development. Hence, the differences in pore features between the two marine shales highlight the effect of diagenetic degree on the development of OM-hosted pores. Distinctive pore characteristics may therefore be caused by different degrees of diagenetic evolution, even with similar matrix compositions.

Keywords: marine shale, Lower Cambrian, Lower Silurian, OM isolation, pore properties, OM-hosted pore

Procedia PDF Downloads 132
851 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System

Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich

Abstract:

The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. Because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, his/her situation awareness can suffer when manual control is required, and driving skills and abilities can decline. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which can serve as a theoretical basis for mathematical and simulation models exploring driver behaviour in different road situations. Well-known driver behaviour models describe the impact of the different stages of the driver's cognitive process on driving performance, but do not describe how the driver monitors and adjusts his/her actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, makes it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, a theoretical framework for describing internal processes in purposeful living systems in terms of notions such as the goal and the desired and actual results of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and the changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the parameters of the driver's action results from their expected values.
The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving, for future improvements in driving safety and for understanding how the driver-vehicle interface must be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.

Keywords: automated vehicle, driver behavior, human factors, human-machine system

Procedia PDF Downloads 144
850 A Professional Learning Model for Schools Based on School-University Research Partnering That Is Underpinned and Structured by a Micro-Credentialing Regime

Authors: David Lynch, Jake Madden

Abstract:

There exists a body of literature reporting the many benefits of partnerships between universities and schools, especially for teaching improvement and school reform. Such partnerships can build significant teaching capital by deepening and expanding the skillsets and mindsets needed to create the connections that support ongoing, embedded teacher professional development and career goals. At the same time, this literature is critical of such initiatives when the partnership outcomes are short-term or one-sided, misaligned with fundamental problems, and not expressly focused on building the desired teaching capabilities. In response to this situation, research conducted by Professor David Lynch and his TeachLab research team has begun to shed light on the strengths and limitations of school/university partnerships, via the identification of key conceptual elements that appear to act as critical partnership success factors. These elements are theorised as an interplay between professional knowledge acquisition, readiness, talent management and organisational structure. However, knowledge of how these elements are established, and how they manifest within the school and its teaching workforce as an overall system, remains incomplete; research designed to delineate these elements more clearly in relation to their impact on school/university partnerships is thus required. It is within this context that this paper reports on the development and testing of a Professional Learning (PL) model for schools and their teachers that incorporates school-university research partnering within a systematic, whole-of-school PL strategy underpinned and structured by a micro-credentialing (MC) regime.
MC involves earning a narrowly focused certificate (a micro-credential) in a specific topic area (e.g., 'How to Differentiate Instruction for English as a Second Language Students'), embedded in the teacher's day-to-day teaching work. The use of MC is viewed as important to the efficacy and sustainability of teacher PL because it (1) provides an evidence-based framework for teacher learning, (2) can promote teacher social capital, and (3) engenders lifelong learning, keeping professional skills current in a manner embedded in, and seamless with, everyday work. The associated research is centred on a primary school (P-6) in Australia that acted as an arena to co-develop, test and report on outcomes for teacher PL that uses MC to support a whole-of-school partnership with a university.

Keywords: teaching improvement, teacher professional learning, talent management, education partnerships, school-university research

Procedia PDF Downloads 80
849 Attitudes Towards the Supernatural in Benjamin Britten’s The Turn of the Screw

Authors: Yaou Zhang

Abstract:

Background: Relatively little scholarly attention has been paid to the production of Benjamin Britten’s chamber opera The Turn of the Screw. As one of Britten’s most remarkable operas. The story of the libretto was from Henry James’s novella of the same name. The novella was created in 1898 and one of the primary questions addressed to people in the story is “how real the ghosts are,” which leads the story to a huge ambiguity in readers’ minds. Aims: This research focuses on the experience of seeing the opera on stage over several decades. This study of opera productions over time not only provides insight into how stage performances can alter audience members' perceptions of the opera in the present but also reveals a landscape of shifting aesthetics and receptions. Methods: To examine the hypotheses in interpretation and reception, the qualitative analysis is used to examine the figures of ghosts in different productions across the time from 1954 to 2021 in the UK: by accessing recordings, newspapers, and reviews for the productions that are sourced from online and physical archives. For instance, the field research is conducted on the topic by arranging interviews with the creative team and visiting Opera North in Leeds and Britten-Pears Foundation. The collected data reveals the “hidden identity” in creative teams’ interpretations, social preferences, and rediscover that have previously remained unseen. Results: This research presents an angle of Britten’s Screw by using the third position; it shows how the attention moved from the stage of “do the ghosts really exist” to “traumatised children.” Discussion: Critics and audiences have debated whether the governess hallucinates the ghosts in the opera for decades. 
In recent years, however, directors of new productions have allowed themselves to go deeper into Britten’s musical structure and to give the opera more space to be interpreted, rather than debating whether the ghosts actually exist or the governess has psychological problems. One can consider the questionable actions of the children as the result of trauma, whether that trauma comes from the ghosts, the hallucinating governess, or some prior experience: the various interpretations converge on one outcome, that the children are the recipients of trauma. Arguably, the role of the supernatural is neither simply an element of a ghost story nor simply part of the ambiguity between the supernatural and the governess’s hallucination; rather, the ghosts and the hallucinating governess can exist at the same time. The combination of the supernatural’s and the governess’s behaviours on stage generates a sharper and more serious angle that draws our attention to the traumatized children.

Keywords: benjamin britten, chamber opera, production, reception, staging, the turn of the screw

Procedia PDF Downloads 107
848 Assessment of Drinking Water Contamination from the Water Source to the Consumer in Palapye Region, Botswana

Authors: Tshegofatso Galekgathege

Abstract:

Poor water quality is of great concern to human health as it can cause disease outbreaks. A standard practice today in developed countries is that people should be provided with safe, reliable drinking water, as safe drinking water is recognized as a basic human right and a cost-effective measure for reducing disease. Over 1.1 billion people worldwide lack access to a safe water supply, and as a result the majority are forced to use polluted surface water or groundwater. It is widely accepted that our water supply systems are susceptible to intentional or accidental contamination. Water quality degradation may occur anywhere along the path that water takes from the source to the consumer. Chlorine is believed to be an effective tool for disinfecting water, but its concentration may decrease with time as it is consumed by chemical reactions. We are therefore at risk of infection by waterborne diseases if the chlorine in water falls below the required level of 0.2-1 mg/liter that should be maintained and contaminants enter the water distribution system. The lack of adequate sanitation is also believed to contribute to the contamination of water globally. This study therefore assesses drinking water contamination from the source to the consumer by identifying the points vulnerable to contamination in the study area. To identify these points, water was sampled monthly from boreholes, the water treatment plant, the water distribution system (WDS), service reservoirs and consumer taps in all twenty (20) villages of the Palapye region. Sampled water was then taken to the laboratory for testing and analysis of microbiological and chemical parameters. The water quality analyses were then compared with the Botswana drinking water quality standards (BOS32:2009) to check compliance.
The major sources of water contamination identified during site visits were livestock, which were found drinking stagnant water from leaking pipes in 90 percent of the villages. The soil structure around these areas was negatively affected by livestock movement, as was the vegetation. In conclusion, the microbiological parameters of water in the study area do not comply with drinking water standards, and some of them indicate that livestock affect not only land degradation but also water quality. Chlorine has been applied to the water for some years, but it is not effective enough; preventative measures therefore have to be developed to stop contaminants from reaching the water. Prevention is better than cure.

Keywords: land degradation, leaking systems, livestock, water contamination

Procedia PDF Downloads 350
847 Building up Regional Innovation Systems (RIS) for Development: The Case Study of the State of Mexico, México

Authors: Jose Luis Solleiro, Rosario Castanon, Laura Elena Martinez

Abstract:

The State of Mexico is an administrative entity of Mexico and one of the most important territories due to its great economic and social impact on the whole country, especially since it contributes more than eight percent of the national Gross Domestic Product (GDP). The State of Mexico has a population of over seventeen million people and hosts very important business and productive industries such as automotive, chemicals, pharmaceuticals, and agri-food. In 2017, the State Development Plan (Plan Estatal de Desarrollo in Spanish), the policy document that governs the State's economic actions and lays the bases for sectoral and regional programs to achieve regional development, raised innovation as a key aspect to boost the competitiveness and productivity of the State of Mexico. In line with this proposal, in 2018 the Mexican Council for Science and Technology (COMECYT for its acronym in Spanish), an institution in charge of promoting public science and technology policies in the State of Mexico, took actions towards building up the State's innovation system. Hence, the main objective of this paper is to review and analyze the process of creating a RIS in the State of Mexico. We focus on the key elements of the process, the diverse actors involved, the activities carried out, and the identification of the challenges, findings, successes, and failures of the exercise. The methodology used to analyze the structure of the innovation system of the State of Mexico is based on two elements: a case study and a research-action approach. In line with the main objective of the paper, the case study was based on semi-structured interviews with key actors who participated in the process of launching the RIS of the State of Mexico. Additionally, we analyzed the information reports and other documents elaborated during the process of shaping the State's innovation system.
Finally, the results obtained in the process were also examined. The relevance of this investigation rests fundamentally on two elements: 1) keeping a documentary record of the process of building a RIS in Mexico; and 2) analyzing this case study while recognizing the importance of knowledge extraction and dissemination, so that lessons on this matter may be useful for similar experiences in the future. We conclude that in Mexico, documentation and analysis efforts related to the formation of RIS and to the interaction processes between innovation ecosystem actors are scarce, so documents like this one are of great importance, especially since they generate a series of findings and recommendations for the building of RIS.

Keywords: regional innovation systems, innovation, development, competitiveness

Procedia PDF Downloads 116
846 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method widely used by the data science community. It supports two main tasks: displaying results by coloring items according to their class or a feature value, and, in forensics, giving a first overview of a dataset's distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation in which a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding's shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, each time with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same per-embedding complexity as t-SNE, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing us to observe the birth, evolution, and death of clusters. The proposed approach facilitates the identification of significant trends and changes, supporting the monitoring of high-dimensional datasets' dynamics.
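The complexity claim above can be checked with a quick back-of-envelope sketch (ours, not the authors' code); the dataset size and subset count below are arbitrary illustrative values.

```python
# Sketch (not the paper's implementation): splitting a dataset of n points
# into k equal subsets, each embedded in turn against the previous
# (support) embedding, reduces the pairwise-affinity cost from O(n^2) to
# O(n^2 / k) and the working memory from n^2 to 2 * (n/k)^2 entries.
def pairwise_cost(n: int) -> int:
    return n * n  # t-SNE affinities are quadratic in the number of points

def indexed_cost(n: int, k: int) -> int:
    m = n // k                   # points per subset (assume k divides n)
    return k * pairwise_cost(m)  # k successive embeddings of n/k points

n, k = 12_000, 10
assert indexed_cost(n, k) == pairwise_cost(n) // k  # n^2 / k
memory_entries = 2 * (n // k) ** 2                  # current + support embedding
print(indexed_cost(n, k), memory_entries)
```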

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 141
845 Colonization of Non-Planted Mangrove Species in the “Rehabilitation of Aquaculture Ponds to Mangroves” Projects in China

Authors: Yanmei Xiong, Baowen Liao, Kun Xin, Zhongmao Jiang, Hao Guo, Yujun Chen, Mei Li

Abstract:

Conversion of mangroves to aquaculture ponds was a major cause of mangrove loss in Asian countries in the 20th century. Recently, the Chinese government set a goal to add 48,650 ha of mangroves (more than the current mangrove area) by 2025, and "rehabilitation of aquaculture ponds to mangroves" projects are considered the major pathway to increase the mangrove area of China. It remains unclear whether natural colonization is feasible and what the main influencing factors are for mangrove restoration in these projects. In this study, a total of 17 rehabilitation sites in Dongzhai Bay, Hainan, China were surveyed for vegetation, soil and surface elevation five years after the rehabilitation project was initiated. Colonization by non-planted mangrove species was found at all sites, and non-planted species dominated over planted species at 14 sites. Mangrove plants could only be found within the elevation range of -20 cm to 65 cm relative to mean sea level. Soil carbon and nitrogen contents of the top 20 cm were generally low, ranging between 0.2%-1.4% and 0.03%-0.09%, respectively, and at each site soil carbon and nitrogen were significantly lower at elevations with mangrove plants than at lower elevations without mangrove plants. Seven sites located upstream in river estuaries, where soil salinity was relatively lower and nutrients relatively higher, were dominated by non-planted Sonneratia caseolaris. Seven sites located downstream in river estuaries or in the inner part of the bay, where soil salinity and nutrients were intermediate, were dominated by the non-planted alien Sonneratia apetala. Another three sites located in the outer part of the bay, where soil salinity was higher and nutrients lower, were dominated by planted species (Rhizophora stylosa, Kandelia obovata, Aegiceras corniculatum and Bruguiera sexangula), with non-planted S. apetala and Avicennia marina also found.
The results suggest that natural colonization of mangroves is feasible in pond rehabilitation projects, given the restoration of tidal activity and appropriate elevations. Surface elevation is the major determinant of the success of mangrove rehabilitation, while soil salinity and nutrients are important in shaping vegetation structure. The colonization and dominance of an alien species (Sonneratia apetala in this case) at some rehabilitation sites poses an invasion risk, and caution should therefore be exercised when introducing alien mangrove species.

Keywords: coastal wetlands, ecological restoration, mangroves, natural colonization, shrimp pond rehabilitation, wetland restoration

Procedia PDF Downloads 133
844 Further Development of Offshore Floating Solar and Its Design Requirements

Authors: Madjid Karimirad

Abstract:

Floating solar was not well known in the renewable energy field a decade ago; however, there has been tremendous growth internationally, with a Compound Annual Growth Rate (CAGR) of nearly 30% in recent years. To reach the goal of global net-zero emissions by 2050, all renewable energy sources, including solar, should be used. Considering that 40% of the world's population lives within 100 kilometres of a coast, floating solar in coastal waters is an obvious energy solution. However, it requires more robust floating solar designs. This paper seeks to clarify the fundamental requirements in the design of floating solar for offshore installations from the hydrodynamic and offshore engineering points of view. In this regard, a closer look at the dynamic characteristics, stochastic behaviour and nonlinear phenomena appearing in this kind of structure is a major focus of the current article. Floating solar structures are alternative and very attractive green energy installations with (a) less strain on land usage for densely populated areas; (b) a natural cooling effect, with an efficiency gain; and (c) increased irradiance from the reflectivity of water. Floating solar in conjunction with hydroelectric plants can also optimise energy efficiency and improve system reliability. Co-locating floating solar units with other types of installation, such as offshore wind, wave energy and tidal turbines, as well as aquaculture (fish farming), can result in better use of ocean space and increase the synergies. Floating solar technology has seen considerable growth in installed capacity in the past decade. The development of design standards and codes of practice for floating solar technologies deployed both on inland water bodies and offshore is required to ensure robust and reliable systems that do not have detrimental impacts on the hosting water body. Floating solar is projected to account for 17% of all PV energy produced worldwide by 2030.
To support this development, further research in this area is needed. This paper discusses the main critical design aspects in light of the loads and load effects to which floating solar platforms are subjected. The key considerations in hydrodynamics and aerodynamics, and the simultaneous effects of wind and wave load actions, will be discussed. Linking dynamic nonlinear loading, limit states and the design space under the relevant environmental conditions will enable a better understanding of the design requirements of this fast-evolving floating solar technology.
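As a hedged aside, the compound growth quoted above works as follows; the function and figures below are purely illustrative, not installed-capacity data from this paper.

```python
# Illustrative sketch only: how a Compound Annual Growth Rate (CAGR) is
# computed, and what ~30% per year compounds to over a decade. The numbers
# are hypothetical, not data from this paper.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1.0 / years) - 1.0

growth_factor = 1.30 ** 10  # ten years at 30% CAGR
assert abs(cagr(1.0, growth_factor, 10) - 0.30) < 1e-12
print(f"Ten years at 30% CAGR multiplies capacity by {growth_factor:.1f}x")
```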

Keywords: floating solar, offshore renewable energy, wind and wave loading, design space

Procedia PDF Downloads 77
843 Foundations for Global Interactions: The Theoretical Underpinnings of Understanding Others

Authors: Randall E. Osborne

Abstract:

In a course on International Psychology, eight theoretical perspectives (Critical Psychology, Liberation Psychology, Post-Modernism, Social Constructivism, Social Identity Theory, Social Reduction Theory, Symbolic Interactionism, and Vygotsky’s Sociocultural Theory) are used as a framework for getting students to understand the concept of, and need for, globalization. One of critical psychology's main criticisms of conventional psychology is that it fails to consider, or deliberately ignores, the way power differences between social classes and groups can impact the mental and physical well-being of individuals or groups of people. Liberation psychology, also known as liberation social psychology or psicología social de la liberación, is an approach to psychological science that aims to understand the psychology of oppressed and impoverished communities by addressing the oppressive sociopolitical structures in which they exist. Postmodernism is largely a reaction to the assumed certainty of scientific, or objective, efforts to explain reality. It stems from a recognition that reality is not simply mirrored in human understanding of it but rather is constructed as the mind tries to make sense of its own particular and personal reality. Lev Vygotsky argued that all cognitive functions originate in, and must therefore be explained as products of, social interactions, and that learning is not simply the assimilation and accommodation of new knowledge by learners. Social Identity Theory discusses the implications of social identity for human interactions with, and assumptions about, other people. It suggests that people: (1) categorize: people find it helpful (humans might be perceived as having a need) to place people and objects into categories; (2) identify: people align themselves with groups and gain identity and self-esteem from them; and (3) compare: people compare themselves to others.
Social reductionism argues that all behavior and experience can be explained simply by the effect of groups on the individual. Symbolic interaction theory focuses attention on the way people interact through symbols: words, gestures, rules, and roles. Meaning evolves from humans' interactions with their environment and with other people. Vygotsky's sociocultural theory of human learning describes learning as a social process and locates the origin of human intelligence in society or culture. The major theme of Vygotsky's theoretical framework is that social interaction plays a fundamental role in the development of cognition. This presentation discusses how these theoretical perspectives are incorporated into a course on International Psychology, a course on the Politics of Hate, and a course on the Psychology of Prejudice, Discrimination and Hate to promote more 'global' thinking among students.

Keywords: globalization, international psychology, society and culture, teaching interculturally

Procedia PDF Downloads 250
842 Rheological Study of Chitosan/Montmorillonite Nanocomposites: The Effect of Chemical Crosslinking

Authors: K. Khouzami, J. Brassinne, C. Branca, E. Van Ruymbeke, B. Nysten, G. D’Angelo

Abstract:

The development of hybrid organic-inorganic nanocomposites has recently attracted great interest. Polymer silicates represent an emerging class of polymeric nanocomposites that offer superior material properties compared to either compound alone. Among these materials, complexes based on silicate clay and polysaccharides are some of the most promising nanocomposites. The strong electrostatic interaction between chitosan and montmorillonite can induce what is called a physical hydrogel, in which coordination bonds or physical crosslinks may associate and dissociate reversibly and in a short time. These mechanisms could be the main origin of the uniqueness of their rheological behavior. However, owing to their intrinsically heterogeneous structure and/or the lack of dissipated energy, such gels are usually brittle, possess poor toughness and may not have sufficient mechanical strength. Consequently, the properties of these nanocomposites cannot meet the requirements of many applications in several fields. To address the issue of weak mechanical properties, covalent chemical crosslinks can be introduced into the physical hydrogel. In this way, fairly homogeneous, dually crosslinked microstructures with high dissipated energy and enhanced mechanical strength can be engineered. In this work, we have prepared a series of chitosan-montmorillonite nanocomposites chemically crosslinked by the addition of poly(ethylene glycol) diglycidyl ether. This study aims to provide a better understanding of the mechanical behavior of dually crosslinked chitosan-based nanocomposites by relating it to their microstructures. In these systems, a variety of microstructures is obtained by modifying the number of crosslinks. Distinctive rheological properties of the chemically crosslinked chitosan-montmorillonite nanocomposites are thereby achieved, especially at the highest clay percentage.
Their rheological behavior depends on the clay/chitosan ratio and on the crosslinking. All specimens exhibit viscous rheological behavior over the frequency range investigated. The flow curves of the nanocomposites show a Newtonian plateau at very low shear rates, followed by a rather complicated nonlinear decrease with increasing shear rate. Crosslinking induces a shear thinning behavior, revealing the formation of network-like structures. Fitting the shear viscosity curves with the Ostwald-de Waele equation disclosed that crosslinking and clay addition strongly affect the pseudoplasticity of the nanocomposites for shear rates γ̇ > 20.
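The Ostwald-de Waele fit mentioned above can be sketched as follows. This is our illustration with synthetic data, not the authors' measurements, assuming the standard power-law form for the apparent viscosity, eta = K * gamma_dot**(n - 1), where n < 1 indicates shear thinning.

```python
import math

# Hedged sketch (not the authors' analysis): fit the Ostwald-de Waele
# power-law model, eta = K * gamma_dot**(n - 1), by linear regression in
# log-log space, where the slope equals n - 1 and the intercept is ln(K).
def fit_power_law(rates, viscosities):
    xs = [math.log(g) for g in rates]
    ys = [math.log(e) for e in viscosities]
    m = len(xs)
    x_bar, y_bar = sum(xs) / m, sum(ys) / m
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    K = math.exp(y_bar - slope * x_bar)
    return K, slope + 1.0  # slope = n - 1

# Synthetic shear-thinning data: K = 5, n = 0.4 (hypothetical values).
rates = [1.0, 5.0, 20.0, 50.0, 100.0]
visc = [5.0 * g ** (0.4 - 1.0) for g in rates]
K, n = fit_power_law(rates, visc)
assert abs(K - 5.0) < 1e-6 and abs(n - 0.4) < 1e-6
```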

Keywords: chitosan, crosslinking, nanocomposites, rheological properties

Procedia PDF Downloads 144
841 Sequential Padding: A Method to Improve the Impact Resistance in Body Armor Materials

Authors: Ankita Srivastava, Bhupendra S. Butola, Abhijit Majumdar

Abstract:

Application of shear thickening fluid (STF) has been shown to increase the impact resistance of textile structures, allowing their use as body armor materials. In the present research, STF was applied to Kevlar woven fabric to make the structure lightweight and flexible while improving its impact resistance. It was observed that achieving a fair add-on of STF on Kevlar fabric is difficult, as Kevlar fabric comes with a PTFE pre-coating that hinders its absorbency. Hence, a method termed sequential padding was developed in the present study to improve the add-on of STF on Kevlar fabric. Contrary to the conventional process, in which Kevlar fabric is treated with STF once at a single pressure, in the sequential padding method the Kevlar fabrics were treated twice in sequence, using a combination of two pressures for each sample. 200 GSM Kevlar fabrics were used in the present study. STF was prepared by dispersing nano-silica in PEG at a 70% (w/w) concentration. Ethanol was added to the STF at a fixed ratio to reduce viscosity. A high-speed homogenizer was used to prepare the dispersion. A total of nine STF-treated Kevlar fabric samples were prepared using varying combinations and sequences of three padding pressures (0.5, 1.0 and 2.0 bar). The fabrics were dried at 80°C for 40 minutes in a hot air oven to evaporate the ethanol. Untreated and STF-treated fabrics were tested for add-on%. The impact resistance of the samples was also tested on a dynamic impact tester at a fixed velocity of 6 m/s. Further, to assess impact resistance under realistic conditions, a low-velocity ballistic test at 165 m/s was also performed to confirm the results of the impact resistance test. It was observed that both add-on% and impact energy absorption of Kevlar fabrics increase significantly with the sequential padding process as compared to the untreated fabric as well as the single-stage padding process.
It was also determined that impact energy absorption is significantly better in STF-treated Kevlar fabrics when the first padding pressure is higher and the second padding pressure is lower. Sequentially padded Kevlar fabric shows an almost 125% increase in ballistic impact energy absorption (40.62 J) compared to untreated fabric (18.07 J). These results arise because treatment at high pressure during the first padding is responsible for a uniform distribution of STF within the fabric structure, while padding at the second, lower pressure ensures a high add-on of STF for overall improvement in the impact resistance of the fabric. It is therefore concluded that the sequential padding process may help improve the impact performance of body armor materials based on STF-treated Kevlar fabrics.
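The "almost 125%" figure can be verified directly from the two reported energies:

```python
# Arithmetic check of the reported result: ballistic energy absorption
# rising from 18.07 J (untreated) to 40.62 J (sequentially padded) is an
# increase of roughly 125%.
untreated_j, treated_j = 18.07, 40.62
increase_pct = (treated_j - untreated_j) / untreated_j * 100.0
assert round(increase_pct) == 125  # ~124.8%, i.e. "almost 125%"
print(f"Increase: {increase_pct:.1f}%")
```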

Keywords: body armor, impact resistance, Kevlar, shear thickening fluid

Procedia PDF Downloads 238
840 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations require intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to meet customer demands in terms of crystal feel and shine. The applicability to crystal processing of different computerized finishing technologies is investigated, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation focuses on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The impulse excitation test estimates the Young's modulus, shear modulus and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed following the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process using ANOVA, seeking the best roughness at cutting forces that do not compromise the material structure or the tool life. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
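For reference, the Vickers hardness number used in the characterization follows the standard relation HV = 1.8544 * F / d², with the load F in kgf and the mean indentation diagonal d in mm. The load and diagonal in the sketch below are hypothetical, not measurements from this work.

```python
# Hedged illustration (values hypothetical): the Vickers hardness number
# is computed from the indenter load F (kgf) and the mean indentation
# diagonal d (mm) as HV = 1.8544 * F / d**2, the standard relation the
# Vickers test relies on.
def vickers_hardness(load_kgf: float, diagonal_mm: float) -> float:
    return 1.8544 * load_kgf / diagonal_mm ** 2

hv = vickers_hardness(1.0, 0.05)  # 1 kgf load, 50-micrometre mean diagonal
assert abs(hv - 741.76) < 0.01
```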

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 152
839 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method

Authors: Emmanuel Ophel Gilbert, Williams Speret

Abstract:

The control of the flow behaviour of viscous fluids and of the heat transfer occurring within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved by means of a numerical simulation technique. This numerical technique is further validated by comparing the available practical values with the predicted local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this had a pro rata effect on the minor and major frictional losses, mostly at very low Reynolds numbers, circa 60-80. At these low Reynolds numbers, the viscosity and the minute frictional losses decrease as the temperature of the viscous liquids increases. Three equations and models were identified which, supporting the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, yield a reliable basis for engineering and technology calculations for turbulence-impacting jets in the near future. In searching for a governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be pertinent to this problem, though other physical factors associated with the Navier-Stokes equations must be checked to avoid unpredicted turbulence of the fluid flow.
This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; this, however, required dropping certain assumptions in the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
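The low-Reynolds-number regime discussed above can be illustrated with the standard channel Reynolds number, Re = rho * v * D_h / mu. The property values in this sketch are hypothetical, warm-engine-oil-like figures chosen to land in the circa 60-80 range; they are not data from this paper.

```python
# Hedged sketch (hypothetical values, not the paper's data): the channel
# Reynolds number Re = rho * v * D_h / mu stays low for viscous liquids
# such as engine oil flowing through a mini-channel, keeping the flow
# laminar.
def reynolds(rho: float, velocity: float, hydraulic_diameter: float,
             dynamic_viscosity: float) -> float:
    return rho * velocity * hydraulic_diameter / dynamic_viscosity

re = reynolds(rho=870.0,                # kg/m^3, oil-like density
              velocity=1.0,             # m/s
              hydraulic_diameter=0.002, # 2 mm mini-channel
              dynamic_viscosity=0.025)  # Pa.s, warm-oil-like viscosity
assert 60 < re < 80  # within the low-Re band discussed in the abstract
print(f"Re = {re:.1f}")
```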

Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids

Procedia PDF Downloads 176
838 Triazenes: Unearthing Their Hidden Arsenal Against Malaria and Microbial Menace

Authors: Frans J. Smit, Wisdom A. Munzeiwa, Hermanus C. M. Vosloo, Lyn-Marie Birkholtz, Richard K. Haynes

Abstract:

Malaria and antimicrobial infections remain significant global health concerns, necessitating the continuous search for novel therapeutic approaches. This abstract presents an overview of the potential use of triazenes as effective agents against malaria and various antimicrobial pathogens. Triazenes are a class of compounds characterized by a linear arrangement of three nitrogen atoms, rendering them structurally distinct from their cyclic counterparts. This study investigates the efficacy of triazenes against malaria and explores their antimicrobial activity. Preliminary results revealed significant antimalarial activity of the triazenes, as evidenced by in vitro screening against P. falciparum, the causative agent of malaria. Furthermore, the compounds exhibited broad-spectrum antimicrobial activity, indicating their potential as effective antimicrobial agents. These compounds have shown inhibitory effects on various essential enzymes and processes involved in parasite survival, replication, and transmission. The mechanism of action of triazenes against malaria involves interactions with critical molecular targets, such as enzymes involved in the parasite's metabolic pathways and proteins responsible for host cell invasion. The antimicrobial activity of the triazenes against bacteria and fungi was investigated through disc diffusion screening. The antimicrobial efficacy of triazenes has been observed against both Gram-positive and Gram-negative bacteria, as well as multidrug-resistant strains, making them potential candidates for combating drug-resistant infections. Furthermore, triazenes possess favourable physicochemical properties, such as good stability, solubility, and low toxicity, which are essential for drug development. The structural versatility of triazenes allows for the modification of their chemical composition to enhance their potency, selectivity, and pharmacokinetic properties. 
These modifications can be tailored to target specific pathogens, increasing the potential for personalized treatment strategies. In conclusion, this study highlights the potential of triazenes as promising candidates for the development of novel antimalarial and antimicrobial therapeutics. Further investigations are necessary to determine the structure-activity relationships and optimize the pharmacological properties of these compounds. The results warrant additional research, including MIC studies, to further explore the antimicrobial activity of the triazenes. Ultimately, these findings contribute to the development of more effective strategies for combating malaria and microbial infections.

Keywords: malaria, anti-microbials, triazene, resistance

Procedia PDF Downloads 100
837 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints is largely inconsistent across bridge design codes, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analyses are performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states. 
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states; at higher PGA levels, they show a higher fragility than the other curves. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
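As a minimal, hedged sketch of how such component fragility curves are commonly constructed (a standard lognormal-CDF fit to demand-to-capacity exceedance data; not necessarily the authors' exact procedure, and all parameter values below are illustrative):

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def empirical_fragility(dc_ratios):
    """Fraction of simulations with demand/capacity >= 1 at each PGA level.

    dc_ratios : (n_levels, n_simulations) array of demand-to-capacity ratios.
    """
    return (np.asarray(dc_ratios) >= 1.0).mean(axis=1)

def fit_lognormal_fragility(pga, exceed_prob):
    """Fit P(damage | PGA) = Phi(ln(PGA / theta) / beta), a lognormal CDF.

    theta is the median capacity in units of PGA [g]; beta is the dispersion.
    """
    def lognorm_cdf(x, theta, beta):
        return stats.norm.cdf(np.log(x / theta) / beta)

    (theta, beta), _ = curve_fit(lognorm_cdf, pga, exceed_prob,
                                 p0=(0.5, 0.5), bounds=(1e-6, np.inf))
    return theta, beta
```

The fitted (theta, beta) pair then defines one component fragility curve per damage state; system curves can be assembled by combining component exceedance probabilities.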

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 434
836 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator

Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi

Abstract:

Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when they are operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear (γ, n) giant dipole resonance (GDR) reactions. These reactions occur when incoming photons or electrons strike the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. This work studies the effect of different parameters on the production of neutrons around the treatment room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H Neutron Detector) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside of the treatment room. Measurements were performed for different field sizes at a 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the primary and secondary walls. Other measurements were performed at the door and at the treatment console to address the radiation safety concerns of the therapists who must walk in and out of the room for the treatments. Exposures were delivered by Elekta Synergy® linear accelerators at two different energies (10 MV and 18 MV) for 200 MU at a dose rate of 600 MU per minute. 
Results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, namely the jaw settings: the jaws are made of high-atomic-number material and therefore provide significant photon interactions that produce neutrons. Doses at larger distances from the isocenter are strongly influenced by the treatment-room geometry, and backscattering from the walls causes greater doses than at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv.h−1 to 13.2 mSv.h−1 (at the isocenter), 6.21 mSv.h−1 to 29.2 mSv.h−1 (primary wall) and 8.73 mSv.h−1 to 37.2 mSv.h−1 (secondary wall) for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons at the door is 5 μSv.h−1 to 2 μSv.h−1, while at the treatment console it is 2 μSv.h−1 to 0 μSv.h−1 for 10 and 18 MV, respectively, which shows that a 2 m thick and 5 m long concrete maze provides sufficient shielding for neutrons at the door as well as at the treatment console for 10 and 18 MV photons.

Keywords: equivalent doses, neutron contamination, neutron detector, photon energy

Procedia PDF Downloads 448
835 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, are the basis of the practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures, some considered art forms from the beginning (i.e., the text), others represented by simple, common objects (e.g., scenographic elements), which, if reunited, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which they interact. It is necessary to explain some notions about the physiological support of the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center of extrinsic and intrinsic stimuli, permanently processes the information received; even if that information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the “correctness” of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether we refer to visual or auditory emotions, involves the integration of three areas of interest: valence, arousal, and context control. 
In this context, superior human cognitive processes (memory, interpretation, learning, the attribution of meanings, etc.) help trigger the mechanism of anticipation and, no less importantly, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental to the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception, and its minimum content, is generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centered on the percentage of each category of signs used in a theater performance.
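The percentage-based analysis of sign categories can be sketched minimally as follows (the sign labels are hypothetical placeholders, not the authors' taxonomy, and this reflects only the counting step, not their full analysis):

```python
from collections import Counter

def sign_percentages(signs):
    """Percentage share of each sign category in a performance's sign inventory.

    signs : sequence of category labels, one per sign used in the performance.
    """
    counts = Counter(signs)
    total = sum(counts.values())
    return {sign: 100.0 * n / total for sign, n in counts.items()}
```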

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 88
834 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g., magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from an impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. 
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are applied independently to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to more complicated profiles. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
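As a hedged illustration of Tikhonov-regularized deconvolution of this kind (assuming a single known impulse response and one impact location; the discretization, the synthetic signals, and the regularization parameter are illustrative, not the authors' exact formulation):

```python
import numpy as np
from scipy.linalg import toeplitz

def convolution_matrix(h, n):
    """Lower-triangular Toeplitz matrix H such that H @ f == np.convolve(h, f)[:n]."""
    col = np.zeros(n)
    m = min(n, len(h))
    col[:m] = np.asarray(h, float)[:m]
    return toeplitz(col, np.zeros(n))

def tikhonov_deconvolve(h, y, lam):
    """Reconstruct the force f from the measured response y = H @ f.

    Solves min ||H f - y||^2 + lam^2 ||f||^2 (Tikhonov regularization)
    via its normal equations; lam trades noise suppression for bias.
    """
    n = len(y)
    H = convolution_matrix(h, n)
    return np.linalg.solve(H.T @ H + lam**2 * np.eye(n), H.T @ y)
```

In practice lam would be chosen by the L-curve or GCV criteria mentioned above, and the half-sine test force below mirrors the hammer-force shapes the abstract describes.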

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 533
833 Purchasing Decision-Making in Supply Chain Management: A Bibliometric Analysis

Authors: Ahlem Dhahri, Waleed Omri, Audrey Becuwe, Abdelwahed Omri

Abstract:

In industrial processes, decision-making ranges across different scales, from process control to supply chain management. The purchasing decision-making process in the supply chain is presently gaining more attention as a critical contributor to a company's strategic success. Given the scarcity of thorough summaries in prior studies, this bibliometric analysis adopts a meticulous approach to building quantitative knowledge of the constantly evolving subject of purchasing decision-making in supply chain management. Through bibliometric analysis, we examine a sample of 358 peer-reviewed articles from the Scopus database. VOSviewer and Gephi software were employed to analyze, combine, and visualize the data. Data analytic techniques, including citation networks, page-rank analysis, co-citation, and publication trends, were used to identify influential works and outline the discipline's intellectual structure. The outcomes of this descriptive analysis highlight the most prominent articles, authors, journals, and countries based on their citations and publications. The findings illustrate an increase in the number of publications, exhibiting a slightly growing trend in this field. Co-citation analysis, coupled with content analysis of the most cited articles, identified five research themes: (1) integrating sustainability into the supplier selection process; (2) supplier selection under disruption risks, including assessment and mitigation strategies; (3) fuzzy MCDM approaches for supplier evaluation and selection; (4) purchasing decisions in vendor problems; and (5) decision-making techniques in supplier selection and order lot sizing problems. With the help of a graphic timeline, this exhaustive map of the field gives a visual representation of the evolution of publications, demonstrating a gradual shift of research interest from vendor selection problems to integrating sustainability into the supplier selection process. 
These clusters offer insights into the wide variety of purchasing methods and conceptual frameworks that have emerged; however, they have not been validated empirically. The findings suggest that future research should provide greater depth of practical and empirical analysis to enrich these theories. These outcomes provide a powerful road map for further study in this area.
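Co-citation counting, one of the bibliometric techniques used above, can be sketched as follows (the reference identifiers are hypothetical; tools such as VOSviewer build their co-citation networks from counts of this kind):

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(reference_lists):
    """Count, for every pair of references, how many papers cite both.

    reference_lists : iterable of reference lists, one list per citing paper.
    Returns a Counter mapping sorted (ref_a, ref_b) pairs to co-citation counts;
    frequently co-cited pairs indicate intellectually related works.
    """
    pairs = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs
```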

Keywords: bibliometric analysis, citation analysis, co-citation, Gephi, network analysis, purchasing, SCM, VOSviewer

Procedia PDF Downloads 84
832 The Principal-Agent Model with Moral Hazard in the Brazilian Innovation System: The Case of 'Lei do Bem'

Authors: Felippe Clemente, Evaldo Henrique da Silva

Abstract:

The need to adopt some type of industrial and innovation policy in Brazil is a recurring theme in the discussion of public interventions aimed at boosting economic growth. For many years, the country has adopted various policies to change its productive structure in order to increase the participation of sectors that would have the greatest potential to generate innovation and economic growth. Only in the 2000s did Brazil begin to adopt tax incentives as a policy to support industrial and technological innovation, a phenomenon associated with rates of productivity growth and economic development. In this context, in late 2004 and 2005, Brazil reformulated its institutional apparatus for innovation in order to approach the OECD conventions and the Frascati Manual. The Innovation Law (2004) and the 'Lei do Bem' (2005) reduced some institutional barriers to innovation, provided incentives for university-business cooperation, and modified access to tax incentives for innovation. Chapter III of the 'Lei do Bem' (no. 11,196/05) is currently the most comprehensive fiscal incentive to stimulate innovation. It complies with the requirement stipulating that the Union should encourage innovation in companies and industry by granting tax incentives. With its introduction, the bureaucratic procedure was simplified by not requiring pre-approval of projects or participation in bidding documents. However, preliminary analysis suggests that this instrument has not yet been able to stimulate the sectoral diversification of these investments in Brazil, since its benefits are mostly captured by sectors that have already developed this activity, thus showing problems with moral hazard. It is necessary, then, to analyze the 'Lei do Bem' to determine whether some change is indeed needed, investigating what changes should be implemented in Brazilian innovation policy. 
This work is therefore a first effort to analyze a current national problem, evaluating the effectiveness of the 'Lei do Bem' and suggesting public policies that help and direct the State in the elaboration of legislation capable of encouraging agents to follow what it describes. As a preliminary result, it is known that 130 firms used fiscal incentives for innovation in 2006, 320 in 2007 and 552 in 2008. Although this number is rising, it is still small, considering that around 6,000 firms perform research and development (R&D) activities in Brazil. Moreover, another obstacle to the 'Lei do Bem' is the percentage of tax incentives provided to companies. These percentages reveal a significant sectoral correlation between the R&D expenditures of large companies and the R&D expenses of companies that accessed the 'Lei do Bem', reaching a correlation of 95.8% in 2008. Given these results, it becomes relevant to investigate the law's ability to stimulate private investment in R&D.

Keywords: brazilian innovation system, moral hazard, R&D, Lei do Bem

Procedia PDF Downloads 337
831 Enhanced Photocatalytic Activities of TiO2/Ag2O Heterojunction Nanotubes Arrays Obtained by Electrochemical Method

Authors: Magdalena Diaka, Paweł Mazierski, Joanna Żebrowska, Michał Winiarski, Tomasz Klimczuk, Adriana Zaleska-Medynska

Abstract:

In recent years, TiO2 nanotubes have been widely studied due to their unique highly ordered array structure, unidirectional charge transfer and higher specific surface area compared to conventional TiO2 powder. These photoactive materials, in the form of thin layers, can be activated by low-powered and low-cost irradiation sources (such as LEDs) to remove VOCs and microorganisms and to deodorize air streams. This is possible due to their direct growth on a support material and their high surface area, which guarantee enhanced photon absorption together with extensive adsorption of reactant molecules on the photocatalyst surface. TiO2 nanotubes also exhibit many other attractive properties, such as potential enhancement of electron percolation pathways, light conversion, and ion diffusion at the semiconductor-electrolyte interface. Pure TiO2 nanotubes were previously used to remove organic compounds from the gas phase as well as in the water splitting reaction. The major factors limiting the use of TiO2 nanotubes, which have not been fully overcome, are their relatively large band gap (3–3.2 eV) and the high recombination rate of photogenerated electron–hole pairs. Many different strategies have been proposed to solve this problem; however, titania nanostructures containing incorporated metal oxides such as Ag2O show very promising new optical and photocatalytic properties. Unfortunately, there is still a very limited number of reports on the application of TiO2/MxOy nanostructures. In the present work, we prepared TiO2/Ag2O nanotubes by anodization of Ti-Ag alloys containing 5, 10 and 15 wt. % Ag. Photocatalysts prepared in this way were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), luminescence spectroscopy and UV-Vis spectroscopy. 
The activities of the new TiO2/Ag2O nanotubes were examined by the photocatalytic degradation of toluene in the gas phase and of phenol in the aqueous phase, using a 1000 W xenon lamp (Oriel) and light-emitting diodes (LEDs) as irradiation sources. Additionally, the efficiency of bacteria (Pseudomonas aeruginosa) removal from the gas phase was estimated. The number of surviving bacteria was determined by the serial twofold dilution microtiter plate method in Tryptic Soy Broth medium (TSB, GibcoBRL).

Keywords: photocatalysis, antibacterial properties, titania nanotubes, new TiO2/MxOy nanostructures

Procedia PDF Downloads 291